Transmembrane Shuttling of Photosynthetically Produced Electrons to Propel Extracellular Biocatalytic Redox Reactions in a Modular Fashion

Abstract
Many biocatalytic redox reactions depend on the cofactor NAD(P)H, which may be provided by dedicated recycling systems. Exploiting light and water for NADPH regeneration, as performed, e.g., by cyanobacteria, is conceptually very appealing due to its high atom economy. However, the current use of cyanobacteria is limited, e.g., by the challenging and time-consuming heterologous enzyme expression in cyanobacteria as well as by limitations of substrate or product transport through the cell wall. Here we establish a transmembrane electron shuttling system propelled by cyanobacterial photosynthesis to drive extracellular NAD(P)H-dependent redox reactions. The modular photo-electron shuttling (MPS) overcomes the need for cloning and problems associated with enzyme or substrate toxicity and substrate uptake. The MPS was demonstrated on four classes of enzymes with 19 enzymes and various types of substrates, reaching conversions of up to 99% and giving products with >99% optical purity.

Source of organisms
Synechocystis sp. PCC 6803 wild-type (substrain Kazusa, geographical origin in California, USA) was received from Prof. Tamagnini at the University of Porto. [1][2][3] The wild-type and recombinant Synechococcus elongatus PCC 7942 (alcohol dehydrogenase from L. kefir under the PpsbA1 promoter and a spectinomycin resistance cassette integrated into neutral site 1 (NS1) of the chromosome) were received from Prof. Wangikar at the Indian Institute of Technology Bombay. [4]

3.4 Enzymes
The enzymes used in this study are listed in Table S11.

Table S11. Enzymes used in this study.

Expression of enzymes in E. coli
The enzymes used in this study were expressed in E. coli BL21(DE3) or its derivatives (Table S13).

Transformation
Plasmids (100 ng) were mixed with chemically competent E. coli BL21(DE3) cells (100 µL), rested on ice for 30 minutes and heat-shocked for 30 seconds at 42 °C. SOC medium (200 µL) was added, and the transformed cells were incubated for 1 h at 37 °C and 300 rpm. The cells were then plated on an LB-agar plate supplemented with the corresponding antibiotic (ampicillin, 100 µg mL-1, or kanamycin, 50 µg mL-1) and incubated overnight at 37 °C.

Cultivation
Overnight cultures (ONC) were prepared in LB medium (10 mL) supplemented with the corresponding antibiotic at 30 °C and 120 rpm. The ONCs were then used for the inoculation of sterile medium (1% v/v) supplemented with the corresponding antibiotic. Precultures were incubated until an OD600 of 0.6 was reached, then protein expression was induced and the cells were incubated further. Detailed conditions for every enzyme can be found in Table S13.

Harvesting
To harvest the cells, cultures were centrifuged (3184 g, 20 min, 4 °C), the cell pellet was suspended in wash buffer (1-2.5 g cells per 10 mL phosphate buffer, 10 mM, pH 7) and centrifuged again under the same conditions.

Preparation of cell-free extracts
Cell pellets were suspended in lysis buffer and sonicated on ice (for conditions see Table S12). The sonicated cells were centrifuged for 25 minutes at 17 000 g and 4 °C. The supernatant (cell-free extract) was shock-frozen (liquid nitrogen) inside a round-bottom flask, lyophilized, and stored at -20 °C. The pH of the buffers was adjusted using hydrochloric acid (HCl) and sodium hydroxide (NaOH).
SDS-PAGE
Protein concentrations of cell pellets and supernatants were determined via a Bradford assay. Volumes equivalent to 15 µg of protein were mixed with Laemmli sample buffer (1:1) and heated to 95 °C for 5 minutes. Prepared samples and a marker (PageRuler Prestained Protein Ladder; 7 µL) were loaded onto a 10% SDS-PAGE gel (100 V, MOPS buffer). The gel was stained overnight using Coomassie Quick Stain and afterwards destained with deionized water.

Preparation of purified enzymes
Buffers and sonication conditions used are listed in Table S14.

Cell lysis
Harvested cell pellets were resuspended in binding buffer (ene-reductases were supplemented with a spatula tip of FMN) and sonicated on ice (digital sonifier, Branson). The cell suspension was centrifuged (20 min, 18 000 g, 4 °C) and the supernatant was filtered (0.45 µm syringe filter) and stored on ice.

Column regeneration
The column was regenerated with sodium hydroxide (10 mM NaOH in water). The NaOH was removed immediately by washing the column twice with four column volumes of binding buffer. The column was stored at 4 °C.

His-tag purification
LkADH and the ene-reductases carrying a His6-tag were purified by immobilized metal ion affinity chromatography (HisTrap FF, 5 mL, GE Healthcare). The purification was performed at 4 °C at a flow rate of 5 mL min-1. Prior to loading onto the HisTrap FF column equilibrated in binding buffer, the soluble fraction was filtered through a 0.45 μm syringe filter. After loading, the column was washed with binding buffer according to the manual. Then, the enzyme was eluted using elution buffer. All purification steps were verified by SDS-PAGE.

Storage
The volume of the fractions containing the enzyme (colored yellow or visualized with Bradford reagent) was reduced to 2.5 mL using a Vivaspin® 20 mL ultrafiltration unit (Sartorius). Then, the buffer was exchanged to storage buffer using a Sephadex G-25 PD-10 desalting column (GE Healthcare). The final enzyme solution was aliquoted and stored at -20 °C. The pH of the buffers was adjusted using hydrochloric acid (HCl) and sodium hydroxide (NaOH). Buffers used for purification were degassed by ultrasonication and filtered with a Steritop® filter (Millipore Express® PLUS, 0.22 µm PES membrane).

Cultivation of cyanobacteria
Cyanobacteria were cultivated in BG11 medium. [1] Seed cultures of recombinant Synechococcus elongatus were grown in the presence of spectinomycin (100 µg mL-1).

Determination of the cell dry weight and chlorophyll a content
The cell dry weight and the amount of chlorophyll a were determined from samples originating from at least three independent cultivations under growth conditions for working cultures, each measured in triplicate.

Cell dry weight
Working cultures grown and harvested as above were shock-frozen in liquid nitrogen, lyophilized overnight and weighed in three independent experiments.

Chlorophyll a content
The chlorophyll a was determined as described. [32] A sample of the cell culture (100 μL) was mixed with cold methanol (900 μL) and incubated in darkness at 4 °C for 2-3 hours or overnight. Then, the samples were centrifuged for 3 min at 14 000 g and the absorption of the supernatant was measured at 665 nm. The amount of chlorophyll was determined using the extinction coefficient ε = 78.74 L g-1 cm-1 according to Eq. S1, where the dilution factor corresponds to 10.
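Eq. S1 itself is not reproduced in this excerpt; it presumably takes the standard Beer-Lambert form sketched below (the symbol names and the 1 cm path length are our assumptions):

\[
c_{\mathrm{Chl}\,a} = \frac{A_{665}}{\varepsilon \cdot d}\cdot DF,
\qquad \varepsilon = 78.74\ \mathrm{L\,g^{-1}\,cm^{-1}},\quad d = 1\ \mathrm{cm},\quad DF = 10
\]

For example, a hypothetical absorbance of A665 = 0.20 would correspond to a chlorophyll a concentration of 0.20/78.74 × 10 ≈ 0.025 g L-1 in the original culture.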
The vials were incubated in a custom photoreactor [33] equipped with cool-white LEDs (LED strips, 5200 K) for the indicated reaction time (16 h), with shaking at 600 rpm and at room temperature, at an average light intensity of 215 µE m-2 s-1 (photoreactor settings: frequency 100 Hz, duty range 100, duty cycle 5), or covered with aluminum foil for dark reactions. The workup and analytics are described in section 4 (Tables S17 and S18). The reaction mixtures containing different shuttle-molecule pairs were extracted twice with ethyl acetate (400 μL, 250 μL), dried over anhydrous sodium sulphate and analyzed by GC-MS.
Towards Optimal Sustainable Energy Systems in Nordic Municipalities

Abstract: Municipal energy systems in the northern regions of Finland, Norway, and Sweden face multiple challenges: large-scale industries, a cold climate, and a high share of electric heating characterize energy consumption and cause significant peak electricity demand. Local authorities are committed to contributing to national goals on CO2 emission reductions by improving energy efficiency and investing in local renewable electricity generation, while considering their own objectives of economic development, increased energy self-sufficiency, and affordable energy costs. This paper formulates a multi-objective optimization problem around these goals, which is solved by interfacing the energy systems simulation tool EnergyPLAN with a multi-objective evolutionary algorithm implemented in Matlab. A sensitivity analysis on some key economic parameters is also performed. In this way, optimal alternatives are identified for the integrated electricity and heating sectors, and valuable insights are offered to decision-makers in local authorities. Piteå (Norrbotten, Sweden) is used as a case study that is representative of Nordic municipalities, and the results show that CO2 emissions can be reduced by 60% without a considerable increase in total costs and that peak electricity import can be reduced by a maximum of 38%.

Introduction
The UNFCCC Paris Agreement aims at strengthening global responses to limit the increase of the global average temperature to 1.5 °C above pre-industrial levels. In the EU climate and energy framework, the targets set for 2030 on greenhouse gas (GHG) emission reduction, renewable energy share, and energy efficiency improvement are 40% (from 1990 levels), 32%, and 32.5%, respectively [1]. Sweden's binding 2030 target for GHG emission reduction was set at 40% relative to 2005 levels [2]. Global energy-related CO2 emissions continue to increase and rose by 1.7% to 33.1 GtCO2 in 2018 [3]. Sweden's territorial CO2 emissions from electricity generation and heating dropped from 9.3 MtCO2 in 2005 to 5.38 MtCO2 in 2017 [4].

The project Arctic Energy, funded through the EU program "Interreg Nord," was implemented by partner institutions from Finland, Sweden, and Norway between 2016 and 2018, and it provided energy systems analyses and simulations of future scenarios in order to support decision-makers of Nordic municipalities in their efforts to achieve GHG emission targets and increase energy self-sufficiency, while ensuring secure and affordable energy supply [5]. This work continues the studies performed during the Arctic Energy project and investigates optimal alternatives for sustainable energy systems in Nordic municipalities, with a focus on the electricity and heating sectors. Studies on Nordic municipal energy systems generally cover the heating sector, typically presenting the optimization of district heating (DH) systems; they include combined heat and power solutions, integration with industrial excess heat supply, solar heating, and thermal energy storage.

The paper is structured as follows. Section 2 describes the methodology applied, the simulation and optimization tools, and the key parameters of the municipal energy system model. Section 3 outlines the case study of Piteå, and Section 4 presents and discusses the main features of the optimal alternatives for the integrated electricity and heating sectors.
Finally, Section 5 provides concluding remarks and suggestions for further research.

Methodology
The deterministic simulation tool EnergyPLAN is used in this paper to model the electricity and heating sectors of the energy system of a Nordic municipality. EnergyPLAN executes a techno-economic analysis simulating the interactions between the modeled sectors on an hourly basis for a period of one year in very short processing times [37,48]. For a detailed description of EnergyPLAN, the reader is referred to the documentation available at [34]. This section defines the multi-objective optimization problem (MOOP) and describes the applied MOEA and the interface that has been implemented to exchange information with EnergyPLAN. Moreover, the values of key parameters of the energy model are discussed, such as electricity price, biomass price, discount rate, technology costs, and grid emission factors.

Objective Functions
The objective functions to be minimized are the total annual system costs Ctot and the system CO2 emissions Emsys of the electricity and heating sectors of a municipal energy system. The total annual system costs Ctot calculated by EnergyPLAN are the sum of:
• the annualized capital cost Cann of each component in the modelled energy system, which considers capital cost, expected lifetime, discount rate dr, and fixed operation and maintenance (O&M) costs;
• variable O&M costs;
• fuel costs, which depend on the biomass price pbio;
• costs/revenues from the import/export of electricity from/to the grid, both calculated with the spot price pel.

The system CO2 emissions Emsys calculated by EnergyPLAN are the sum of (both objectives are formalized in the sketch after this list):
• CO2 emissions due to the electricity imported from the national grid, Emimp, considering a grid emission factor EFgrid;
• CO2 emissions due to fossil fuel use within the boundaries of the municipal energy system.
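For orientation, the two objective functions just defined can be written compactly as below; this is our reconstruction from the verbal definitions above, not notation taken from the paper:

\[
C_{tot} = \sum_{i} C_{ann,i} + C^{var}_{O\&M} + C_{fuel}(p_{bio}) + \sum_{h=1}^{8760} p_{el}(h)\,\bigl[E_{imp}(h) - E_{exp}(h)\bigr]
\]
\[
Em_{sys} = Em_{imp} + Em_{fossil} = EF_{grid}\sum_{h=1}^{8760} E_{imp}(h) + \sum_{f} EF_{f}\,F_{f}
\]

where i runs over the system components (Cann,i already includes fixed O&M), h over the hours of the simulated year, and f over the fossil fuels consumed within the municipal boundaries.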
Decision Variables
The decision variables of the MOOP deal with the way in which the demands of the electricity and heating sectors of the municipal energy system are covered. The decision variables in the electricity sector are the installed capacities (in MW) of electricity generation technologies, such as hydropower, biomass-fired power, solar power, and wind power. Lower bounds for the considered ranges of these variables can be set to zero or to the existing installed capacities, whereas upper bounds can be set to limits depending, e.g., on the available areas identified in municipal master plans for local renewable electricity generation technologies. Within the considered ranges, the values of the installed capacities are discretized according to the typical capacity of single electricity generation devices, e.g., a single wind turbine. The decision variables in the heating sector are the amounts of heat (in GWh) supplied during a year by heating technologies in both district and individual heating systems, such as fossil fuel boilers, biomass boilers, electric boilers, and heat pumps. The lower bound for the ranges of these variables is zero, and the upper bound is the annual heating demand that has to be covered by these technologies.

Constraints
Satisfying the electricity and heating demands at any given time is the typical constraint in energy system modelling. EnergyPLAN calculates electric energy balances as a result of the given demands, component behaviors, and selected regulation strategies. It imports or exports electricity from/to the grid utilizing the defined transmission line and provides warnings when certain limits are exceeded. Peak electricity import Pel,maximp is an important output of the calculations, because Nordic countries have a high share of electric heating that results in extreme peak demand during the heating period. Pel,maximp is most relevant for municipalities in which transmission line capacities can become a bottleneck. The same bottleneck could limit the installation of new local electricity generation capacity, or could lead to curtailment, because of the increase of electricity export to the national grid. Peak electricity export Pel,maxexp is therefore another output to be monitored. The heating demand is balanced at all times, as the sum of the values of the decision variables related to the heating supply must equal the annual heating demand.

The Optimization Algorithm and the Interface with EnergyPLAN
The multi-objective evolutionary algorithm (MOEA) described in [46] was adapted to search for the optimal trade-offs of the defined MOOP. The main feature of this MOEA is a diversity-preserving mechanism that treats diversity as a meta-objective in the evaluation phase. In order to reduce the computational effort, a consolidation ratio [49] is adopted as the criterion to terminate the search process. The numerical results, including those presented in this paper, show that the solutions effectively converge toward the Pareto-optimal front and that they are well distributed. Figure 1 gives an overview of the interface between the MOEA and the EnergyPLAN modelling and simulation procedure, and of the wrapper software that has been developed for the exchange of information.
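The wrapper of Figure 1 is not listed in the paper (the MOEA itself runs in Matlab); the following minimal Python sketch only illustrates the general simulate-and-parse pattern of coupling a population-based optimizer to EnergyPLAN. The executable path, command-line flags, template placeholders, and output labels are all assumptions for illustration, not EnergyPLAN's documented interface.

import subprocess
from pathlib import Path

ENERGYPLAN = Path("C:/energyPLAN/energyPLAN.exe")  # assumed install location
TEMPLATE = Path("pitea_demand2020.txt")            # baseline EnergyPLAN input

def write_input(decision_vars, out_file):
    """Patch the baseline input file with one individual's decision variables
    (installed capacities in MW, annual heat supplies in GWh)."""
    text = TEMPLATE.read_text()
    for key, value in decision_vars.items():
        text = text.replace(f"<{key}>", str(value))  # placeholder scheme is ours
    out_file.write_text(text)

def parse_output(out_file):
    """Extract the two objective values from the ASCII results file
    (the field labels below are placeholders for the real output labels)."""
    values = {}
    for line in out_file.read_text().splitlines():
        if "TOTAL ANNUAL COSTS" in line.upper():
            values["Ctot"] = float(line.split()[-1])
        elif "CO2 EMISSION" in line.upper():
            values["Emsys"] = float(line.split()[-1])
    return values

def evaluate(decision_vars):
    """Run one EnergyPLAN simulation and return (Ctot, Emsys) for the MOEA."""
    inp, outp = Path("run.txt"), Path("run_out.txt")
    write_input(decision_vars, inp)
    # Hypothetical batch invocation; check the EnergyPLAN documentation
    # for the actual command-line syntax of the installed version.
    subprocess.run([str(ENERGYPLAN), "-i", str(inp), "-ascii", str(outp)], check=True)
    results = parse_output(outp)
    return results["Ctot"], results["Emsys"]

In each MOEA generation, evaluate() would be called once per individual, and the returned pair feeds the non-dominated sorting and the diversity meta-objective.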
Model Parameters and Uncertainties
Energy system analysis has to deal with multiple uncertainties related to technical, environmental, and economic parameters. In this study, sources of uncertainty for local renewable energy production include site-specific climate data and the capacity factors of the different technologies. Different CO2 emission accounting methods can lead to significant differences in emission factors. The estimation of costs is directly affected by economic parameters such as technology costs, electricity and fuel prices, and discount rates.

Electricity Price pel
Calculated costs for imported electricity and revenues for exported electricity are included in the economic objective function of the MOOP. These are based on the hourly electricity spot price pel for the Nordic electricity system as available from the NordPool power market [22]. The selection of pel values is based on the historical price trend of the Nordic power market, where the average annual electricity spot price has moved between a long-term low of 21 EUR/MWh in 2015 and 50 EUR/MWh by the end of 2018. The values considered for the annual average pel in the sensitivity analysis are therefore 20, 40, and 60 EUR/MWh.

Biomass Price pbio
The price of biomass (pellets) pbio affects the costs associated with the part of the heating sector that is not covered by district heating.

Discount Rate dr
Discounting in energy system analysis considers two perspectives: a social dr, for evaluating costs and benefits from a societal perspective, and an individual dr, for evaluating investment decisions [51]. The social dr applied in energy studies ranges between 1% and 7%, and the dr for industrial investors ranges from 6% to 15% [51,52]. In the sensitivity analysis of this study, the values considered for dr are 3%, 9%, and 15%.

Technology Costs
Technology cost parameters are available from the EnergyPLAN package [34]. The references of the EnergyPLAN cost database include the catalogues of energy technology data published by the Danish Energy Agency and Energinet, JRC technical reports, and the ETRI projections for 2010-2050 [53-55]. It is assumed in this study that the technology costs are sufficiently reliable, especially considering that all the technologies associated with the decision variables are mature.

Weather Conditions
The annual heating demand and the annual electricity generation from wind power, hydropower, and solar PV can vary significantly between years because of the weather conditions. In this work the weather data set used refers to a specific period of only one year (2015), although the study of selected years with extreme weather data would address uncertainties arising from unpredictable weather conditions.

Grid Emission Factor EFgrid
The European Joint Research Center (JRC) published National and European Emission Factors for Electricity consumption (NEEFE) [56,57]. JRC recommends that NEEFE be applied in the emission inventories of the municipalities that signed the EU Covenant of Mayors for Climate and Energy (CoM), a network of local governments committed to implementing EU climate and energy objectives. Most Swedish signatory municipalities historically created their emission inventories applying the EFgrid of the Nordic electricity mix, which considers the import dependencies of the Nordic electricity market in 2015 [58]. Currently no Swedish institution regularly updates EFgrid values for the Nordic electricity mix. Table 1 presents the different EFgrid values for Sweden in 2015. Different CO2 emission accounting methods lead to significant differences in EFgrid values. The energy system model in this paper neglects the CO2 emissions from local renewable energy production, as for CoM signatory municipalities all local renewable energy generation can be considered CO2-free [56], so that CO2 emissions are merely a consequence of the imported electricity Emimp according to the selected value of EFgrid.

The Piteå Case Study in EnergyPLAN
A case study of the Piteå municipal energy system was implemented by the authors in EnergyPLAN as part of the Arctic Energy project between 2016 and 2018 [5]. Piteå municipality, located in Norrbotten county of Sweden, participated in the project and can be considered a representative municipality in the Nordic context. Piteå became a signatory to the CoM in 2009, submitted its Sustainable Energy Action Plan (SEAP) to the CoM in 2010 [59], and renewed its commitment to the CoM in 2017 [60]. In the 2010 SEAP, Piteå set the following targets for the municipality and its energy sector to be achieved by 2020:
• Reduce GHG emissions of the entire municipality by 50% (base year 1998).
• Convert all fossil-fuel-fired boilers for heating and industrial processes.
• Reduce net energy demand by 20% for apartment and commercial buildings and by 10% for single-family homes (base year 2008).
• Supply electricity and heating demand with 100% renewable sources.
• Become a net exporter of renewable electricity.

The SEAP also details a number of measures to be implemented, and a progress report presents the results achieved by 2013 [61]. Piteå had 41,548 inhabitants in 2015; 145 MW of wind power, 40.9 MW of hydropower, and 256 kWp of PV were installed within the municipal boundaries. Industrial biomass-fueled combined heat and power (CHP) plants, with an aggregated power production capacity of 78 MW, supplied about 53% of these industries' own electricity demand.
The total final electricity consumption of Piteå municipality was 1453 GWh in 2015, of which 1117 GWh were supplied from local generation and the remainder was imported from the national grid. DH supplied 269 GWh of heat in urban areas, mainly utilizing industrial excess heat. Heat consumption in buildings not connected to DH was 238 GWh, of which 166 GWh were provided by electric heating, including heat pumps, and 70 GWh by biomass boilers. More key figures for Piteå are presented in Table 2 with references to the sources. * References to SCB [62] include the internal reference code to the specific data.

Case Study Decision Variables
The choice of the decision variables in this case study is strictly related to the peculiar features of the municipal energy system in Piteå. In the electricity sector, the decision variables are the installed capacities (in MW) of three local renewable electricity generation technologies: utility-scale solar PV systems (PV), onshore wind turbines (WindON), and offshore wind turbines (WindOFF). A possible expansion of hydropower capacity is not considered because of environmental legislation [70]. Biomass-fired CHP plants are not considered because the heating demand satisfied with DH is already covered with waste heat from industries, and the DH network already connects the most densely populated areas in the municipality. In this study the capacity of the transmission line is set to unlimited, and the resulting values for electricity import and export are discussed. In the heating sector, it is assumed that DH will continue to operate as it presently does. Accordingly, the decision variables are related to individual heating systems, and they are the amounts of heat (in GWh) supplied during a year by biomass boilers (BioB), heat pumps (HP), and electric heating (ElB).

Table 3 shows the levelized cost of electricity LCOE in EUR/MWh and the annualized capital cost Cann in EUR/MW for the considered renewable electricity generation technologies, and Cann in EUR/unit for the considered heating technologies. The EnergyPLAN cost database provided the technology cost parameters used to calculate these values, which will be important for later discussions. A 50/50% mix of air-air and ground-source heat pumps is considered for HP according to sales statistics [71]. LCOE takes into account the technology costs for 2020 and considers average capacity factors for the Nordic location. It is important to note that Cann for renewable electricity generation is lowest for PV and highest for WindOFF, while LCOE is lowest for WindON and highest for PV due to the different capacity factors.
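Neither Cann nor LCOE is derived explicitly in the text; they presumably follow the standard annualization formulas sketched below (our notation, given for orientation only):

\[
C_{ann} = I_0\,\frac{dr\,(1+dr)^n}{(1+dr)^n-1} + OM_{fix},
\qquad
LCOE = \frac{C_{ann} + OM_{var}\cdot 8760\,CF}{8760\,CF}
\]

where I0 is the capital cost per MW, n the expected lifetime in years, CF the capacity factor, and OMfix/OMvar the fixed and variable O&M costs. Written this way, the observation above is immediate: PV's modest per-MW Cann is divided by a small annual energy yield (8760 × 0.12 MWh per MW), which pushes its LCOE above that of WindON.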
Selected Scenarios Simulated with EnergyPLAN
The model of the Piteå municipal energy system was built in EnergyPLAN, and a number of scenarios were simulated for reference before coupling the model to the optimization algorithm (Figure 2) [5]. The "Base2015" scenario was simulated to analyze the situation in the year 2015. With technology costs for 2015, an average pel for the Nordic electricity system of 40 EUR/MWh, and a dr of 9%, Ctot resulted equal to 81.9 MEUR (Figure 2). The calculated value for Emsys was 37.4 ktCO2, neglecting the CO2 emissions from local renewable energy generation, which JRC recommends CoM signatory municipalities to consider CO2-free [56]. This is a consequence of a 374 GWh grid electricity import with a grid emission factor for the Nordic electricity mix of 0.1 tCO2eq/MWh, the same applied in the emission inventory for Piteå as presented in the Piteå SEAP [59].

Technology costs for 2020 (see Section 2 and Table 3) and an average pel for the Nordic electricity system of 40 EUR/MWh were used as standard parameters in all scenarios related to year 2020. The "Demand2020" scenario simulated a variation of the demand in the electricity and heating sectors with respect to the 2015 situation, owing to the achievement of the following 2020 targets mentioned in the Piteå SEAP: the conversion of fossil fuel heating into renewable heating, considering a mix of biomass and heat pump solutions, and the implementation of energy efficiency measures in the building sector. This resulted in a reduction of electricity import (358 GWh) and Emsys (35.8 ktCO2, 5.8% less than Base2015). The calculated Ctot (with dr = 9%) was 78.1 MEUR, 4.6% less than in Base2015; however, if 2020 technology costs are applied to the 2015 situation (Base2015-costs2020 scenario), the reduction in Ctot is only 2.4% (Figure 2).

The "Balanced2020" scenarios were inspired by the net-export 2020 target in the Piteå SEAP, because they simulate the condition in which there is a near-zero balance between electricity import and export over a one-year period. In each Balanced2020 scenario this condition was achieved by investing in additional capacities of a different mix of renewable electricity generation technologies (PV, WindON, and WindOFF), starting from the Demand2020 situation. EnergyPLAN is able to deliver fast results for integrated energy system studies, and the user can analyze different options in order to approach defined conditions for the modelled energy system. The options for the mix of technologies are chosen by the user on the basis of available domain knowledge. A condition such as a near-zero balance is relatively easy to achieve when only a few technologies are included in the set of options, thanks to the EnergyPLAN feature that allows the user to perform a series of simulations in one run with up to 11 different values for a parameter (in this case the capacity of one renewable electricity generation technology). Comparing the simulated Balanced2020 scenarios with Demand2020 (Figure 2), Ctot (dr = 9%) increases by between 2.2% and 12.9%, while CO2 emissions are reduced to a minimum of 20.6 ktCO2 (-42.5%) in the Balanced2020-WindON-OFF-PV scenario (with 145 MW WindON, 100 MW PV, and 60 MW WindOFF additional capacities). The Base2015 Pel,maximp (168.3 MW) is reduced in the Demand2020 scenario to 164.7 MW (-2.1%), and further reduced to a minimum of 157.8 MW in the Balanced2020-WindON-OFF scenario. The Balanced2020 scenarios provide possible alternatives to achieve the near-zero balance condition as created by an EnergyPLAN user, without an indication of whether the corresponding CO2 emission reductions are obtained with a minimum cost increase. In contrast, the multi-objective optimization approach provides better knowledge of the alternatives that represent the optimal trade-offs between the two conflicting objectives of the MOOP.

Setup of Parameters for the MOOP
The starting point of the optimization runs is the energy system model of Piteå municipality with the parameters of the Demand2020 scenario.
It is worth noting that in this scenario the only CO2 emissions are those due to electricity import, since no fossil fuel is used in either the electricity or the heating sector; therefore, Emsys is equal to Emimp. Table 4 lists the chosen ranges and the discretization steps for the decision variables of the MOOP defined in this study. The maximum PV capacity (100 MW) was determined by limiting land use to about 0.1% of the available land area of Piteå municipality. The discretization step of 1 MW reflects the utility scale of ground-mounted solar PV systems. The discretization steps for WindON and WindOFF were set according to the capacity of the single wind turbines typically installed in such wind developments in recent years. The minimum WindON capacity of 145 MW was the existing installed capacity in 2015, whereas the maximum was set to 505 MW in order to allow for the installation of one hundred additional wind turbines of 3.6 MW each [72]. The maximum WindOFF capacity was set to 330 MW considering the available area declared in the wind development plan of Piteå and in published project plans for this area [72,73]. Higher technical potentials nevertheless exist for all technologies. The decision variables for the heating sector (ElB, BioB, and HP) are continuous, and the sum of the amounts of heat supplied by the three technologies has to satisfy the annual heating demand of the individual heating sector (i.e., the demand from buildings not connected to DH, 211.7 GWh according to the Demand2020 scenario).
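The paper does not detail how such a mixed discretized/continuous decision vector is encoded for the MOEA; the following minimal Python sketch shows one plausible scheme consistent with the ranges quoted above. The WindOFF discretization step and the proportional rescaling used to enforce the heating-demand constraint are our illustrative assumptions.

import numpy as np

# (min MW, max MW, step MW); the PV and WindON steps follow the text,
# the WindOFF step (a typical offshore turbine size) is assumed.
ELEC_BOUNDS = {
    "PV":      (0.0, 100.0, 1.0),
    "WindON":  (145.0, 505.0, 3.6),
    "WindOFF": (0.0, 330.0, 6.0),
}
HEAT_DEMAND = 211.7  # GWh, individual heating demand in Demand2020

def snap(value, lo, hi, step):
    """Clip to [lo, hi] and round to the technology's unit capacity."""
    return lo + step * round((min(max(value, lo), hi) - lo) / step)

def repair(individual):
    """Map a raw real-valued individual onto a feasible solution."""
    x = dict(individual)
    for name, (lo, hi, step) in ELEC_BOUNDS.items():
        x[name] = snap(x[name], lo, hi, step)
    # Heating variables are continuous but must sum to the annual demand:
    heat = np.array([max(x[k], 0.0) for k in ("ElB", "HP", "BioB")])
    total = heat.sum()
    if total > 0:
        heat = heat / total * HEAT_DEMAND  # proportional rescaling (our choice)
    else:
        heat = np.array([0.0, HEAT_DEMAND, 0.0])  # degenerate case: all HP
    x["ElB"], x["HP"], x["BioB"] = heat
    return x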
Results and Discussion
Two different sets of optimization runs were performed, considering the decision variables of (a) the electricity sector only and (b) the integrated electricity and heating sectors. Within each set, the optimization runs were repeated with different values of the economic parameters discussed in Section 2 to implement the sensitivity analysis. Initial test runs confirmed sufficient diversity and convergence toward the Pareto front, given that the termination criterion on the consolidation ratio was satisfied, as described in Section 2. The electricity-sector-only runs were then configured with 100 individuals and 300 generations, whereas the integrated electricity and heating sectors runs were configured with 200 individuals and 500 generations. The figures in the following subsections present the Pareto fronts, plotting Ctot vs. CO2 emission reduction with respect to the Demand2020 scenario, and the Pareto sets of optimal solutions, plotting the optimal values of the decision variables vs. CO2 emission reduction with respect to the Demand2020 scenario. The effects of the optimal solutions on Pel,maximp are also discussed. Figure 3 presents the optimal results obtained for the case study considering the capacities of the three renewable electricity generation technologies (PV, WindON, and WindOFF) as the decision variables of the MOOP. These technologies differ widely in Cann, which is affected by dr, and in LCOE (for the latter the capacity factor is also an important parameter, see Table 3). The different values of the economic parameters considered in the sensitivity analysis have a significant impact on the cost objective function of the MOOP. In fact, these parameters affect the relative weights in Ctot of the investment costs and of the operational costs/revenues due to electricity import/export: for a constant pel, a higher dr will increase Cann and in turn Ctot, whereas a higher pel will dampen the growth of Ctot, as electricity export will generate larger revenues when the installed generation capacities increase. This plays a fundamental role in determining which combinations of technologies are the optimal trade-offs among the conflicting objectives of the MOOP.

Electricity Sector Only
WindOFF has the highest Cann of the three technologies and an LCOE between those of WindON and PV because of its highest capacity factor (40%); on the other hand, WindON, in spite of having a higher Cann than PV, has a lower LCOE thanks to its higher capacity factor (30% vs. 12%, see Table 3). In the 3 × 3 diagrams in Figure 3, Ctot is dominated by the investment costs when (a) pel = 20 EUR/MWh, which is below all LCOEs for all dr; (b) pel = 40 EUR/MWh with dr = 9% and 15%; and (c) pel = 60 EUR/MWh with dr = 15%. In these cases, WindOFF contributes to the optimal solutions only when both WindON and PV have already reached their considered maximum installed capacities, meaning that further CO2 emission reductions from that condition are only possible with investments in WindOFF, which are the least convenient for those values of the economic parameters. As operational costs/profits start to counterbalance investment costs, the slope with which Ctot increases as CO2 emissions are reduced decreases, up to the point at which the Pareto front no longer contains solutions with small CO2 emission reductions. With pel = 40 EUR/MWh and dr = 3%, all solutions feature WindON at the maximum capacity, starting from emission reductions of 58% at minimum Ctot with zero capacity of WindOFF and PV. This means that it is no longer economically convenient to look for solutions that achieve CO2 emission reductions below 58%, because of the large profits that can be achieved by exporting the electricity generated with WindON, which has to be exploited at the maximum considered capacity. From this condition, WindOFF provides better trade-offs than PV and increases its capacity up to the maximum, resulting in emission reductions of about 80%. Only when all available wind capacities are fully utilized does PV become part of the solution set, further reducing emissions up to 85%. With more significant operational profits due to pel = 60 EUR/MWh (at dr = 9%), the emission reduction at minimum Ctot is even slightly higher than 60% and, again, all optimal solutions feature WindON at its maximum capacity. As the optimal Ctot increases from its minimum value, both PV and WindOFF are economically convenient enough to be featured in the solution set. Finally, when the revenues from exported electricity totally prevail over the investment costs (pel = 60 EUR/MWh and dr = 3%), all LCOEs fall below pel and the two objectives of the MOOP are no longer conflicting. In this case only one optimal solution exists, featuring maximum installed capacities for all three technologies. Note that, as expected, the Pareto-optimal sets of solutions obtained for different values of pel and dr all end in this same solution (which also corresponds to the maximum CO2 emission reduction, 85%), since this is due to the upper limit chosen for the ranges of the decision variables.
It is important to observe that, in spite of its low capacity factor (12%) due to the Nordic location, PV is shown to be more economically convenient than WindOFF in six out of the nine considered combinations of the economic parameters. Figure 4 shows the reductions achieved in peak electricity import Pel,maximp with the optimal solutions obtained considering the three renewable electricity generation technologies (all diagrams correspond to those in the same position in Figure 3). In Piteå municipality, Pel,maximp can be reduced from 165 MW (Demand2020 scenario) to 129 MW (-21.8% or 36 MW) by installing the maximum capacities of all three technologies. A Svenska Kraftnät report on wind power integration estimates the minimum availability factor of wind power across Sweden at 6% [74]. If this 6% were applied to the 690 MW of WindON and WindOFF capacity added beyond the Demand2020 scenario, the wind turbines would provide 41.4 MW as a minimum, confirming the reliability of the simulation results from EnergyPLAN.

Integrated Electricity and Heating Sectors
This subsection presents the results of the optimization runs in which all six decision variables of the MOOP are considered (PV, WindON, WindOFF, ElB, HP, BioB). This means that the electricity and heating sectors are considered integrated and optimized together, enabling the analysis of potential interactions between these sectors. Figure 5 doubles the number of diagrams in the 3 × 3 grid already used in Figure 3, since the optimal values of the decision variables related to the electricity sector are shown separately from those related to the heating sector for the sake of clarity. From the diagrams in Figure 5 it is apparent that the optimal decision variable values for the renewable electricity generation technologies follow, in general, the same trends as in Figure 3, but with some significant differences related to the interaction with the heating sector. Looking at the optimal decision variable values associated with the heating technologies, the diagrams show that ElB disappears from the optimal solutions already in the early stages of CO2 reduction. No significant increase of the renewable electricity generation capacities above their lower bounds is registered within this range of emission reductions, after which a 100% heating supply with HP represents the best trade-off for the heating sector. Only then do the optimal solutions start to feature increasing renewable electricity generation capacities, widely following the same patterns as in the electricity-only cases, until the point at which BioB in the heating sector becomes economically more convenient for further reducing emissions. A higher share of BioB determines a drop of the electricity demand in the overall municipal energy system, which reduces electricity import and, in turn, mitigates the related emissions Emimp. Thus, the transition from HP to BioB causes a significant interaction with the decision variables related to the electricity sector. The substitution of HP with BioB is so beneficial, both in terms of costs and in reducing electricity demand, that the capacities of the renewable electricity generation technologies decrease during the transition while CO2 emissions are reduced. This transition from HP to BioB occurs at different levels of CO2 emission reductions depending on the values of the economic parameters.
For the same values of pel and pbio, the transition is shifted toward lower emissions by a lower dr, as the difference in investment costs between the two technologies becomes less significant. On the other hand, for the same values of dr, the transition is shifted toward lower emissions by a higher pel and/or a lower pbio (biomass fuel costs are another, relatively less significant, term in Ctot). Finally, the transition is more gradual for a higher dr. Comparing the results for the integrated electricity and heating sectors (Figure 5) with those for the electricity sector only (Figure 3), it can be observed that the same emission reductions can be achieved with lower installed capacities of the renewable electricity generation technologies. In fact, the transition in the heating sector from the current technology mix to HP and, for higher emission reductions, to BioB reduces the electricity demand and results in lower emissions because of less import from the grid.

The trend of Pel,maximp vs. CO2 emission reductions is shown in Figure 6. It can be interpreted as the composition of two effects: (i) as in the electricity-only case, Pel,maximp decreases because of the growth of the installed capacities of PV, WindON, and WindOFF; (ii) during the transitions from one heating technology to another, Pel,maximp follows the trend of the overall electricity demand, i.e., it is reduced when HP replaces ElB and then when BioB replaces HP. The minimum Pel,maximp is 103 MW, a 38% reduction from the 165 MW of the Demand2020 scenario, and it is of course achieved with a 100% BioB supply of the individual heating sector and with the maximum installed capacities of PV, WindON, and WindOFF. As a final note, the comparison in Figure 7 between the optimal Ctot obtained with and without the individual heating sector shows that Ctot is always lower, for the same level of CO2 emissions, when the heating technologies are included in the set of decision variables. At 80% emission reductions, the costs are 1%, 7%, and 16% lower for dr = 3%, 9%, and 15%, respectively. This non-trivial result underlines the importance of integrating and optimizing the electricity and heating sectors together.

Duration Curves for Electricity Import/Export
The load on the supplying transmission line(s) is an important aspect to monitor in a municipal energy system. In this case study the connection with the national grid is modelled as a single transmission line with unlimited capacity. Figure 8 plots the duration curves of electricity import/export for the municipal energy system of Piteå according to the EnergyPLAN scenarios Demand2020 and Balanced2020 and the optimized solutions at 60% and 80% emission reduction for the middle-range values pel = 40 EUR/MWh, pbio = 35 EUR/MWh, and dr = 9%. The leftmost point of the duration curve for the power import/export is Pel,maximp and the rightmost one is Pel,maxexp with a negative sign. Figure 9 presents the corresponding installed capacities of PV, WindON, and WindOFF. The EnergyPLAN Demand2020 scenario was the starting point for the optimization cases (see Section 3), featuring an installed WindON capacity of 145 MW; its Pel,maximp is 165 MW, electricity import exceeds 100 MW for about 400 h, and Pel,maxexp is 86 MW. The EnergyPLAN Balanced2020 scenario achieves 42.5% emission reductions compared to Demand2020 by installing a total local electricity generation capacity of 305 MW, resulting, as intended, in an annually balanced import/export of electric energy.
The decrease of Pel,maximp is relatively small in the optimized cases, between 8.5% and 16.4% compared to the Demand2020 scenario. This is due to low wind speeds during some cold winter days with high electricity demand, so that large installed capacities have only a minor impact on the import peak while having a major one on the export peak. Accordingly, only storage solutions can significantly lower Pel,maximp without causing a disproportionate increase of Pel,maxexp. In fact, a significant increase of Pel,maxexp can be observed for all cases, from 135% to 500% compared to the Demand2020 scenario, growing with the installed capacities. The real capacity of the transmission lines would eventually pose a limitation to the expansion of such large-scale local electricity generation, which is technically possible in many Nordic municipalities, as both the technical wind potential and the available land area are vast. The differences in Pel,maximp between the cases in which the individual heating sector is supplied by heat pumps (60% emission reduction) or by biomass boilers (80% emission reduction) are limited, as the two different electricity demands are paired with different amounts of installed capacities. It is worth noting, however, that the duration of high electricity import is significantly shortened compared to the Demand2020 scenario, and that the integration between the electricity and heating sectors has a more limited effect on the growth of Pel,maximp.
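Duration curves like those in Figure 8 are straightforward to reproduce from hourly simulation output; a small Python sketch of the computation (the input series and its units are placeholders):

import numpy as np

def duration_curve(hourly_net_import_mw):
    """Sort the hourly net import (import > 0, export < 0) in descending
    order; the first value is then Pel,maximp and the last is -Pel,maxexp."""
    return np.sort(np.asarray(hourly_net_import_mw))[::-1]

# Example with an 8760-hour net-import series from a simulation run:
# curve = duration_curve(net_import_mw)
# print("Pel,maximp =", curve[0], "MW; Pel,maxexp =", -curve[-1], "MW")
# The number of hours with curve > 100 MW gives the duration of high import.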
Near-Zero Electricity Import/Export Balance Condition and CO2 Emissions
Some solutions within the set of optimal trade-offs are close to the near-zero electricity import/export balance condition discussed for the scenarios simulated with EnergyPLAN for Piteå (Section 3). Table 5 offers a comparison between the EnergyPLAN simulation results for the Demand2020 and Balanced2020 scenarios and the optimal solutions closest to the near-zero balance condition from the results presented in Section 4 (with pel = 40 EUR/MWh, pbio = 35 EUR/MWh, and dr = 9%). The renewable electricity generation technology used in these optimal solutions is almost exclusively WindON, with at most a small PV capacity. The optimal results obtained for the integrated electricity and heating sectors at this near-zero balance condition provide a solution with the individual heating sector totally supplied by heat pumps. The comparison shows that the optimal Ctot values are 7.8% (electricity only) and 9.8% (integrated electricity and heating) lower than the Balanced2020 scenario Ctot. Both optimized solutions provide emission reductions of about 36%, with a cost increase of 4.6% (electricity only) or 2.5% (integrated electricity and heating) compared to the Demand2020 scenario. The optimal solutions also result in a lower Pel,maximp compared to the Demand2020 and Balanced2020 scenarios.

Conclusions
This paper exploited the benefits of combining EnergyPLAN, an energy system analysis and simulation tool, with a multi-objective evolutionary algorithm to investigate the optimal alternatives for the development of the integrated electricity and heating sectors of a municipal energy system in the Nordic context. Total annual costs and CO2 emissions were defined as the objective functions to be minimized, sensitivity analyses on key economic parameters were performed, and the impact on electricity import/export was also analyzed. The results for the case study of Piteå (Norrbotten, Sweden), chosen as a representative municipality in the Nordic context, showed that CO2 emission reductions of at least 60% can be achieved without a considerable increase of total annual costs. Cost increases are always lower when the electricity and heating sectors are integrated, since lower installed renewable electricity capacities are required to achieve given emission reductions. CO2 emission reductions higher than 60% can be achieved either by increased local renewable electricity generation or by replacing HP with BioB. After the introduction of biomass boilers, only larger renewable electricity generation capacities (usually WindOFF at this point, since WindON and solar PV, in spite of the latter's low 12% capacity factor in the Nordic region, may have already reached their maximum installed capacities depending on the economic conditions) can further reduce emissions, with steeper cost increases. At 60% emission reductions, peak electricity import is reduced by a maximum of 8%, while peak electricity export increases by about 400%, posing a challenge for the available transmission lines. At the highest emission reductions, peak electricity import Pel,maximp can be reduced by up to 38%.

The results obtained for Piteå municipality are of course case dependent. However, the conclusions can be extended to the entire Nordic region, as the majority of the population is concentrated in municipalities similar to Piteå. By utilizing the available potentials for WindON and solar PV, as well as by implementing a heat pump strategy for the heating demand not served by district heating, significant reductions of carbon emissions can be achieved without noteworthy increases of total annual energy system costs. For coastal areas, in economic conditions with high electricity prices and low discount rates, offshore wind capacities can feasibly contribute to emission reductions. A biomass heating strategy is indicated if reductions of peak electricity import are targeted, although the increase of peak electricity export related to the large expansion of local electricity generation also suggests the use of storage solutions. The methodology of interfacing EnergyPLAN with a MOEA, together with an extensive sensitivity analysis on uncertain economic parameters, creates a wide set of optimal combinations of renewable electricity generation and heating options minimizing total annual costs and CO2 emissions. With this set of combinations, local decision-makers can be made aware of the whole spectrum of optimal trade-off solutions and take more informed decisions to promote a pathway toward a sustainable energy system for their municipality. Future studies should investigate the integration of the electricity and heating sectors with different electricity storage options, including stationary and mobile storage, with peak electricity import/export and total annual system costs to be minimized. In the model of the heating sector, it would be important to consider district heating technology options such as large-scale heat pumps, solar thermal energy, and thermal storage. Finally, analyzing the impacts of policy developments, including CO2 targets, CO2 prices, and subsidies for technologies, and evaluating the potentials for local value and job creation are other areas for possible future work.

Author Contributions: R.F.: conceptualization; data curation; formal analysis; investigation; methodology; software; roles/writing-original draft.
E.E.: supervision; validation; writing-review and editing. A.T.: formal analysis; methodology; software; supervision; validation; writing-review and editing. All authors have read and agreed to the published version of the manuscript. Funding: This research was supported by the Interreg Nord funded project Arctic Energy-Low Carbon Self-Sufficient Community (ID: 20200589). Conflicts of Interest: The authors declare no conflict of interest.
Near-ideal van der Waals rectifiers based on all-two-dimensional Schottky junctions

The applications of any two-dimensional (2D) semiconductor devices cannot bypass the control of metal-semiconductor interfaces, which can be severely affected by complex Fermi pinning effects and defect states. Here, we report a near-ideal rectifier in all-2D Schottky junctions composed of the 2D metal 1T′-MoTe2 and semiconducting monolayer MoS2. We show that the van der Waals integration of the two 2D materials can efficiently address the severe Fermi pinning effect generated by conventional metals, leading to an increased Schottky barrier height. Furthermore, by healing original atom vacancies and reducing the intrinsic defect doping in MoS2, the Schottky barrier width can be effectively enlarged by 59%. The 1T′-MoTe2/healed-MoS2 rectifier exhibits a near-unity ideality factor of ~1.6, a rectifying ratio of >5 × 10^5, and a high external quantum efficiency exceeding 20%. Finally, we generalize the barrier optimization strategy to other Schottky junctions, defining an alternative solution to enhance the performance of 2D-material-based electronic devices.

Raman spectroscopy was employed to characterize the components and the interface coupling quality of the 1T′-MoTe2/MoS2 Schottky junctions in Supplementary Fig. 2b. Firstly, the frequency difference between the A1g and E¹2g modes in Supplementary Fig. 2b is ~17.5 cm-1, which also proves that the MoS2 film is a single layer 1,2. Secondly, compared to the isolated MoS2, the E¹2g and A1g peaks of the MoS2 in the overlapped region are softened and stiffened, respectively. This feature indicates that there is a strong interlayer coupling effect at the 1T′-MoTe2/1H-MoS2 interface; otherwise, these peaks would not shift 3. The in-plane E¹2g peak in the overlapped region exhibits a prominent redshift (~0.7 cm-1) relative to that of the isolated MoS2, which can be ascribed to the thermal lattice mismatch associated with the strong interlayer coupling (or vdW force) between the top and bottom layers 4. The blue-shift of ~0.5 cm-1 of the out-of-plane vibrational mode A1g is attributed to the reduced occupation of anti-bonding states in the conduction band of MoS2, caused by electron transfer from MoS2 to 1T′-MoTe2 5. The reduced occupation of anti-bonding states lowers the total electronic energy of the system, strengthening the Mo-S bonds and eventually stiffening the Raman mode.

To further characterize the electronic structure of the 1T′-MoTe2, KPFM was employed to characterize the surface potential in Supplementary Fig. 3c. The contact potential difference (CPD) between the AFM tip (Pt/Ir-coated) and the sample is defined as 1,6:

\[
V_{CPD} = \frac{\phi_{tip} - \phi_{sample}}{e}
\]

where φtip (5.2-5.6 eV) and φsample are the work functions of the tip and the sample, respectively, and e is the elementary charge. Based on the work function of 5.10 eV of the Au substrate, the work function of other materials can be calculated according to the following formula:

\[
\phi_{sample} = \phi_{Au} + e\,\bigl(V^{Au}_{CPD} - V^{sample}_{CPD}\bigr)
\]

The work function of the 1T′-MoTe2 film is extracted as approximately 4.83 eV. Besides, the surface potential of the metallic 1T′-MoTe2 is independent of the film thickness in Supplementary Fig. 3c, which is completely different from semiconducting TMDCs 7.
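As a purely numerical illustration of the calibration formula above (the CPD difference used here is hypothetical, chosen only to reproduce the quoted 4.83 eV):

\[
\phi_{\mathrm{MoTe_2}} = \phi_{\mathrm{Au}} + e\,\bigl(V^{\mathrm{Au}}_{\mathrm{CPD}} - V^{\mathrm{MoTe_2}}_{\mathrm{CPD}}\bigr) = 5.10\ \mathrm{eV} - 0.27\ \mathrm{eV} = 4.83\ \mathrm{eV}
\]

i.e., a tip-sample CPD measured over 1T′-MoTe2 that is 0.27 V higher than over the Au reference would yield the reported work function.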
The principle of the SVSH is that the SVs are healed spontaneously by the sulfur adatom clusters on the monolayer MoS2 surface through an acid-induced hydrogenation process in Supplementary Fig. 5a. To clarify the atomic structure variation, STEM was employed to determine the defect concentration of monolayer MoS2 before and after healing at the atomic scale. As the intensity of STEM images is directly related to the atomic number (Z-contrast), SVs (1S) and sulfur adatom clusters can be visualized and differentiated from the regular three-fold-coordinated two-sulfur (2S) sites (Supplementary Fig. 5b and 5c). Besides, to distinguish these lattice defects quickly and efficiently, the interference of Mo atoms was filtered out in the Mo-free filtered STEM images at the bottom of Supplementary Fig. 5b and 5c. Disordered regions whose contrast is significantly lower than that of the nearest six S-atom sites can be considered SVs, whereas disordered regions whose contrast is significantly higher are identified as sulfur clusters. This contrast fluctuation was also confirmed by the extracted Z-value mapping in Supplementary Fig. 5d. Compared with the as-prepared MoS2, both the SV and sulfur cluster concentrations of the healed MoS2 show a significant decrease in Supplementary Fig. 5b and 5c. Thus, the reduction of the SV concentration in healed MoS2 can be attributed to the acid-induced SVSH effect of the PEDOT:PSS solution 6. According to extensive data statistics, the S/Mo ratios in monolayer MoS2 before and after treatment are ~1.85 and ~1.92, respectively.

While the 1T′-MoTe2/MoS2 metal-semiconductor junction was being constructed, 1T′-MoTe2 and MoS2 field-effect transistors were also constructed simultaneously to remove competing interferences from other electrode contacts in Supplementary Fig. 7a. The linear current-voltage relationships in Supplementary Fig. 7b-7c also show that the Cr electrodes and MoS2 remain in Ohmic contact both before and after the SVSH. The removal of SVs brings the threshold voltage of the MoS2 FET close to zero in Supplementary Fig. 7d, illustrating that the electron concentration is significantly lowered. The following formula is used to quantitatively calculate the electron concentration N2D of monolayer MoS2 10:

\[
N_{2D} = \frac{C_i\,(V_G - V_{TH})}{q}
\]

where Ci = 1.15 × 10^-4 F m-2 is the gate capacitance of the 300 nm SiO2 dielectric layer, VG is the gate voltage, VTH is the threshold voltage, and q is the elementary charge.
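As a numeric check of this formula (the overdrive voltage is assumed for illustration; it is not quoted in the text):

\[
N_{2D} = \frac{1.15\times10^{-4}\ \mathrm{F\,m^{-2}}\times 60\ \mathrm{V}}{1.6\times10^{-19}\ \mathrm{C}} \approx 4.3\times10^{16}\ \mathrm{m^{-2}} = 4.3\times10^{12}\ \mathrm{cm^{-2}}
\]

A shift of VTH toward zero after the SVSH therefore translates directly into a proportionally lower N2D at a given gate voltage.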
With the gate voltage sweeping from positive to negative, the 1T′-MoTe2/as-prepared MoS2 Schottky junction transforms its behavior from Ohmic to rectifying in Supplementary Fig. 8a. However, the 1T′-MoTe2/healed MoS2 Schottky junction shows obvious rectifying behavior in the full gate voltage regime between -60 and 60 V in Supplementary Fig. 8b, after the SVs of the CVD-grown MoS2 are healed by the SVSH. The discrepancy between the reverse-biased currents of the 1T′-MoTe2/as-prepared MoS2 and 1T′-MoTe2/healed MoS2 Schottky junctions is significantly large in Supplementary Fig. 8c, left. Under reverse bias, enlarging the Schottky barrier width can transform the charge injection style from thermionic emission to thermionic field emission (also called thermally assisted tunneling) in Fig. 3g. In contrast, the discrepancy between the forward-biased currents is very small in Supplementary Fig. 8c, right. The gate-tunable variation trends of both the forward and reverse currents extracted from the output curves are similar to those of the transfer curves in Supplementary Fig. 8d, indicating that the rectifying behaviors are not measured accidentally but are reliable. More detailed explanations for the discrepancy can be obtained in Fig. 3f-g.

The specific experiment in Supplementary Fig. 10 shows that even if the PEDOT:PSS treatment heals the SVs of the covered MoS2 of the 1T′-MoTe2/MoS2 diode, the enhancement of the rectifying performance is not greatly changed: the rectifying ratio of ~39 of the as-prepared (A1-A2) diodes at VD = ±2 V and VG = 0 V is smaller than that of ~370 of the healed (B1-B2) diodes in Supplementary Fig. 10g. Furthermore, after the secondary PEDOT:PSS treatment, the rectifying ratio of the as-prepared (A1-A2) diode increased from ~39 to ~279, which is comparable to that of ~370 of the healed (B1-B2) diode (Supplementary Fig. 10h). The possible reason why this contact fluctuation of the covered MoS2 is independent of the rectifying performance is that, in such Schottky junctions with a large Schottky barrier height of ~0.5 eV, the electron concentration of the covered monolayer MoS2 is not affected by the PSS-induced SVSH effect but is mainly determined by the work function of the metal electrode 1T′-MoTe2 (Fig. 3f-g). In general, the 0.85 nm thickness of monolayer MoS2 is much less than the depletion region width (>2.9 nm) of the 1T′-MoTe2/MoS2 Schottky junctions 14.

Next, the PL spectrum was also characterized to reconfirm the decrease in the electron concentration of MoS2 in the overlapped depletion region. Compared to monolayer MoS2 supported on the SiO2 insulating substrate, the PL intensity of the overlapped region is substantially reduced and the peak position is blue-shifted by ~20 meV in Supplementary Fig. 14a. This blue-shift in the PL peak position is mainly due to the obvious difference in electron concentration between the MoS2 in the depletion region and the MoS2 on the insulating substrate. Whether supported by SiO2 or MoTe2, the PL spectra of monolayer MoS2 can be decomposed into B excitons, intrinsic excitons (X), and trions (X-) by peak fitting in Supplementary Fig. 14b. A trion in n-type monolayer MoS2 is mainly composed of two electrons and one hole, and its spectral weight is positively correlated with the electron concentration 10,11.
1,975.4
2021-03-09T00:00:00.000
[ "Physics" ]
Fast Nonstationary Noise Tracking Based on Log-Spectral Power MMSE Estimator and Temporal Recursive Averaging Estimation of the noise power spectral density (PSD) plays a critical role in most existing single-channel speech enhancement algorithms. In this paper, we present a novel noise PSD tracking algorithm, which employs a log-spectral power minimum mean square error (MMSE) estimator. This method updates the noise PSD estimate by performing a temporal recursive averaging of the log-spectral MMSE estimate of the current noise power to reduce the risk of speech leakage into the noise estimate. A smoothing parameter used in the recursive operation is adjusted by the speech presence probability (SPP). In this method, a spectral nonlinear weighting function is derived to estimate the noise spectral power, which depends on the a priori and the a posteriori signal-to-noise ratio (SNR). An extensive performance comparison has been carried out with several state-of-the-art noise tracking algorithms, i.e., Minimum Statistics (MS), the modified minima controlled recursive averaging algorithm (MCRA-2), the MMSE-based method, and the SPP-based method. Experimental results show that the proposed algorithm exhibits superior noise tracking capability under various nonstationary noise environments and SNR levels. When employed in a speech enhancement framework, improved speech enhancement performance in terms of segmental SNR (segSNR) improvements and three objective composite metrics is observed. I. INTRODUCTION Speech is one of the most important forms of human communication and plays an important role in many applications such as mobile communications, digital hearing aids, and human-computer interaction. However, in practical scenarios, clean speech signals will always, to some extent, be degraded by surrounding interference noise. In most situations, the interfering noise is nonstationary, which brings great challenges to speech signal processing applications. In human-computer interaction (e.g., automatic speech recognition), for instance, the degraded speech leads to a significant decrease in recognition accuracy. As a consequence, noise suppression technology [1]- [7] is of great importance, the aim of which is to suppress the disturbing noise component in noisy speech while preserving the original quality and intelligibility of the clean speech. Single-channel noise suppression approaches [8]- [10] based on the short-time Fourier transform (STFT, a sequence of Fourier transforms of a windowed signal) are often used to achieve this. Noise power spectral density (PSD) is defined as the noise power per unit bandwidth. Noise PSD estimation is a crucial component in designing single-channel speech enhancement algorithms [11]- [17]. An underestimation of the noise PSD leads to an unnecessary amount of residual noise in the enhanced signal, while an overestimation introduces speech distortions, which may result in a loss of speech intelligibility. A conventional approach is to exploit a voice activity detector (VAD) [18]- [21] to identify speech pause periods and update the noise PSD estimate during speech absence.
Although this is effective for highly stationary noise, it often fails in low-SNR scenarios (where SNR is the ratio of signal power to noise power), especially when the noise is nonstationary. In past decades, a significant amount of work has been done to solve this problem. In general, most state-of-the-art methods for noise PSD estimation can be divided into four main groups [22]: Minimum Statistics (MS) methods [23], [24], time-recursive averaging methods [25]- [27], subspace decomposition algorithms [28], [29], and other techniques based on the Bayesian estimation principle [30]- [32]. In the first group of algorithms, the noise PSD is tracked via Minimum Statistics (MS) algorithms [23], [24], which rely on two assumptions: that the noise and the speech are statistically independent, and that the power of the noisy speech signal frequently decays to the power level of the noise signal (e.g., in speech pauses). The noise PSD is estimated as the tracked minimum of the smoothed noisy spectrum within a finite time window. The expectation of the minima is smaller than the mean value of the spectral power, so a bias compensation factor is derived to correct the bias [24]. Since the MS method results in speech leakage into the noise PSD estimate when the time window is short, a sufficiently long time window is required to reduce the amount of speech leakage. Unfortunately, if the time window is chosen too long, fast noise level changes will be tracked with a rather large delay. Thus a trade-off is necessary; a typical window size is on the order of 1 s. As the minimum value in a window is used, the noise PSD will always be underestimated or tracked with a large delay in case of an increasing noise power level. In the second category of algorithms, the noise PSD estimate is updated by recursively averaging the previously estimated noise PSD and the current noisy speech power spectrum, in which the smoothing factors are controlled by the speech presence probability (SPP). The representative methods of this class include the minima controlled recursive averaging (MCRA) method [25] and its two modifications, i.e., improved MCRA (IMCRA) [26] and MCRA-2 [27]. The main distinction between MCRA, IMCRA, and MCRA-2 lies in the way the SPP is calculated. In MCRA, the SPP is determined by the ratio of the smoothed noisy speech power spectrum to its local minimum obtained by the minimum statistics technique [24], and for that reason this method is referred to as the minima controlled recursive averaging (MCRA) algorithm. The presence of speech is detected when the ratio is above a certain fixed threshold. MCRA-2 employs the continuous spectral minimum-tracking algorithm [33] to obtain the minimum and is not constrained within a search window. Moreover, unlike the fixed threshold in MCRA, frequency-dependent thresholds are used in MCRA-2 to calculate the SPP. In the IMCRA method, the SPP estimation is based on a Gaussian statistical model and obtained from the ratio of the likelihood functions of speech presence and speech absence. The derivation of the IMCRA method involves two iterations of smoothing and minimum tracking. The first iteration provides a simple speech-presence detector for each frequency bin, while the second iteration of smoothing excludes high-energy speech components, thus allowing for smaller windows in minima tracking.
However, since these approaches are built on the MS principle [24], they still show a considerable tracking delay in the case of an increasing noise power level. In the third family of methods, the decomposed noise-only subspace is used to update the noise PSD estimate. A well-known subspace decomposition based approach, called the subspace noise tracking (SNT) algorithm, was proposed in [28]. The SNT is based on eigenvalue decompositions of correlation matrices that are constructed using time series of noisy discrete Fourier transform (DFT) coefficients. An improvement of this method, called the minimum subspace noise tracking (MSNT) algorithm [29], exploits the limited-rank structure of the clean speech signal. MSNT combines the subspace structure and minimum statistics tracking to estimate the noise PSD. In comparison to, e.g., MS-based noise PSD trackers, the subspace decomposition based noise tracking algorithms allow for faster noise tracking for many nonstationary noises [34]. However, the improved noise tracking performance of the subspace based noise trackers is accompanied by a significant increase in computational complexity. In the fourth group of methods, the derivation of the noise spectral power estimators is based on the Bayesian estimation principle and an assumed statistical model. In [30], [31], a minimum mean square error (MMSE) estimator derived by minimizing the mean square error (MSE) of the spectral power is used to estimate the instantaneous noise power, and a first-order recursive smoothing technique is employed to update the noise PSD estimate. However, for noise power estimation, the simple bias compensation in [30] is motivated heuristically, whereas the bias compensation in [31] is derived rigorously based on the assumed signal model. The SPP-based approach [32] is a further modification of the MMSE-based approach [31]. In the SPP-based method, the noise PSD estimate is obtained as the sum of the previous noise PSD estimate weighted by the conditional probability of speech presence and the periodogram of the noisy speech weighted by the conditional probability of speech absence. These MMSE algorithms [31], [32] achieve fast noise spectral power tracking and are demonstrated to have a more robust noise estimation performance [34]. More recently, a model-based noise PSD estimation method was reported in [35] and [36], where different codebooks were trained for different noise and speech types. This model-based method [35] performs best for noise types for which the algorithm is trained. However, since the number of models increases with the product of the codebook sizes, this might lead to an intractable computational complexity. Although the spectral mean-square error (MSE) distortion metric is mathematically tractable and also leads to good results in [30]- [32], it does not appear to be perceptually meaningful. In fact, the human ear has a logarithmic response to changes in sound intensity (whether speech or noise) [37], and it is argued that a distortion metric based on the MSE of the log-spectrum is perceptually more relevant and more appropriate for speech processing [38]. Based on these facts, it was proposed in [3] and [13] to estimate the speech spectral amplitude by minimizing the log-spectral MSE. Recently, an algorithm was presented in [39] to track the speech and noise in the log-power spectral domain.
Motivated by these facts, the noise is naturally regarded as the ''target'' signal (rather than the speech signal as in [3], [13]), and we therefore exploit this distortion metric for noise estimation and develop a noise spectral power estimator that minimizes the MSE of the log-spectral power. Moreover, the speech estimators of [3], [13] focus on reconstructing the instantaneous speech spectral amplitude, while noise tracking algorithms are interested in estimating the noise PSD (the expectation of the instantaneous noise spectral power). In this algorithm, the noise PSD estimate is obtained by recursively averaging the log-spectral MMSE estimate of the current noise power. The smoothing parameter is adjusted by the speech presence probability determined by the smoothed a posteriori SNR. For the noise spectral power estimate, we derive a nonlinear spectral weighting function, which relies on the a priori and the a posteriori SNR. In this work, we consider the standard ''decision-directed'' (DD) estimator for the a priori SNR estimation. Experimental results show that for different nonstationary noises the proposed noise PSD tracker achieves a more accurate and rapid noise PSD estimate, and a better speech enhancement performance in terms of both the segmental SNR [22], [40] and three composite measures [41]. The remainder of this paper is organized as follows. Section II explains the notation used and the signal model employed to derive the noise spectral power estimator. In Section III, we propose to employ the log-spectral MMSE estimate of noise power to recursively update the noise PSD estimate, which reduces the probability of speech leakage. Section IV gives a detailed derivation of the proposed log-spectral MMSE noise power estimator. In Section V, we evaluate the performance of the proposed algorithm and make comparisons with four state-of-the-art methods, MS [24], MCRA-2 [27], the MMSE-based algorithm [31], and the SPP-based algorithm [32], in terms of tracking performance and overall performance in a noise suppression framework. Conclusions are finally presented in Section VI. II. SIGNAL MODEL AND NOTATION Let y(n) denote a noisy speech signal, which consists of a clean speech signal x(n) contaminated with an additive noise signal d(n), i.e., y(n) = x(n) + d(n), where n is the discrete time index. The noisy signal y(n) is segmented into overlapping frames, followed by windowing with a square-root-Hann window. Subsequently, each frame is transformed by applying the short-time Fourier transform (STFT). The noisy speech signal in the time-frequency domain is expressed as Y(l, k) = X(l, k) + D(l, k), where X(l, k) and D(l, k) represent the complex STFT coefficients of the clean speech and the additive noise term, respectively. Furthermore, l is the frame index and k is the frequency index. It is assumed that X(l, k) and D(l, k) are conditionally independent across time and frequency, and obey zero-mean complex Gaussian distributions with model parameters λx(l, k) = E{|X(l, k)|^2} and λd(l, k) = E{|D(l, k)|^2}, respectively, where E{·} denotes the statistical expectation operator. λx(l, k) and λd(l, k) denote the PSDs (or variances) of the speech and the noise signals, respectively. In the sequel, the indexes l and k will be omitted for simplicity whenever possible. The STFT coefficients can be represented in terms of their amplitude and phase, denoted as Y = R e^{jα}, X = A e^{jβ}, and D = N e^{jθ}. We will call N^2 the (instantaneous) noise spectral power. Further, we use the a priori SNR ξ and the a posteriori SNR γ, defined as ξ = λx/λd and γ = R^2/λd, respectively.
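As an illustration of this signal model, the following minimal Python sketch builds the square-root-Hann windowed STFT described above and evaluates the a posteriori SNR γ = R^2/λd. It is a plain restatement of the definitions, not the authors' code; the frame length and 50% overlap follow the settings reported later in the paper.

import numpy as np

def stft_frames(y, K=256, hop=128):
    """Square-root-Hann windowed STFT; K-sample frames, 50% overlap."""
    win = np.sqrt(np.hanning(K))
    n_frames = 1 + (len(y) - K) // hop
    frames = np.stack([y[l * hop:l * hop + K] * win for l in range(n_frames)])
    return np.fft.rfft(frames, n=K, axis=1)   # shape (L, K/2 + 1)

def a_posteriori_snr(Y, noise_psd):
    """gamma(l, k) = |Y(l, k)|^2 / lambda_d(l, k)."""
    return (np.abs(Y) ** 2) / np.maximum(noise_psd, 1e-12)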
A hat symbol is used to denote estimated quantities, e.g., N̂^2 is an estimator of the noise spectral power N^2. III. TEMPORAL RECURSIVE SMOOTHING OF NOISE LOG-SPECTRAL POWER MMSE ESTIMATION The temporal-recursive averaging algorithms MCRA [25] and IMCRA [26] obtain the noise PSD estimate by recursively smoothing the noisy speech spectral power R^2 [32], i.e., λ̂d(l, k) = αN(l, k) λ̂d(l - 1, k) + (1 - αN(l, k)) R^2(l, k). As the minimum values in a long time window are used to avoid speech leakage into the noise PSD estimate, these methods show a slow response to fast increases in noise level [42]. In this paper, the noise PSD is estimated by recursively averaging an (instantaneous) noise spectral power estimator N̂^2 instead of the noisy spectral power R^2: λ̂d(l, k) = αN(l, k) λ̂d(l - 1, k) + (1 - αN(l, k)) N̂^2(l, k). (4) Compared to the recursive averaging technique with a fixed smoothing factor, the SPP-based recursive averaging technique is a more general and widely used method. Similar to MCRA and IMCRA, the time-varying smoothing parameter αN(l, k) is adjusted by an estimate p̂(l, k) of the SPP: αN(l, k) = αn + (1 - αn) p̂(l, k), (5) where αn (0 < αn < 1) is a smoothing parameter which usually has a value range of [0.8, 0.95] as suggested in [22] and is empirically set to 0.8 in this work. Utilizing the noise spectral power estimate N̂^2 instead of the noisy spectral power R^2 has the benefit of reducing the amount of speech component leaking into the noise PSD estimate. Therefore, an extremely accurate SPP estimator is not necessary. For N̂^2, the log-spectral MMSE estimator of the noise power N^2 is exploited. Different from IMCRA, this work uses a simpler estimation method for p̂(l, k) that allows for faster tracking. A. SPEECH PRESENCE PROBABILITY ESTIMATION Since the noise PSD estimate is updated with the noise spectral power estimate N̂^2, the risk of speech leakage is reduced. Accordingly, there is no need to design an extremely accurate SPP estimator. In this work, we employ a very simple SPP estimator, which depends on the smoothed a posteriori SNR. Considering the correlation of speech presence in neighboring time-frequency points [43], we calculate the smoothed a posteriori SNR over a time-frequency region: γ̄(l, k) = (1/M) Σ_{i=0}^{Δl} Σ_{j=-Δk}^{Δk} γ(l - i, k + j), (6) where M = (2Δk + 1) · (Δl + 1) is the number of neighboring time-frequency points that are averaged. Δk and Δl denote the number of adjacent frequency bins and successive time frames, respectively, set to 1 and 2. Then, the smoothed a posteriori SNR is compared against a threshold to decide speech-present regions as follows: I(l, k) = 1 if γ̄(l, k) > δ(k), and I(l, k) = 0 otherwise, (7) where δ(k) is the threshold, which controls the trade-off between the update speed of the noise PSD estimate and the amount of speech leakage: the higher the value, the faster the tracking speed, but the higher the risk of speech leakage. The speech presence indicator I(l, k) is smoothed over time using the following first-order recursion: p̂(l, k) = αp p̂(l - 1, k) + (1 - αp) I(l, k), (8) where αp (0 < αp < 1) is a smoothing parameter, set to 0.2 in our experiment as adopted in [25]. The smoothing parameter αN is obtained by substituting (8) into (5). Here, using the averaged a posteriori SNR reduces random fluctuations in p̂(l, k), while at the same time a fast reaction to changing noise levels is achieved (minimum tracking is abandoned). Additionally, similar to MCRA-2, we exploit frequency-dependent thresholds δ(k) instead of the fixed threshold of the MCRA method, with K the window length as well as the STFT length.
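A compact sketch of one frame of this update rule (Eqs. (4), (5), (7), (8)) follows, assuming the MCRA-style form of the time-varying smoothing parameter reconstructed above. The frequency-dependent threshold values, which the text does not reproduce, enter as an input array, and the time-frequency averaging of γ (Eq. (6)) is assumed to be precomputed.

import numpy as np

def update_noise_psd(noise_psd_prev, n2_est, gamma_smooth, p_prev,
                     thresh, alpha_n=0.8, alpha_p=0.2):
    """One frame of the SPP-driven recursive noise PSD update.

    All inputs are arrays over the frequency index k; `thresh` holds the
    frequency-dependent thresholds (placeholders here).
    """
    I = (gamma_smooth > thresh).astype(float)        # speech-presence decision, Eq. (7)
    p = alpha_p * p_prev + (1.0 - alpha_p) * I       # smoothed SPP, Eq. (8)
    alpha_N = alpha_n + (1.0 - alpha_n) * p          # time-varying factor, Eq. (5)
    noise_psd = alpha_N * noise_psd_prev + (1.0 - alpha_N) * n2_est  # Eq. (4)
    return noise_psd, p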
IV. LOG-SPECTRAL POWER MMSE ESTIMATOR A. DERIVATION OF THE WEIGHTING FUNCTION To estimate the noise PSD, in this section we derive an estimator of the noise spectral power N̂^2 that minimizes the MSE of the log-spectral power, i.e., N̂^2 = exp(E{log N^2 | Y}). (9) In [3], the MMSE estimator of the speech spectral magnitude in the logarithmic domain was derived by exploiting the moment generating function. Similarly to [3], the moment generating function of log N^2 given Y is exploited here to derive the noise spectral power estimator according to (9). Let P = log N^2; then the moment generating function of P given Y takes the form M_P|Y(µ) = E{e^{µP} | Y} = E{N^{2µ} | Y}. (10) By exploiting the first derivative of M_P|Y(µ) at µ = 0, the estimator in (9) is obtained as N̂^2 = exp( dM_P|Y(µ)/dµ |_{µ=0} ). (11) Therefore, we need to evaluate the moment generating function M_P|Y(µ) and then obtain the estimator N̂^2 using (11). By applying Bayes' theorem, M_P|Y(µ) can be expressed as a ratio of integrals over the noise amplitude n and phase θ (Eq. (12)). Under the assumed complex Gaussian distributions, f(Y | n, θ) and f(n, θ) are given by (13) and (14). Substituting (13) and (14) into (12), with λ = λx λd/(λx + λd), yields (15), where Γ(·) is the gamma function, Φ(·) is the confluent hypergeometric function [44, Eq. 9.210.1], and η satisfies the relation (16). The first derivative of M_P|Y(µ) at µ = 0 in (11) is then given by (17). According to the basic derivative rules, part 1 of (17) is given by (18). For the derivation of part 2, we can obtain the derivative of Γ(µ + 1) through the derivative of log Γ(µ + 1); exploiting the series expansion of log Γ(µ + 1) [44, Eq. 8.342.1], we obtain its derivative (19), where c is Euler's constant. For the computation of part 3, utilizing [44, Eq. 9.210.1] and the derivative rules, it can be written as (20). Now, summing the results of (18), (19), and (20), and then utilizing (11) and (16), we arrive at (21): the noise spectral power estimate is obtained from the noisy speech through a multiplicative nonlinear weighting function that depends only on the a priori and the a posteriori SNR. This weighting function is defined in (22). After estimating the noise spectral power N̂^2 with (21), the noise PSD estimate is updated via (4) and (5), which together constitute (23). B. A PRIORI SNR ESTIMATION FOR NOISE PSD ESTIMATION It is observed from (22) that the weighting function takes the a priori and the a posteriori SNR as parameters. As these parameters are unknown in practice, they must be estimated, and the noise PSD tracking performance depends on the particular a priori SNR estimator used. For the a priori SNR estimate, the DD approach and the ML approach were proposed in [2]. The DD approach is based on heuristic knowledge and is widely accepted in the literature. In this work, the standard DD a priori SNR estimator is exploited to estimate the a priori SNR used for the noise PSD estimate: ξ̂(l, k) = max{ αNS Â^2(l - 1, k)/λ̂d(l - 1, k) + (1 - αNS) max(γ(l, k) - 1, 0), ξmin }, (24) where ξmin = -15 dB is the minimum value allowed for the a priori SNR ξ, αNS is the smoothing factor, λ̂d is the estimated noise PSD, and Â^2(l - 1, k) is the speech spectral power estimate obtained in the previous frame. The smoothing factor αNS typically lies in the range [0.9, 0.99] [45] and is set to 0.98 in this work. C. SAFETY NET Moreover, as in the MMSE-based algorithm [31], in order to ensure that the noise PSD estimator continues to work properly in the extreme situation where the noise power level abruptly changes from one level to another, the effective and simple safety net presented in [42] is adopted. In the safety net, some memory is required to store the previous 0.8 seconds of the smoothed periodogram S(l, k) of the noisy speech |Y(l, k)|^2, where S(l, k) is given by S(l, k) = 0.1 S(l - 1, k) + 0.9 |Y(l, k)|^2.
The minimum S_min(l, k) of S(l, k) is used as a reference value. The noise PSD estimate λ̂d(l, k) obtained with (23) is then checked against the condition λ̂d(l, k)/S_min(l, k) < 1.5; if the condition is fulfilled, the final noise PSD estimate is updated by λ̂d(l, k) = max{1.5 S_min(l, k), N̂^2(l, k)}. V. EXPERIMENTAL RESULTS AND DISCUSSION In this section, several comparisons and experiments are carried out to evaluate the performance of the noise PSD trackers and demonstrate the superiority of the proposed algorithm over the four other state-of-the-art methods. Performance evaluations are conducted on the NOIZEUS database, which contains 30 IEEE sentences produced by three female and three male speakers [22], [46]. Clean speech signals are corrupted by five distinct types of noise sources at five input SNR levels, namely -5, 0, 5, 10, and 15 dB. The noise sources are modulated white Gaussian noise, babble noise from the NOISEX-92 database [47], passing-car noise, passing-train noise, and traffic noise. The modulated white Gaussian noise is obtained by modulating white Gaussian noise with a sinusoidal function of the discrete-time index n, the sampling frequency fs, and the modulation frequency fmod = 0.2 Hz. The passing-car, passing-train, and traffic noises are taken from the Freesound database [48]. The speech and noise signals used in our experiments are sampled at a frequency of fs = 8 kHz. All noise PSD trackers employ an overlapping square-root-Hann window for spectral analysis and synthesis. The window length as well as the DFT length is K = 256 samples (32 ms), and the overlap between successive frames is 50%. In Section V-A, we first compare the noise estimation accuracy of all noise trackers in five different noise environments. Subsequently, in Section V-B the noise PSD estimators are integrated into a noise suppression framework and the speech enhancement performance is compared. Finally, the computational complexity is analyzed in Section V-C. A. NOISE ESTIMATION ACCURACY The noise estimation accuracy is measured using the averaged logarithmic spectral error distance between the estimated noise PSD λ̂d(l, k) and the ideal reference noise PSD λd(l, k). The ideal reference noise PSD λd(l, k) is calculated by employing a recursive temporal smoothing of the noise periodograms [28], [34], i.e., λd(l, k) = αd λd(l - 1, k) + (1 - αd) |D(l, k)|^2, (26) with a smoothing parameter αd = 0.9 [28], [34]. The averaged logarithmic spectral error distance (LogErr) is defined as follows [28], [34]: LogErr = (1/(L K)) Σ_{l=0}^{L-1} Σ_{k=0}^{K-1} | 10 log10( λd(l, k) / λ̂d(l, k) ) |, (27) where L and K indicate the number of signal frames and frequency bins, respectively. The lower the LogErr value, the better the tracking capability.
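The reference noise PSD of Eq. (26) and the LogErr measure of Eq. (27) are straightforward to compute; the sketch below restates them in Python under the reconstruction given above.

import numpy as np

def reference_noise_psd(noise_periodogram, alpha_d=0.9):
    """Recursively smoothed reference noise PSD (Eq. (26)).
    noise_periodogram: (L, K) array of |D(l, k)|^2 values."""
    psd = np.empty_like(noise_periodogram)
    psd[0] = noise_periodogram[0]
    for l in range(1, len(noise_periodogram)):
        psd[l] = alpha_d * psd[l - 1] + (1.0 - alpha_d) * noise_periodogram[l]
    return psd

def log_err(psd_ref, psd_est):
    """Averaged log-spectral error distance (Eq. (27)), in dB."""
    ratio = np.maximum(psd_ref, 1e-12) / np.maximum(psd_est, 1e-12)
    return float(np.mean(np.abs(10.0 * np.log10(ratio))))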
To illustrate the noise tracking performance of the proposed method in comparison to the four competing trackers, we consider an example where three speech signals obtained from one female and two male speakers are concatenated and degraded by modulated white Gaussian noise at an overall SNR of 0 dB. In Fig. 1, the estimated noise PSDs are shown for the proposed method and the four competing noise estimators together with the ideal reference noise PSD. The clean and noisy speech signals are shown in Fig. 1(a) and Fig. 1(b), respectively. Fig. 1(c) exhibits the results of the noise PSD estimation at frequency bin k = 36; this frequency bin corresponds to the DFT band centered around 1125 Hz. Fig. 1(d) displays the estimated noise PSDs averaged over all frequency bins. It is observed that the proposed noise estimator tracks the increases and decreases in noise level much better than the other four approaches. As expected, MS is not capable of tracking the changes when the noise PSD increases. MCRA-2 is based on the minimum-tracking principle and therefore also shows a relatively large delay in tracking the increasing noise PSD. Compared to MS and MCRA-2, the MMSE and SPP algorithms perform better, but still have a tracking delay when the noise PSD rises. In Fig. 2, we show a second example where the same speech signal is corrupted by noise originating from a passing train at an overall SNR of 0 dB. It is observed again that the proposed method handles both fast increases and decreases in noise level better than the four reference approaches. For a rapidly increasing noise PSD, i.e., in the time interval from 0-4 seconds, the proposed algorithm has the shortest tracking delay. When the noise is decreasing, e.g., in the time span from 5 to 8 seconds, the proposed method, MS, MMSE, and SPP exhibit similar performance. Compared to MS, the MCRA-2 algorithm is slightly better at tracking the increasing noise level, but it has a tendency to overestimate the noise PSD when the noise is decreasing. Fig. 3 shows another example where four different speech signals spoken by two male and two female speakers are concatenated and corrupted by modulated white noise at an SNR of 0 dB. It is evident from Fig. 3 that the proposed noise tracker shows better tracking performance than the other competing methods. The quantitative evaluation results of the noise tracking performance of all noise PSD estimators are given in Fig. 4 in terms of the LogErr measure. It can be observed from the results in Fig. 4 that the proposed algorithm clearly outperforms the four other competing methods in terms of LogErr for almost all noise sources and SNR levels, except for babble noise at 10 and 15 dB input SNR, where MMSE performs slightly better. As the proposed method can quickly update the noise PSD estimate, its superiority in tracking performance is obvious especially at low SNR conditions. However, as the SNR increases, the quick updates of the proposed tracker may lead to an overestimation of the noise, and the LogErr increases. B. NOISE SUPPRESSION PERFORMANCE In order to investigate the impact of the noise PSD trackers on noise suppression performance, the estimated noise PSDs are incorporated into a DFT domain-based single-channel speech enhancement system. The block diagram of the standard DFT-based single-channel speech enhancement framework is depicted in Fig. 5. For the speech estimator, this work employs an MMSE amplitude estimator, which is derived under the assumption that the speech DFT coefficients follow a generalized-Gamma distribution with distribution parameters γ = 1 and ν = 0.6 [4]. In this speech enhancement system, we estimate the a priori SNR using the decision-directed approach with a smoothing parameter αdd = 0.98. The speech enhancement performance is evaluated in terms of the segSNR metric and three composite objective metrics. The segmental SNR (segSNR) is defined as follows [22], [40]: segSNR = (1/L) Σ_{l=0}^{L-1} T( 10 log10( Σ_{n=lN}^{lN+N-1} x^2(n) / Σ_{n=lN}^{lN+N-1} (x(n) - x̂(n))^2 ) ), (28) where L and N denote the number of frames in the signal and the frame length, respectively, and T(x) = min{max(x, -10), 35}. For the segSNR computation, only the signal segments containing speech are taken into account. The segSNR values are limited to the range [-10 dB, 35 dB], thereby avoiding the need for a speech/silence detector.
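Under the reconstruction of Eq. (28) given above, the segSNR computation can be sketched as follows; the selection of speech-active segments is omitted for brevity and would be applied before averaging.

import numpy as np

def seg_snr(x, x_hat, frame_len=256, limits=(-10.0, 35.0)):
    """Segmental SNR (Eq. (28)); per-frame SNRs are clipped to [-10, 35] dB."""
    n_frames = len(x) // frame_len
    vals = []
    for l in range(n_frames):
        seg = slice(l * frame_len, (l + 1) * frame_len)
        num = np.sum(x[seg] ** 2)
        den = np.sum((x[seg] - x_hat[seg]) ** 2) + 1e-12
        vals.append(np.clip(10.0 * np.log10(num / den + 1e-12), *limits))
    return float(np.mean(vals))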
The segSNR measure results obtained with the different noise PSD tracking algorithms are given in Table 1. It is found that the proposed noise PSD tracker yields larger segSNR improvements than all other algorithms for almost every noise source except babble noise; for babble noise, MMSE obtains slightly higher segSNR values for input SNRs above 5 dB. However, the segSNR measure, which is widely used to evaluate the noise reduction performance of speech enhancement algorithms, correlates poorly with subjective measures. For this reason, three composite objective metrics are employed to evaluate the enhancement performance. The three composite objective metrics are Csig, Cbak, and Covl, which are obtained by linearly combining existing widely used measures: segSNR, the weighted-slope spectral (WSS) distance [49], the perceptual evaluation of speech quality (PESQ) [50], the log-likelihood ratio (LLR), and the Itakura-Saito (IS) distance measure [51]. The three composite metrics are given in [41] (Eq. (29)). Csig, Cbak, and Covl are designed to provide high correlations with three subjective measures, i.e., the mean opinion score (MOS) predictor of speech distortion (SIG), the MOS predictor of background intrusiveness (BAK), and the MOS predictor of overall speech quality (OVRL). The scores of the three composite objective metrics obtained with all noise PSD estimation methods are shown in Figs. 6-8. Since the three composite measures provide very high correlation coefficients with subjective measures (Covl in particular has the highest correlation with real subjective tests), the evaluation results of the composite measures are more important than the segSNR measure. From the scores in Figs. 6-8, we observe that the proposed noise estimator is clearly superior to the other noise tracking methods for all noise types and SNR conditions, except for babble noise at 15 dB. Figs. 9 and 10 present the enhanced waveforms and spectrograms obtained with the different noise estimators for a speech example degraded by traffic noise at 5 dB input SNR. In this way, the enhancement performance of the speech enhancement algorithm combined with the different noise estimators can be seen more directly. Fig. 9(a)-(b) and Fig. 10(a)-(b) show the waveforms and spectrograms of the clean speech and the noisy speech, respectively. Fig. 9(c)-(f) display the enhanced speech waveforms obtained using the four competing noise trackers, and the respective spectrograms are shown in Fig. 10(c)-(f). Fig. 9(h) and Fig. 10(g) show the enhanced waveform and spectrogram obtained with the proposed algorithm, respectively. Additionally, Fig. 9(g) shows the estimated noise PSDs together with the ideal reference noise PSD. Clearly, the proposed method performs better than the other four competing algorithms. In general, the proposed approach shows a good trade-off between noise suppression and speech distortion, as it obtains both a higher segSNR and higher composite measures. C. COMPUTATIONAL COMPLEXITY ANALYSIS To investigate the computational complexity of the proposed algorithm and the four competing algorithms, we compare the execution times of Matlab implementations of these algorithms [32]. The Matlab implementations run on a PC with an Intel Core i7-7700 processor. Table 2 shows the execution times of all five methods, normalized by the execution time of the proposed method.
It is observed that the proposed algorithm exhibits a higher computational complexity than the other methods. The computational complexity of the proposed method is mainly determined by the computation of the nonlinear weighting function (22), as the exponential of the special exponential-integral function needs to be computed. However, in a practical system, the nonlinear weighting function can be computed offline for the relevant range of the parameters and stored in a lookup table. In this way, the noise PSD tracker can be implemented with significantly reduced execution time (normalized execution time: 0.52), and the computational complexity is no longer an issue. In addition, since more computational power becomes available as technology improves, this overhead will become less significant over time. Notice that the numbers given in Table 2 are rough estimates, since they will vary somewhat with the implementation details; the numbers reflect all processing steps of the proposed algorithm. VI. CONCLUSION A crucial component of single-channel speech enhancement algorithms is the estimation of the noise PSD. This paper develops a novel algorithm for noise PSD estimation. In this method, a nonlinear weighting function of the log-spectral power MMSE estimator is derived to estimate the instantaneous noise spectral power, which depends on the a priori and the a posteriori SNR. Then, the noise PSD estimate is updated by performing a temporal recursive averaging of the log-spectral MMSE estimate of the current noise power. The smoothing parameter in the temporal recursive smoothing operation is adjusted by a simple estimate of the speech presence probability. Experimental results in terms of the LogErr measure demonstrate that the proposed algorithm achieves faster and more accurate noise PSD tracking. Additionally, evaluation results for segSNR and the three composite measures (Csig, Cbak, Covl) show that the enhancement performance of the proposed method is clearly superior to the other competing methods in the presence of various noise sources and levels. The overall performance improvements of the proposed noise tracker come with an increase in computational complexity, which is mainly determined by the computation of the nonlinear weighting function. However, in a practical system, the weighting function can be evaluated offline and stored in a lookup table, so the proposed method can be implemented with a significant decrease in computational complexity. As a result, the proposed method offers a good trade-off between computational complexity and overall performance. The techniques developed in this paper are of importance for many applications, such as hearing aids, speaker identification, and human-computer interaction. ACKNOWLEDGMENT The authors thank the Circuit and Systems (signal processing) Group at Delft University of Technology for providing the Matlab code of the MMSE speech estimator with generalized Gamma priors.
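The lookup-table idea mentioned above can be sketched as follows. Here `weighting_function` is a placeholder standing in for the derived gain of Eq. (22), whose closed form is not reproduced in this text, and the grid ranges are illustrative.

import numpy as np
from scipy.interpolate import RegularGridInterpolator

xi_grid = np.logspace(-3, 3, 200)      # a priori SNR grid (linear scale values)
gamma_grid = np.logspace(-3, 3, 200)   # a posteriori SNR grid

def weighting_function(xi, gamma):
    # Placeholder standing in for the gain of Eq. (22); the true function
    # involves an exponential-integral term and is evaluated offline.
    return xi / (1.0 + xi) * np.ones_like(gamma)

# Precompute once, offline.
table = weighting_function(xi_grid[:, None], gamma_grid[None, :])
gain_lut = RegularGridInterpolator((xi_grid, gamma_grid), table,
                                   bounds_error=False, fill_value=None)

# At run time, a single interpolation replaces the costly evaluation:
gain = gain_lut(np.array([[1.0, 2.0]]))  # gain at (xi_hat, gamma) = (1, 2)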
7,628.6
2019-01-01T00:00:00.000
[ "Computer Science", "Engineering" ]
Detuned Plasmonic Bragg Grating Sensor Based on a Defect Metal-Insulator-Metal Waveguide A nanoscale Bragg grating reflector based on a defect metal-insulator-metal (MIM) waveguide is developed and numerically simulated using the finite element method (FEM). The MIM-based structure promises a highly tunable broad stop-band in the transmission spectra. A narrow transmission window is shown to appear in the former stop-band when certain geometrical parameters are changed, and the central wavelengths can be controlled easily by altering these geometrical parameters. The development of surface plasmon polariton (SPP) technology in metallic waveguide structures leads to more possibilities for controlling light at deep sub-wavelength scales; its attractive ability to break the diffraction limit contributes to the design of optical sensors. Introduction Surface plasmon polaritons (SPPs) are mixed electromagnetic waves confined to the metal surface, which result from electromagnetic waves coupling to free electron oscillations. There are two specific kinds of plasmonic structures: insulator-metal-insulator (IMI) and metal-insulator-metal (MIM). The MIM waveguides are promising for nanoscale applications owing to their attractive feature of light confinement beyond the diffraction limit; although the MIM geometry has higher transmission loss, this loss can be neglected in nanoscale devices [1,2]. In the past few years, several novel photonic devices based on SPPs, such as triangular waveguides [3], absorption switches [4], reflectors [5][6][7], and absorbers [8], have been investigated. In particular, MIM Bragg reflectors, which have been theoretically proposed and experimentally demonstrated, have a wide range of applications in optical communication, such as optical filters [9], single-cavity and multi-cavity filters [10], and tunable channel drop filters [11]. Due to their unique subwavelength confinement, optical sensors are another important application; their response can be controlled by the width [12,13], the effective refractive index [14], force, and so on. Devices based on optical waveguides [15] and directional couplers [16] were demonstrated a few years ago. In order to measure the change in refractive index within a single sensing spot, several authors have proposed a nanoplasmonic ring-hole interferometric sensor [17]. In addition, an ultra-compact loop-stub biosensor and a structure of metal nanoslit arrays have been proposed [18,19]. To improve the sensing performance more directly, Zhengqi Liu presented a sensor based on a suspended plasmonic crystal [20], and a nanostructured X-shaped plasmonic sensor has been designed for phase-interrogation investigation [21]. A mechanical sensor has been presented to achieve the trapping and releasing of light and high-speed rainbow trapping and releasing [22,23]. In this paper, we present and numerically analyze a Bragg grating based on a MIM structure, mainly studying the dependence of its transmission spectra on the geometrical parameters of the grating. An easily tuned stop-band appears in the spectra, and, after introducing a defect in the center of the grating, an open window is obtained in the stop-band transmission spectra. We investigate the sensing characteristics of the graded MIM plasmonic Bragg grating, and the tunability of the proposed structure shows a promising future for various applications.
Structures and Theoretical Analysis The designed sensor based on the MIM structure is shown schematically in Figure 1, where the gap filled with air is inserted to separate the silver. We regard the structure as two integrated gratings of the same material. The main parameters of the structure are the depth and width of the grating, the two widths of the waveguide, and the distance between the two segments, denoted by h1 (h2), w1 (w2), h, H, and w11 (w21), respectively. Chen has already shown that as the effective n = (w1 + w11)/w1 rises, the dispersion curve approaches that of the structured metal wires [24]. In this paper, we assume w1 = w11 and w2 = w21, and H stays the same in all conditions that we discuss. Light can be coupled into the Bragg grating sensor by a nanofiber, and the output light can be detected by microscopy [25]. We use COMSOL (COMSOL Inc., Stockholm, Sweden) to carry out our simulations. The input and output are set as port 1 and port 2, respectively, and transverse magnetic (TM) modes are incident from port 1. The calculated area is divided by Yee's mesh with a size of 2 nm. The FEM with scattering boundary conditions is employed to investigate the transmission characteristics of the structure. The dispersion relation of the fundamental plasmonic mode TM0 in the MIM waveguide is given by [26]: tanh(ki w/2) = -(εi km)/(εm ki), with ki = (β^2 - εi k0^2)^{1/2} and km = (β^2 - εm k0^2)^{1/2}, (1) where εm and εi are the dielectric constants of the silver and air, respectively, and k0 is the wave vector of light in vacuum. The effective index neff = β/k0 can be calculated from Equation (1). The real part of neff as a function of ω and λ is shown in Figure 2. When ω is determined, Re(neff) changes little with increasing incident wavelength λ. For its low absorption, silver is chosen as the metal in the MIM structure; its permittivity is given by the well-known Drude model [27]: εm(ω) = ε∞ - ωp^2/(ω^2 + iγω), (2) where ε∞ = 3.7 is the dielectric constant at infinite frequency, γ = 2.73 × 10^13 Hz is the electron collision frequency, ωp = 1.38 × 10^16 Hz is the bulk plasma frequency, and ω stands for the angular frequency of the incident electromagnetic radiation. For the central wavelength of the stop-band, the well-known Bragg condition formulating the Bragg wavelength is as follows: neff1 w1 + neff2 w11 = m λ0/2, (3) where neff1 and neff2 are the MIM waveguide mode effective indices of the larger and smaller segments of the Bragg grating, w1 (w2) and w11 (w21) represent the adjacent widths of the grating in a period, m is an integer, and λ0 is the Bragg wavelength. From the equation above, we know that the central wavelength of the Bragg grating can easily be tuned by controlling the values of w1 (w2) and w11 (w21) or by altering the difference between the effective indices inside the grating. We start our discussion by keeping grating one equal to grating two. The two-dimensional numerical simulations are carried out using the FEM.
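As a numerical companion to Eq. (2), the sketch below evaluates the Drude permittivity of silver with the parameters quoted above. Note that the text quotes ωp and γ in Hz while the snippet treats them as angular frequencies (rad/s), the convention these values usually follow in MIM papers; that interpretation is an assumption.

import numpy as np

EPS_INF, OMEGA_P, GAMMA = 3.7, 1.38e16, 2.73e13  # parameters from Eq. (2)

def drude_eps(wavelength_m):
    """Drude permittivity of silver at a given vacuum wavelength (m)."""
    omega = 2.0 * np.pi * 3.0e8 / wavelength_m   # angular frequency (rad/s)
    return EPS_INF - OMEGA_P**2 / (omega**2 + 1j * GAMMA * omega)

print(drude_eps(1.0e-6))   # permittivity of silver near 1 um, roughly -50 + 0.8j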
Results and Discussion In order to realize the sensing application of the proposed MIM plasmonic Bragg grating, we optimize the structure parameters by investigating their effects on the transmission spectra. Figure 3a shows the transmission spectra of the MIM-based Bragg grating with different grating groove depths, while w1 = w2 = 200 nm and n = 8 are fixed. With increasing depth, the central wavelength displays a red shift varying from 1 µm to 1.05 µm, and the band gap widens dramatically. Some sidelobes appear outside the band gap, probably owing to light scattering at the abruptly terminated boundary at the end of the Bragg gratings. It is obvious that, with an increase in the Bragg period number, the dip gets deeper, which is similar to the fiber Bragg grating, but the Bragg wavelength stays almost unchanged, as shown in Figure 3b.
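The position of this stop-band can be estimated directly from the Bragg condition of Eq. (3). In the sketch below, the effective indices are hypothetical placeholders; in the paper they follow from the dispersion relation of Eq. (1).

def bragg_wavelength(n_eff1, w1, n_eff2, w11, m=1):
    """Bragg wavelength from Eq. (3): n_eff1*w1 + n_eff2*w11 = m*lambda_0/2."""
    return 2.0 * (n_eff1 * w1 + n_eff2 * w11) / m

# Example: 200 nm segments with illustrative effective indices,
# giving a central wavelength near the ~1 um stop-band of Figure 3a.
print(bragg_wavelength(1.4, 200e-9, 1.1, 200e-9))  # ~1.0e-6 m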
However, the total period number of the MIM-based grating is smaller than that of the fiber Bragg grating (FBG), because the effective index modulation strength of the MIM grating is much higher than that of the FBG. Two reasons motivate decreasing the total period number: first, the transmission loss will be lower after long-distance metal absorption; second, more compact structures can be achieved, which is desirable for highly integrated circuit structures. Obviously, as the period number goes down, the dip gets shallower and the attenuation gets lower. In order to clearly distinguish the Bragg wavelength at the transmission dip, we choose n = 8 for the following simulations, if not specifically mentioned. In addition, it is important to point out that a large period number also causes a great decrease in band-pass transmission efficiency. As we know, the width modulation can be achieved by altering structure parameters. Thus, we investigate the parameter effects on the transmission properties. From the Bragg condition, the central wavelength of the stop-band depends on the values of the structure parameters w1 and w2 with the effective indices remaining unchanged. We choose n = 8 for both gratings and w1 equal to w2, changing from 170 nm to 290 nm in steps of 30 nm. As can be seen from Figure 4a, the Bragg wavelength increases with the expansion of the width, which is in accordance with Equation (3). The transmission spectra vs. wavelength for different widths w2 are depicted in Figure 4b, where w1 is fixed at 80 nm and the other parameters stay the same. As the value of w2 increases from 80 nm to 320 nm, the first band gaps of the four spectra overlap at around 700 nm, but the central wavelength of the second one exhibits a red shift and the band gap becomes wider, which also agrees well with Equation (3). The mostly linear responses in Figure 4c,d clearly depict the central wavelength as a function of the grating width in the two conditions mentioned above. Then, we insert a nano-cavity in the center of the structure to better understand the characteristics of the MIM-based grating, as depicted in Figure 5a, where n = 6 and w1 = w2 = 200 nm are used for the simulation, with the other parameters unchanged from the beginning of our discussion. After introducing a defect into this structure, a narrow transmission peak appears in the center of the band gap, illustrated by the solid orange line in Figure 5b. The solid blue line, depicting the transmission spectrum of the previous structure without the defect, is also shown for comparison. Figure 5b shows that the wavelength of the new electromagnetic mode is 1 µm, which is exactly the central wavelength of the rejection band without the defect when we choose d = 400 nm. The Q factor of the structure with a defect, Q = λ0/Δλ, where λ0 and Δλ represent the central resonance wavelength and the full width at half-maximum (FWHM) of the defect mode, respectively, describes the ratio of the energy stored in the defect at resonance to the energy escaping from the cavity per cycle of oscillation. According to this definition, the Q factor for the cavity in Figure 5b is approximately 20.
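A simple way to extract this Q factor from a simulated transmission spectrum is sketched below; it assumes a single, well-resolved defect peak, and the spectrum arrays are placeholders.

import numpy as np

def q_factor(wavelengths, transmission):
    """Estimate Q = lambda_0 / FWHM from the defect-mode transmission peak."""
    i0 = int(np.argmax(transmission))             # peak position lambda_0
    half = transmission[i0] / 2.0
    above = np.where(transmission >= half)[0]     # samples above half maximum
    fwhm = wavelengths[above[-1]] - wavelengths[above[0]]
    return wavelengths[i0] / fwhm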
Before starting the discussion about width, we first look into the graded height of the Bragg grating. Here, we only change the height of grating two (h2). In Figure 6, we can see that the transmission maximum occurs at the same Bragg wavelength, approximately 1200 nm, but the peak value drops sharply as the grating gets deeper, and when h2 = 40 nm, three times deeper than h1, it almost disappears. Meanwhile, the Q factor drops quickly, from 30 to 20, which is partly caused by the large energy loss in the transmission process. Therefore, choosing h1 and h2 equal to 10 nm is a reasonable choice for better understanding our structure. Then, the sensing characteristics of the presented structure are investigated by changing the width of the grating. As expected, the transmission window in the stop-band shows a red shift when we increase both widths together, as depicted in Figure 7a. However, in Figure 7b, once one width is fixed and the other altered, there is no peak, which means no wavelength can be sensed. As we can see from Figure 8, the spectra are red-shifted, and the peak of the curve moves towards longer wavelengths with an increase of the defect length or of the refractive index of the material to be sensed. The Q factor shows an increasing trend because λ0 remains unchanged. Importantly, a stronger sensitivity of the sensor is obtained. For example, the Q factor for the defect cavity is found to be around 24 in Figure 8a with w1 = w2 = 200 nm and h1 = h2 = 10 nm, which is higher than the value in Figure 5b.
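A common figure of merit implied by this behavior is the refractive-index sensitivity S = Δλpeak/Δn. The sketch below estimates it from two peak readings; the numbers are hypothetical and purely illustrative.

def sensitivity_nm_per_riu(lam1_nm, n1, lam2_nm, n2):
    """Sensitivity S = (lambda_2 - lambda_1) / (n2 - n1), in nm per RIU."""
    return (lam2_nm - lam1_nm) / (n2 - n1)

# Hypothetical peak shift from n = 1.00 to n = 1.10:
print(sensitivity_nm_per_riu(1010.0, 1.00, 1090.0, 1.10))  # 800 nm/RIU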
Figure 9a exhibits the electric field at the peak wavelength λ = 1010 nm. Obviously, the defect leads to the appearance of the resonance mode, whose energy is mostly concentrated in the central defect area, as expected. The contour profiles of the normalized electric field distributions corresponding to the transmission dips and passes for the defect length d = 200 nm are depicted in Figure 9b,c. Conclusions To sum up, we studied the sensing characteristics of a structure based on a MIM waveguide. We investigated the influence of the period and height of the two gratings to find better structure parameters and also looked into the transmission spectra of the proposed structure with different widths. A defect was introduced in the middle of the MIM Bragg grating mentioned above, and the effects of the width and of a graded height on the transmission spectra were discussed, respectively. The other important parameters of the structure, such as the effective index and the length of the defect, were investigated in detail. As expected, the majority of the spectra that we examined are red-shifted in our specific discussion. More importantly, the Q factor shows a slight increase when we change the parameters, and the characteristics exhibited in the spectra lead to the conclusion that we can choose the wavelength to be sensed by altering the structure parameters. In addition, the proposed structures, with their sensitivity and ultra-narrow light reflection response, could be significant for applications in ultra-compact plasmonic devices, such as active polarization-adjusted multispectral color filtering, displaying and imaging, spectrally selective light reflection, and high-performance plasmonic sensing in the fields of gas detection and medical diagnostics. All these analyses provide guidelines for the design of optical filters, sensors, and other devices.
5,593
2016-05-28T00:00:00.000
[ "Engineering", "Materials Science", "Physics" ]
Transcriptome-Wide Characterization of piRNAs during the Developmental Process of European Honey-Bee Larval Guts piRNAs play pivotal roles in maintaining genome stability, regulating gene expression, and modulating development and immunity. However, there are few piRNA-associated studies on honey-bees, and the regulatory role of piRNAs in the development of bee guts is largely unknown. Here, the differential expression pattern of piRNAs during the developmental process of the European honey-bee (Apis mellifera) larval guts was analyzed, followed by investigation of the regulatory network and the potential function of differentially expressed piRNAs (DEpiRNAs) in regulating gut development. A total of 843 piRNAs were identified in the larval guts of A. mellifera; among these, 764 piRNAs were shared by 4- (Am4 group), 5- (Am5 group), and 6-day-old (Am6 group) larval guts, while 11, 67, and one, respectively, were unique. The first base of piRNAs in each group had a cytosine (C) bias. Additionally, 61 up-regulated and 17 down-regulated piRNAs were identified in the "Am4 vs. Am5" comparison group, further targeting 9,983 genes, which were involved in 50 GO terms and 142 pathways, while two up-regulated and five down-regulated piRNAs were detected in the "Am5 vs. Am6" comparison group, further targeting 1,936 genes, which were engaged in 41 functional terms and 101 pathways. piR-ame-742536 and piR-ame-856650 in the "Am4 vs. Am5" comparison group as well as piR-ame-592661 and piR-ame-31653 in the "Am5 vs. Am6" comparison group were found to link to the highest number of targets. Further analysis indicated that targets of DEpiRNAs in these two comparison groups putatively regulate seven development-associated signaling pathways, seven immune-associated pathways, and three energy metabolism pathways. Moreover, the expression trends of five randomly selected DEpiRNAs were verified based on stem-loop RT-PCR and RT-qPCR. These results were suggestive of the overall alteration of piRNAs during the larval developmental process and demonstrated that DEpiRNAs potentially modulate development-, immune-, and energy metabolism-associated pathways by regulating the expression of corresponding genes via target binding, further affecting the development of A. mellifera larval guts. Our data offer a novel insight into the development of bee larval guts and lay a basis for clarifying the underlying mechanisms. Introduction Piwi-interacting RNAs (piRNAs) are a type of small non-coding RNA (ncRNA), with a length distribution from 24 nucleotides (nt) to 32 nt [1]. Accumulating evidence shows that piRNAs play critical roles in suppressing transposons and maintaining genome stability [2,3]. Dissimilar to miRNAs, piRNAs are transcribed from single-stranded RNA via a Dicer-independent mechanism and function by interacting with P-element-induced wimpy testis (PIWI) proteins. Source of sRNA-seq Data The larval guts of A. m. ligustica that were 4-, 5-, and 6-days-old (Am4, Am5, and Am6 groups) were previously prepared. Each of the aforementioned three groups contained three larval guts, and three biological replicates were used in this experiment [22]. The Am4, Am5, and Am6 groups were subjected to cDNA library construction and deep sequencing using small RNA-seq (sRNA-seq) technology, where 38,011,613, 43,967,518, and 39,523,034 raw reads were produced, and after quality control, 32,524,933, 36,113,035, and 27,691,488 clean tags were obtained, respectively.
The Pearson correlation coefficients between the three biological replicates within each group were above 98.22% [23]. The raw data generated from sRNA-seq were deposited in the NCBI SRA database under the BioProject number PRJNA408312. Identification and Investigation of piRNAs A. m. ligustica piRNAs were identified according to our previously described protocol: (1) the clean reads were mapped to the reference genome of A. mellifera (Assembly Amel_4.5), and the mapped clean reads were further aligned to the GenBank and Rfam (11.0) databases to remove small ncRNAs including rRNA, scRNA, snoRNA, snRNA, and tRNA; (2) miRNAs were filtered out from the remaining clean reads; and (3) sRNAs with a length distribution from 24 nt to 33 nt were screened out based on the length characteristics of piRNAs, and only those that aligned to a unique position were retained as candidate piRNAs. Next, the first-base bias of piRNAs in each group was summarized on the basis of the prediction result. Target Prediction and Analysis of DEpiRNAs The expression level of each piRNA was normalized to tags per million (TPM) following the formula TPM = T × 10^6 / N, where T denotes the clean reads of a given piRNA and N denotes the clean reads of total sRNA. The fold change of the expression level of each piRNA between two different groups was determined following the formula (TPM in Am5)/(TPM in Am4) or (TPM in Am6)/(TPM in Am5). On the basis of the criteria p-value ≤ 0.05 and |log2(fold change)| ≥ 1, DEpiRNAs in the "Am4 vs. Am5" and "Am5 vs. Am6" comparison groups were screened out. TargetFinder software was used to predict the target genes of DEpiRNAs [24]. The targets were aligned to the GO (https://www.geneontology.org/, (accessed on 7 October 2022)) and KEGG (https://www.genome.jp/kegg/, (accessed on 7 October 2022)) databases using the BLAST tool to obtain the corresponding annotations. Construction and Analysis of Regulatory Network of DEpiRNAs The gut of insects, including the honey-bee, is not only a key organ for food digestion and nutrition absorption [25] but also a pivotal site for immune defense and host-pathogen interactions [26]. In addition, the developmental process of the bee gut was suggested to be accompanied by the development of immunity and energy metabolism [22]. Therefore, DEpiRNAs relevant to immune and energy metabolism pathways, as well as the corresponding regulatory networks, were further investigated in the current work. Following the KEGG pathway annotations, the target genes annotated in development-, immune-, and energy metabolism-associated signaling pathways were further surveyed, respectively, and the threshold for screening the targeted binding relationship was set as a binding free energy of less than −15 kcal/mol; the regulatory networks were constructed based on the targeting relationship between DEpiRNAs and genes, followed by visualization utilizing Cytoscape software [27] with default parameters. Validation of DEpiRNAs by Stem-Loop RT-PCR The total RNA from 4-, 5-, and 6-day-old A. m. ligustica larval guts was extracted using a FastPure® Cell/Tissue Total RNA Isolation Kit V2 (Vazyme, Nanjing, China). The concentration and purity of RNA were checked with a Nanodrop 2000 spectrophotometer (Thermo Fisher, Waltham, MA, USA). A total of five DEpiRNAs were randomly selected for stem-loop RT-PCR validation, including four (piR-ame-1146560, piR-ame-1183555, piR-ame-387266, and piR-ame-856650) from the "Am4 vs. Am5" comparison group and one (piR-ame-592661) from the "Am5 vs. Am6" comparison group.
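For concreteness, the TPM normalization and fold-change screening described above can be sketched in a few lines of Python; the count table below is hypothetical, and the p-value criterion of the original pipeline is omitted for brevity:

import numpy as np
import pandas as pd

def tpm(counts: pd.Series) -> pd.Series:
    # TPM = T * 10^6 / N, with T the clean reads of one piRNA and N the group total.
    return counts * 1e6 / counts.sum()

# Tiny hypothetical count table: rows = piRNAs, columns = groups.
counts = pd.DataFrame(
    {"Am4": [120, 30, 800], "Am5": [15, 60, 790]},
    index=["piR-a", "piR-b", "piR-c"],
)
norm = counts.apply(tpm)

# Fold change Am5 vs. Am4, screened at |log2(fold change)| >= 1; a count-based
# test supplying p <= 0.05 would be applied as the second criterion.
log2_fc = np.log2(norm["Am5"] / norm["Am4"])
print(log2_fc[np.abs(log2_fc) >= 1])  # piR-a (down) and piR-b (up) survive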
Specific stem-loop primers and forward primers (F) as well as universal reverse primers (R) were designed using DNAMAN software and then synthesized by Sangon Biotech Co., Ltd. (Shanghai, China). According to the instructions of the HiScript® 1st Strand cDNA Synthesis Kit, cDNA was synthesized by reverse transcription using stem-loop primers and used as a template for PCR of DEpiRNAs. Reverse transcription was performed using a mixture of random primers and oligo (dT) primers, and the resulting cDNA was used as a template for PCR of the reference gene snRNA U6. The PCR system (20 µL) contained 1 µL of diluted cDNA, 10 µL of PCR mix (Vazyme, Nanjing, China), 1 µL of forward primer, 1 µL of reverse primer, and 7 µL of diethyl pyrocarbonate (DEPC) water. The PCR was conducted on a T100 thermocycler (Bio-Rad, Hercules, CA, USA) under the following conditions: pre-denaturation at 95 °C for 5 min; 40 amplification cycles of denaturation at 95 °C for 10 s, annealing at 55 °C for 30 s, and elongation at 72 °C for 1 min; followed by a final elongation step at 72 °C for 10 min. The amplification products were detected by 1.8% agarose gel electrophoresis with Genecolor (Gene-Bio, Shenzhen, China) staining. Verification of DEpiRNAs by RT-qPCR The RT-qPCR was carried out following the protocol of the SYBR Green Dye kit (Vazyme, Nanjing, China). The reaction system (20 µL) included 1.3 µL of cDNA, 1 µL of forward primer, 1 µL of reverse primer, 6.7 µL of DEPC water, and 10 µL of SYBR Green Dye. RT-qPCR was conducted on an Applied Biosystems QuantStudio 3 Real-Time PCR System (Thermo Fisher, Waltham, MA, USA) under the following conditions: pre-denaturation at 95 °C for 5 min; 40 amplification cycles of denaturation at 95 °C for 10 s, annealing at 60 °C for 30 s, and elongation at 72 °C for 15 s; followed by a final elongation step at 72 °C for 10 min. All reactions were performed in triplicate. The relative expression of each piRNA was calculated using the 2^-ΔΔCt method [28]. Detailed information about the primers used in this work is shown in Table S1. Statistical Analysis Statistical analyses were conducted with SPSS software (IBM, Armonk, NY, USA) and GraphPad Prism 7.0 software (GraphPad, San Diego, CA, USA). Data are presented as the mean ± standard deviation (SD). Statistical comparisons were performed using Student's t-test. Significant (p < 0.05) GO terms and KEGG pathways were filtered by performing Fisher's exact test with R software 3.3.1 [29,30]. Identification and Characterization of piRNAs in A. m. ligustica Larval Guts A total of 843 piRNAs were identified in the larval guts of A. m. ligustica; among these, 764 piRNAs were shared by the Am4, Am5, and Am6 groups, while 11, 67, and one, respectively, were unique. Additionally, the first base of piRNAs in the Am4, Am5, and Am6 groups had a C bias (Figure 1A). Further investigation showed that the length distribution of the identified piRNAs in the Am4, Am5, and Am6 groups was from 24 nt to 33 nt (Figure 1B), similar to the findings in other animals such as the Mongolian horse and Scylla paramamosain [31,32].
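As a brief illustration of the 2^-ΔΔCt method used in the RT-qPCR verification described above (the Ct values here are invented for illustration; snRNA U6 serves as the reference, as in the protocol):

def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    # 2^-ddCt: dCt = Ct(piRNA) - Ct(U6); ddCt = dCt(sample) - dCt(calibrator).
    d_ct_sample = ct_target - ct_ref
    d_ct_calibrator = ct_target_cal - ct_ref_cal
    return 2.0 ** -(d_ct_sample - d_ct_calibrator)

# Hypothetical Ct values: a piRNA in 5-day-old guts (sample) relative to
# 4-day-old guts (calibrator).
print(relative_expression(24.1, 18.0, 25.6, 18.2))  # ~2.5-fold up-regulation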
Am5" comparison group can target 9, 983 genes, which could be annotated to 20 biological process-related GO terms such as cellular processes and RNA biosynthetic processes, 11 molecular function-related GO terms such as cation channel activity and cation binding, and 19 cellular component-related GO terms such as cell and membrane parts ( Figure 3A). DEpiRNAs in the "Am5 vs. Am6" comparison group can target 1, 936 genes, and these targets could be annotated to a total of 41 GO terms, including cation transport, cation channel activity, and cells. (Figure 3B). Target Prediction and Annotation of DEpiRNA DEpiRNAs in the "Am4 vs. Am5" comparison group can target 9, 983 genes, which could be annotated to 20 biological process-related GO terms such as cellular processes and RNA biosynthetic processes, 11 molecular function-related GO terms such as cation channel activity and cation binding, and 19 cellular component-related GO terms such as cell and membrane parts ( Figure 3A). DEpiRNAs in the "Am5 vs. Am6" comparison group can target 1, 936 genes, and these targets could be annotated to a total of 41 GO terms, including cation transport, cation channel activity, and cells. (Figure 3B). In addition, the target genes of DEpiRNA in the "Am4 vs. Am5" comparison group could be annotated to 142 pathways such as Wnt signaling pathway, propanoate metabolism, and Hippo signaling pathway ( Figure 4A). Those in the "Am5 vs. Am6" comparison group can be annotated to 101 pathways including the Hippo signaling pathway, RNA degradation, and phototransduction ( Figure 4B). In addition, the target genes of DEpiRNA in the "Am4 vs. Am5" comparison group could be annotated to 142 pathways such as Wnt signaling pathway, propanoate metabolism, and Hippo signaling pathway ( Figure 4A). Those in the "Am5 vs. Am6" comparison group can be annotated to 101 pathways including the Hippo signaling pathway, RNA degradation, and phototransduction ( Figure 4B). Investigation of Regulatory Network between DEpiRNAs and Target Genes In the "Am4 vs. Am5" comparison group, 54 up-regulated piRNAs could target 9, 398 genes, while 14 down-regulated piRNAs could target 3, 606 genes; each of these DEpiRNAs can target more than two genes, with piR-ame-742536 and piR-ame-856650 binding to the highest number of target genes (1, 421 and 1, 437). Additionally, two upregulated piRNAs in the "Am5 vs. Am6" comparison group could target 604 genes, whereas four down-regulated piRNAs could target 1, 473 genes. Each of these DEpiRNAs Investigation of Regulatory Network between DEpiRNAs and Target Genes In the "Am4 vs. Am5" comparison group, 54 up-regulated piRNAs could target 9, 398 genes, while 14 down-regulated piRNAs could target 3, 606 genes; each of these DEpiR-NAs can target more than two genes, with piR-ame-742536 and piR-ame-856650 binding to the highest number of target genes (1, 421 and 1, 437). Additionally, two up-regulated piRNAs in the "Am5 vs. Am6" comparison group could target 604 genes, whereas four down-regulated piRNAs could target 1, 473 genes. Each of these DEpiRNAs can target more than two genes, with piR-ame-592661 and piR-ame-31653 linking to the highest number of target genes (447 and 839). 
The regulatory network was constructed and analyzed, and the results showed that 202 and 58 target genes in the above-mentioned two comparison groups were involved in seven development-associated signaling pathways such as the Hippo, Notch, and mTOR signaling pathways, whereas 255 and 39 targets were engaged in seven immune-associated pathways including endocytosis, the Jak/STAT signaling pathway, and ubiquitin-mediated proteolysis (Figure 5A). Additionally, 33 and 12 targets were found to be enriched in three energy metabolism pathways, namely sulfur metabolism, nitrogen metabolism, and oxidative phosphorylation (Figure 5B). (In the network figures, green circles indicate target genes of DEpiRNAs in the "Am5 vs. Am6" comparison group, while pink squares indicate signaling pathways.) Detailed information about the targeting relationships between DEpiRNAs and genes relative to the development, immune, and energy metabolism pathways is shown in Table S3. Stem-Loop RT-PCR and RT-qPCR Verification of DEpiRNA The stem-loop RT-PCR results indicated that fragments of the expected size (about 60-80 bp) were amplified from the five randomly selected DEpiRNAs (Figure 6), which verified the expression of these DEpiRNAs in the A. m. ligustica larval gut. Further, the RT-qPCR results suggested that the expression trends of these five DEpiRNAs were consistent with the sRNA-seq datasets, confirming the reliability of our transcriptome data (Figure 7). Discussion Here, sRNA-seq datasets derived from 4-, 5-, and 6-day-old A. mellifera larval guts were used following two major considerations: (1) The larval stage of the honey-bee lasts for 6 days; 1- and 2-day-old larvae are very small, and manual transfer is likely to cause larval death. It was found that, after artificial transfer of 3-day-old bee larvae to 24-well culture plates, the larvae can maintain a high survival rate up to 6 days old in a constant temperature and humidity chamber under lab conditions [33]. (2) We had already performed deep sequencing of 4-, 5-, and 6-day-old A. m.
ligustica larval guts using sRNA-seq, and had deciphered the differential expression profile of miRNAs and the putative roles of DEpiRNAs in the regulation of larval gut development based on the obtained high-quality sequencing data [23]. Our team previously predicted 596 piRNAs in the A. m. ligustica workers' midguts based on bioinformatics [34]. In the current work, 843 piRNAs were identified in the larval guts of A. m. ligustica, with a length distribution from 24 nt to 33 nt. Further analysis showed that as many as 519 (61.57%) piRNAs were shared by the workers' midguts and larval guts of A. m. ligustica, whereas the numbers of unique piRNAs were 324 and 77, respectively. It is inferred that the shared piRNAs are likely to play a fundamental role across various developmental stages of the A. m. ligustica gut, while the unique piRNAs may play different roles at different developmental stages. In view of the limited information on bee piRNAs, the piRNAs identified in the present study further enrich the reservoir of piRNAs in the European honey-bee and offer a valuable genetic resource for related studies on other bee species. In animals, piRNAs have been verified to participate in the regulation of growth, development, and embryogenesis. Based on the overexpression and knockdown of piRNA-3312, Guo et al. [35] found that piRNA-3312 targeted the gut esterase 1 gene to decrease the pyrethroid resistance of Culex pipiens pallens. Praher et al. [36] found that piRNAs were significantly differentially expressed in the early developmental stages of Nematostella vectensis, indicative of the regulatory role of piRNAs in development. Here, 78 and seven DEpiRNAs were observed in the "Am4 vs. Am5" and "Am5 vs. Am6" comparison groups, respectively, indicating that the developmental process of the A. m. ligustica larval gut was accompanied by the differential expression of piRNAs, and these DEpiRNAs may be engaged in regulating the development of the A. m. ligustica larval gut. DEpiRNAs in the "Am4 vs. Am5" and "Am5 vs. Am6" comparison groups were found to target 9,983 and 1,936 genes, respectively. These target genes were involved in metal ion transport and calcium ion transport terms relative to biological process, membrane part and membrane terms relative to cellular component, and cation channel activity and ion channel activity terms relative to molecular function. Additionally, the targets of DEpiRNAs in the aforementioned two comparison groups were involved in four and three development-associated terms, such as metabolic process and developmental process, and three and two immune-associated terms, such as immune system process and response to stimulus. Targets in these two comparison groups were engaged in 142 and 101 KEGG pathways, including fatty acid metabolism and propanoate metabolism relative to metabolism, the mRNA surveillance pathway and RNA degradation relative to genetic information processing, and lysosomes and endocytosis relative to cellular processes. Further analysis indicated that the targets were engaged in seven and seven development-related pathways such as the Hippo signaling pathway and the Wnt signaling pathway, seven and seven immune-related pathways such as endocytosis and the Jak/STAT signaling pathway, as well as three and three energy metabolism-related pathways such as nitrogen metabolism and sulfur metabolism. These results demonstrate that DEpiRNAs exert a potential regulatory function in the A. m.
ligustica larval guts by affecting many biological processes including development, immune defense, and energy metabolism. The immune system in insects is composed of the humoral immune system, dominated by several signaling pathways such as Imd/Toll, JAK/STAT, JNK, and insulin, and the cellular immune system, represented by phagocytosis, melanization, autophagy, and apoptosis [43]. The JAK/STAT signaling pathway is not only implicated in regulating cell growth, differentiation, apoptosis, and inflammatory immunity but also participates in gut immunity via the modulation of intestinal stem cell proliferation and epithelial cell renewal [43,44]. Here, we observed that JAK/STAT-signaling-pathway-related genes were targeted by 54 DEpiRNAs (piR-ame-742536, piR-ame-1183555, and piR-ame-1233036, etc.) in the "Am4 vs. Am5" comparison group and one DEpiRNA (piR-ame-1243913) in the "Am5 vs. Am6" comparison group, suggestive of the involvement of the corresponding DEpiRNAs in the regulation of immune defense in the larval guts. Tomato yellow leaf curl virus has been proven to enter whitefly Bemisia tabaci midgut epithelial cells through receptor-mediated clathrin-dependent endocytosis [45]. Zhang et al. [46] reported that the inhibition of endocytosis induced the proliferation of Drosophila intestinal stem cells and massive gut hyperplasia, which further affected intestinal development and lifespan. In this study, endocytosis-associated genes were found to be targeted by 60 DEpiRNAs in the "Am4 vs. Am5" comparison group, including piR-ame-14055 and piR-ame-456655, and five DEpiRNAs in the "Am5 vs. Am6" comparison group, such as piR-ame-31653 and piR-ame-1173337, indicating that the corresponding DEpiRNAs potentially regulate endocytosis during the development of the larval guts. Together, these results demonstrate that DEpiRNAs, as potential regulators, participate in the development of A. m. ligustica larval guts. More efforts are required to elucidate the regulatory function of these DEpiRNAs. Several groups have confirmed the feasibility and reliability of performing functional investigations of piRNAs using a technical platform similar to that for miRNAs, e.g., the overexpression and knockdown of a piRNA through feeding or injecting mimics and inhibitors [35,37]. In the near future, on the basis of the findings in this work, we will select key DEpiRNAs and perform overexpression and knockdown via feeding corresponding mimics and inhibitors to uncover their functions in the development of larval guts. Conclusions Taken together, 843 piRNAs were, for the first time, identified in the A. m. ligustica larval guts, and the first base of A. m. ligustica piRNAs had a C bias. A total of 78 piRNAs were differentially expressed in 5-day-old larval guts compared with 4-day-old larval guts, while only seven DEpiRNAs were detected in 6-day-old larval guts compared with 5-day-old larval guts. Additionally, these DEpiRNAs could target 9,983 and 1,936 genes, respectively, which were engaged in 50 and 41 functional terms such as developmental process and immune system process and in 142 and 101 KEGG pathways such as the Wnt signaling pathway and endocytosis. Moreover, some DEpiRNAs may modulate the expression of corresponding target genes in the A. m. ligustica larval guts, further affecting sulfur metabolism, nitrogen metabolism, oxidative phosphorylation, endocytosis, and ubiquitin-mediated proteolysis as well as the Hippo, Notch, mTOR, and Jak/STAT signaling pathways.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/genes13101879/s1, Table S1: Primers that were used in this study, Table S2: Detailed information about the identified DEpiRNAs in A. mellifera larval guts, Table S3: Detailed information about the targeting relationship between DEpiRNAs and genes relative to development, immune, and energy metabolism pathways.
5,215
2022-09-09T00:00:00.000
[ "Biology", "Environmental Science" ]
Fostering Cross-Disciplinary Earth Science Through Datacube Analytics Introduction The term "Big Data" is a contemporary shorthand characterizing data which are too large, too short-lived, too heterogeneous, or too complex to be understood and exploited. Technologically, this is a cross-cutting challenge affecting both storage and processing, data and metadata, servers and clients, as well as mash-ups. Further, making new, substantially more powerful tools available for simple use by non-experts while not constraining the complex tasks of experts just adds to the complexity. All this holds for many application domains, but specifically so for the field of Earth Observation (EO). With the unprecedented increase of orbital sensor, in-situ measurement, and simulation data there is a rich, yet largely unleveraged, potential for getting insights from dissecting datasets and rejoining them with other datasets. The stated goal is to enable users to "ask any question, any time, on any volume", thereby enabling them to "build their own product on the go". In the field of EO, one of the most influential initiatives towards this goal is EarthServer [9][18], which has demonstrated new directions for flexible, scalable EO services based on innovative NoSQL technology. Researchers from Europe, the US, and Australia have teamed up to rigorously materialize the concept of the datacube. Such a datacube may have spatial and temporal dimensions (such as a satellite image timeseries) and may unite an unlimited number of single images. Independently of whatever efficient data structuring a server network may perform internally on the millions of hyperspectral images and hundreds of climate simulations, users will always see just a few datacubes they can slice and dice. EarthServer has established a slate of services for such spatio-temporal datacubes based on the scalable array engine rasdaman, which enables direct interaction, including 3-D visualization, what-if scenarios, common EO data processing, and general analytics. All services strictly rely on the open OGC data and service standards for "Big Geo Data", the Web Coverage Service (WCS) suite. In particular, the Web Coverage Processing Service (WCPS) geo raster query language has proven instrumental as a client data programming language which can be hidden behind appealing visual interfaces. Indeed, EarthServer has advanced these standards based on the experiences gained. The OGC WCS standards suite in its current, comprehensive state has been largely shaped by EarthServer, which provides the Coverages, WCS, and WCPS standards editor and working group chair. The feasibility evidence provided by EarthServer has contributed to the uptake of WCS by open-source and commercial implementers; meantime, OGC WCS has entered the adoption process for ISO and INSPIRE. Phase 1 of EarthServer ended in 2014 [9]; independent experts concluded, based on "proven evidence", that rasdaman will "significantly transform the way that scientists in different areas of Earth Science will be able to access and use data in a way that hitherto was not possible". And "with no doubt" this work "has been shaping the Big Earth Data landscape through the standardization activities within OGC, ISO and beyond". In Phase 2, which started in May 2015, this is being advanced even further: from the 100 TB databases achieved in Phase 1, the next frontier will be crossed by building Petabyte datacubes for ad-hoc querying and fusion (Figure 1).
In this contribution we present the status and intermediate results of EarthServer and outline its impact on the international standards landscape. Further, we highlight opportunities established through technological advances and show how future services can cope better with the Big Data challenge in EO. The remainder of this contribution is organized as follows. In Section 2.1.2, the concepts of the OGC datacube and its services are introduced. An initial set of services in the federation is presented in Section 2.1.3, followed by an introduction to the underlying technology platform and an evaluation in Section 2.1.4. Section 2.1.5 concludes the paper and presents an outlook. Standards-Based Modelling of Datacubes EarthServer relies on the OGC "Big Earth Data" standards, WCS and WCPS, for any kind of access (standards which, due to their success, are meantime also under adoption by ISO and INSPIRE); additionally, WMS is offered. In the server, all such requests uniformly get mapped to an array query language (see the section "Datacube Analytics Technology" below). Advanced visual clients provide point-and-click interfaces effectively hiding the query language syntax, except when experts want to make use of it. Additionally, access through expert tools like Python notebooks is being finalized. At the heart of the EarthServer conceptual model is the concept of coverages as digital representations of space/time-varying phenomena as per ISO 19123 (which is identical to OGC Abstract Topic 6) [24]. Practically speaking, coverages encompass regular and irregular grids, point clouds, and general meshes (Fig. 2). The notion of coverages [48][6][8] has proven instrumental in unifying spatio-temporal regular and irregular grids, point clouds, and meshes so that such data can be accessed and processed through a simple, yet flexible and interoperable service paradigm. By separating the coverage data and service models, any service, such as WMS, WFS, SOS, and WPS, can provide and consume coverages. That said, the Web Coverage Service (WCS) standard offers the most comprehensive, streamlined functionality [32]. This modular suite of specifications starts with fundamental data access in WCS Core and has various extensions adding optionally implementable functionality facets, up to server-side analytics based on the Web Coverage Processing Service (WCPS) geo datacube language [5]. Below we introduce the OGC coverage data and service model with an emphasis on practical aspects and illustrate how they enable high-performance, scalable implementations. Coverage Data Model According to the common geo data model used by OGC, ISO, and others, objects with a spatial (and possibly temporal) reference are referred to as features. A special type of feature is the coverage, whose associated values vary over space and/or time, such as an image where each coordinate leads to an individual color value. Complementing the (abstract) coverage model of ISO 19123 on which it is based, the (concrete) OGC coverage data and service model [6] establishes verifiable interoperability, down to the pixel level, through the OGC conformance tests. While concrete, the coverage model is still independent from data format encodings, something which is of particular importance as it allows uniform handling of metadata and individual mappings to the highly diverse metadata handling of the various data formats.
The OGC coverage model (and likewise WCS) is meantime supported by most of the respective tools, such as open-source MapServer, GeoServer, OPeNDAP, and ESRI ArcGIS. In 2015, this successful coverage model was extended to allow any kind of irregular grid, resulting in the OGC Coverage Implementation Schema (CIS) 1.1 [8], which is in the final stage of adoption at the time of this writing. Different types of axes are made available for composing a multi-dimensional grid in a simple plug-and-play fashion. This effectively allows coverages to be represented concisely, ranging from unreferenced and regular grids over irregularly spaced axes (as often occurring in timeseries) and warped grids up to algorithmically determined warpings, such as those defined by SensorML 2.0. Web Coverage Service The OGC service definition specifically built for deep functionality on coverages is the Web Coverage Service (WCS) suite of specifications. With WCS Core [4], spatio-temporal subsetting as well as format encoding is provided; this Core must be supported by all implementations claiming conformance. Figure 4 illustrates the WCS subsetting functionality; Figure 5 shows the overall architecture of the WCS suite. Conformance testing of WCS implementations follows the same modularity approach and involves detailed checks, essentially down to the level of single cell (e.g., "pixel", "voxel") values [33]. Such results can conveniently be rendered through WebGL in a standard Web browser, or through NASA WorldWind (Figure 6). The WCPS syntax is close to SQL/MDA (see below), but with a flavor close to XQuery so as to allow integration with XPath and XQuery, which is being prepared by EarthServer (see the section on data/metadata integration further down). The Role of Standards As the hype dust settles down over "Big Data", the core contributing data structures and their particularities crystallize. In Earth Science data, these arguably are regular and irregular grids, point clouds, and meshes, reflected by the coverage concept. The unifying notion of coverages appears useful as an abstraction that is independent from data formats and their particularities while still capturing the essentials of spatio-temporal data. With CIS 1.1, the description of irregular grids has been simplified by not looking at the grids, but at the axis characteristics. While many services on principle can receive or deliver coverages, the WCS suite is specifically designed to not only work on the level of whole (potentially large) objects, but can address inside objects as well as filter and process them, ranging up to complex analytics with WCPS. The critical role of flexible, scalable coverage services for spatio-temporal infrastructures is recognized far beyond OGC, as the substantial tool support highlights. This has prompted ISO and INSPIRE to also adopt the OGC coverage and WCS standards, which is currently under way. Also, ISO is extending the SQL standard with n-D arrays [25][30]. The standards observing group of the US Federal Geographic Data Committee (FGDC) sees coverage processing a la WCS/WCPS as a future "mandatory standard". In parallel, work is continuing in OGC towards extending the coverage world with further data format mappings and adding further relevant functionality, such as flexible XPath-based coverage metadata retrieval. Finally, research is being undertaken on embedding coverages into the Geo Semantic Web [37], also supporting W3C, which has started studying coverages in the "Spatial Data on the Web" Working Group.
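As an illustration of the WCS Core subsetting described above, a GetCoverage request in KVP encoding can look roughly as follows; the endpoint and coverage name are hypothetical, while the parameter names follow WCS 2.0:

http://example.org/ows?service=WCS&version=2.0.1&request=GetCoverage
    &coverageId=MeanSummerTemperature
    &subset=Lat(40,50)&subset=Long(-10,10)
    &subset=ansi("2015-06-01","2015-08-31")
    &format=image/tiff

The server trims the coverage to the requested spatio-temporal bounding box and returns only that excerpt, here encoded as TIFF.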
A demonstration service for 1-D through 5-D coverages is available for studying the WCS/WCPS universe [43]. All the aforementioned service partners have set up domain-specific clients and data access portals which are continuously advanced and populated over the lifetime of the project so as to cross the Petabyte frontier for single services in 2017. Multiple service synergies will be explored which will allow users to query and analyze data stored on different project partners' infrastructures from a single entry point. An example of this is the Landsat service being developed jointly by MEEO and NCI. The specific data portals and access options are detailed in the following sections. Earth Observation Data Services The use of Earth Observation (EO) data is getting more and more challenging with the advent of the Sentinel era. The free, full, and open data policy adopted for the Copernicus programme foresees access to the Sentinel data products for all users. Terabytes of data and EO products are already generated every day from Sentinel-1 and Sentinel-2, and with the approaching launch of Sentinel-3/-4/-5P/-5, advanced access services become crucial to support the increasing data demand from the users. The Earth Observation Data Service [1][20] offers dynamic and interactive access functionalities to improve and facilitate access to massive Earth Science data: key technologies for data exploitation (Multisensor Evolution Analysis [28], rasdaman [44], NASA Web World Wind [31]) are used to implement effective geospatial data analysis tools empowered with the OGC standard interfaces for the Web Map Service (WMS) [35], Web Coverage Service (WCS) [34], and Web Coverage Processing Service (WCPS) [3]. With respect to traditional data exploitation approaches, the EO Data Service supports on-line data interaction, restructuring the typical workflow so that the download of the actual data of interest happens at the very end, with a significant reduction of data transfer. The EO Data Service currently provides in excess of 100 TB of ESA and NASA EO products (e.g. vegetation indexes, land surface temperature, precipitation, soil moisture, etc.) to support Atmosphere, Land, and Ocean applications. In the framework of the EarthServer-2 project, Big Data Analytics tools will be enabled on datacubes of Copernicus Sentinel and Third Party Mission (e.g. Landsat 8) products to support agile analytics, with the overall goal to offer 1 PB of data from this new generation of sensors through the service. Marine Science Data Service The marine data service (Marine Data Service) is focused on providing access to remote-sensed ocean data. The data available are from ocean colour satellites. The marine research community is well accustomed to using satellite data. Satellite data provide many benefits over in-situ observations: they have global coverage and provide consistent and accurate time series. The marine research community has recognized the benefit of long time series of data. Time series need to be consistent so that the data are comparable through the whole series. Remote-sensed data have helped to provide this consistency. The ESA OC-CCI project (Sathyendranath et al. 2012) is producing a time series of remote-sensed ocean colour parameters and associated uncertainty variables. Currently the available time series runs from 1997 to 2013 and represents one of fourteen subgroups of the overall ESA CCI project.
With the creation of these large time series, an increasingly technical challenge has emerged: how do users get benefit from these huge data volumes? The EarthServer project, through the use of a suite of technologies including rasdaman and several OGC standard interfaces, aims to address the issue of users having to transfer and store large data volumes by offering ad-hoc querying over the whole data catalog. Traditionally, a marine researcher would simply select the particular temporal and spatial subset of the dataset they require from a web-based catalog and download it to their local disk. This system has worked well but is becoming less feasible due to the increases in data volume and the increase in non-specialists wanting access to the data. Take for example a researcher interested in finding the average monthly chlorophyll concentration for the North Sea for the period 2000-2010. Traditional methodologies would require the download of several gigabytes of data. This represents a large time investment for the actual download as well as a cost associated with the storage and processing required (Clements and Walker 2014). With the same dataset made available through the EarthServer project, a researcher can simply write the analysis as a WCPS query (sketched below) and send that to the data service. The analysis is done at the data, and only the result is downloaded, in this case around 100 KB. This example outlines the clearest-cut advantage; however, there are further, less obvious benefits that could improve the way that researchers interact with and use data. One example would be the testing of novel algorithms that require access to the raw light reflectance data. These data are fed through existing algorithms to calculate derived products such as chlorophyll concentration, primary production, and carbon sequestration. The marine data service currently provides in excess of 70 TB of data. Through the course of the project we will be expanding the data offering to include data from the ESA Sentinel-3 Ocean and Land Colour Instrument (OLCI) [11]. The aim is to offer as much data from the sensor as is available, with the overall goal to offer 1 PB of data through the service. Climate Science Data Service The Climate Science Data Service is developed by the European Centre for Medium-Range Weather Forecasts (ECMWF). ECMWF hosts the Meteorological Archival and Retrieval System (MARS), the largest archive of meteorological data worldwide, with currently more than 90 PB of data (ECMWF 2014). As a Numerical Weather Prediction (NWP) centre, ECMWF primarily supports the meteorological community through well-established services for accessing, retrieving, and processing data from the MARS archive. Users outside the MetOcean domain, however, often struggle with the climate-specific conventions and formats, e.g., the GRIB data format. This limits the overall uptake of ECMWF data. At the same time, with data volumes in the range of Petabytes, data download for processing on users' local workstations is no longer feasible. ECMWF as a data provider has to find solutions to provide efficient web-based access to the full range of data while at the same time minimizing the overall data transport. Ideally, data access and processing take place on the server, and the user only downloads the data that is really needed.
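Returning to the marine example above, such an analysis can be phrased as a WCPS query along the following lines; the coverage and axis names are hypothetical, and the ten-year monthly mean is collapsed to a single value for brevity:

for $c in ( OCCCI_chlor_a_monthly )
return encode(
    avg( $c[ Lat(51:61), Long(-4:9), ansi("2000-01":"2010-12") ] ),
    "text/csv" )

The aggregation runs next to the data on the server, and only the small result travels over the network, which is exactly the reduction from gigabytes to roughly 100 KB described above.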
ECMWF's participation in EarthServer-2 aims at addressing exactly this challenge: to give users access to over one PB of meteorological and hydrological data while at the same time providing tools for on-demand data analysis and retrieval. The approach is to connect the rasdaman server technology with ECMWF's MARS archive, thereby enabling access to global reanalysis data via the OGC-based standards Web Coverage Service (WCS) and Web Coverage Processing Service (WCPS). This way, multidimensional gridded meteorological data can be extracted and processed in an interoperable way. The climate reanalysis service particularly addresses users outside the MetOcean domain who are more familiar with common Web and GIS standards. A WC(P)S for climate science data can be of benefit for developers or scientists who build Web applications based on large data volumes but are unable to store all the data on their local disks. Technical data users, for example, can integrate a WCS request into their processing routine and further process the data. Commercial companies can easily build customised web applications with data provided via a WCS. This approach is also strongly promoted by the EU's Copernicus Earth Observation programme, which generates climate and environmental data as part of its operational services. Commercial companies can use the data to build value-added climate services for decision-makers or clients (see Figure 10). To showcase how simple it is to build a custom web application with the help of a WC(P)S, a demo web client visualizing ECMWF data with NASA WebWorldWind has been developed (see later). Available via http://earthserver.ecmwf.int/earthserver/worldwind, it currently gives access to three datasets: ERA-Interim 2-metre air temperature and total accumulated precipitation (Dee et al. 2011) as well as GloFAS river discharge forecast data (Alfieri et al. 2013). Two-dimensional global datasets can be mapped on the globe (Figure 11). An additional plotting functionality allows the retrieval of data points in time for individual coordinates. This is suitable for ERA-Interim time-series data and hydrographs based on river-discharge forecast data (Figure 12). In summary, the WCS for Climate Data offers facilitated on-demand access to ECMWF's climate reanalysis data for researchers, technical data users, and commercial companies, within the MetOcean community and beyond. Planetary Science Data Service Planetary Science missions are largely based on Remote Sensing experiments, whose data are very much comparable with those from Earth Observation sensors. The data are thus relatively similar in terms of structure and type: from panchromatic to multispectral or hyperspectral data, as well as derived datasets such as stereo-derived topography or laser altimetry in terms of surface imaging, in addition to subsurface vertical radar sounding (Cantini et al., 2014) and atmospheric imaging and profiles. The vast majority of these data can be represented with raster models, thus they are suitable for use in array databases. Planetary raster data have never much suffered from being closed in archives during the last decades: all remote sensing imagery returned by spacecraft is available in the public domain, together with documentation (e.g. McMahon, 1996, Heather et al., 2013). Nevertheless, archived data are typically lower-level, unprocessed or partially processed images and cubes, not GIS- and science-ready products.
In addition, they typically are analyzed as single data granules or with cumbersome processing and analysis pipelines carried out by individual scientists on their own infrastructure. What is also slightly challenging for the access, integration, and analysis of Planetary Science data is the wide range of bodies in terms of surface (or atmosphere) nature, experimental characteristics, and Coordinate Reference Systems. The sheer volume of data, counted in a few GB for entire missions (such as the NASA Viking orbiters) until the 1980s, is now approaching the order of magnitude of tens to hundreds of TB. All these aspects tend to give a Big Data dignity to Planetary datasets, too. The Planetary Science Data Service (PSDS) of EarthServer, also known as PlanetServer (PlanetServer, 2016), focuses on complex multidimensional data, in particular hyperspectral imaging and topographic cubes and imagery. It is accessible via http://access.planetserver.eu. All of those data derive from public archives and are processed to the highest level with publicly available routines. In addition to Mars data, OGC WCPS is also applied to diverse datasets on the Moon, as well as Mercury. Other Solar System bodies are also going to be covered and served. Derived parameters such as hyperspectral summary products and indices can be produced through WCPS queries, as well as derived imagery color combination products. One of the objectives of PlanetServer is to translate scientific questions into standard queries that can be posed to either a single granule/coverage or an extremely large number of them, from local to global scale. The planetary, remote sensing, and geodata communities at large could benefit from PlanetServer at different levels: from accessing its data and performing analyses with its web services, for research or education purposes, to using, adapting, or iterating further the concepts and tools developed within PlanetServer. Cross-Service Federation Queries Among the features of the EarthServer platform, consisting of metadata-enhanced rasdaman (see next subsection), is the capability to federate services. Technically, this is only a generalization of the service-internal parallelization and distributed processing; externally, it achieves location transparency, allowing users to send any query to any data center, regardless of which data are accessed and possibly combined, including across data center boundaries. This capability of the services has been demonstrated live at EGU 2015, where a nontrivial query required the combination of climate data from ECMWF in the UK with Landsat 8 imagery at NCI Australia. This query was alternately sent to ECMWF and NCI; each of the receiving services forked a subquery to the service holding the data missing locally. The result was displayed in NASA WorldWind, allowing a visual assessment of the equality of the results. Figure 15 shows part of the query, a visualization of the path the query fragments take, and the final result mapped to a virtual globe. Datacube Analytics Technology EarthServer uses a combination of Big Data storage, processing, and visualization technologies. In the backend, this is the rasdaman Array Database system, which we introduce in the next section. Data/metadata integration plays a crucial role in the EarthServer data management approach and is presented next. Finally, the central visualization tool, the NASA WorldWind virtual globe, is presented.
Array Databases as Datacube Platform The common engine underlying EarthServer is the rasdaman Array Database [7]. It extends SQL with support for massive multi-dimensional arrays, together with declarative array operators which are heavily optimized and parallelized [17] on the server side. A separate layer adds geo semantics, such as knowledge about regular and irregular grids and coordinates, by implementing the OGC Web service interfaces. For WCS and WCPS, rasdaman acts as OGC reference implementation. On storage, arrays get partitioned ("tiled") into sub-arrays which can be stored in a database or directly in files. Additionally, rasdaman can access pre-existing archives by only registering files, without copying them. Figure 1 shows the overall architecture of rasdaman. Array Storage Arrays are maintained in either a conventional database (such as PostgreSQL) or rasdaman's own persistent store, directly in any kind of file system. Additionally, rasdaman can tap into "external" files not under its control. Since rasdaman 9.3, the internal tiling of archive files (as available with TIFF and NetCDF, for example) can be exploited for fine-grained reading. Automated distribution of tiles based on various criteria, optionally including redundancy, is under work. A core concept of array storage in rasdaman is partitioning, or tiling. Arrays are split into sub-arrays called tiles to achieve fast access. The tiling policy is a tuning parameter which allows adjusting partitions to any given query workload, measured or anticipated. As this mechanism turned out to be very powerful for users, its generality has been cast into a few strategies available to data designers (Figure 17). Array Processing The rasdaman server ("rasserver") is the central workhorse. It can access data from various sources for multi-parallel, distributed processing. The rasdaman engine has been crafted from scratch, optimizing every single component for array processing. A series of highly effective optimizations is applied to queries, including: query rewriting, to find more efficient expressions of the same query (currently 150 rewriting rules are implemented); query result caching, which keeps complete or partial query results in (shared) memory for reuse by subsequent queries, exploiting in particular geographic or temporal overlap; and array joins with optimized tile loading, so as to minimize multiple loads when combining two arrays [10]. The latter is not only effective in a local situation, but also when tiles have to be transported between compute nodes or even data centers in case of a distributed join. After query analysis and optimization, the system fetches only the tiles required for answering the given query. Subsequent processing is highly parallelized. Locally, the server assigns tiles to different CPUs and threads. In a cluster, queries are split and parallelized across the nodes. The same mechanism is also used for distributing processing across data centers, where data transport becomes a particular issue. To maximize efficiency, rasdaman currently optimizes splitting along two criteria (Figure 18): send queries to where the data sit ("shipping code to data"), and generate subqueries that process as much as possible locally, minimizing the amount of data to be transported between nodes. This way, single queries have been successfully split across more than a thousand Amazon cloud nodes [17].
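The effect of tiling can be illustrated with a small Python sketch that computes which tiles of a partitioned 2-D array a subsetting query actually touches; the tile size and array shape are arbitrary illustration values, not rasdaman internals:

from itertools import product

def tiles_touched(box, tile_shape):
    # box: ((lo0, hi0), (lo1, hi1)) inclusive index ranges per axis.
    # Returns the integer tile coordinates intersecting the query box.
    ranges = [range(lo // t, hi // t + 1) for (lo, hi), t in zip(box, tile_shape)]
    return list(product(*ranges))

# A 10000x10000 array tiled into 1000x1000 tiles: a small subset query
# touches only 4 of the 100 tiles, so only those need to be read.
print(tiles_touched(((1500, 2200), (800, 1400)), (1000, 1000)))
# -> [(1, 0), (1, 1), (2, 0), (2, 1)]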
Figure 19 shows an experiment done on the rasdaman distributed query processing visualization workbench, where nine Amazon nodes process a query on 1 TB of data in 212 ms. Tool integration Even though the WMS, WCS, and WCPS protocols are open, adopted standards, they are not necessarily appropriate for end users: from WMS we are used to Web clients like OpenLayers and Leaflet which hide the request syntax, and the same holds for WCS requests and, although high-level and abstract, the WCPS language. In the end, all these interfaces are most useful as client/server communication protocols where end users are shielded from the syntax through visual point-and-click interfaces (like OpenLayers and NASA WorldWind) or, alternatively, through their own, well-known tools (like QGIS and Python). To this end, rasdaman already supports major GIS Web and programmatic clients, and more are under development. Among these are MapServer, GDAL, EOxServer, OpenLayers, Leaflet, QGIS, and NASA WorldWind, as well as C++ and Java interfaces; Python support is at an advanced development stage. The Role and Handling of Metadata Metadata can be of utmost importance for the utilization of datasets as, apart from textual descriptions and provenance traces, they may provide essential information on how a dataset may be used (e.g., characteristics of equipment/process, reference systems, error margins). When data management crosses the boundaries of systems, institutions, and scientific disciplines, metadata management becomes a complex process on its own. The Earth Sciences landscape is an ample example, where datasets, which are substantially "many", may be considered from a variety of standpoints and be produced/consumed by heterogeneous processes in various disciplines with different needs and concepts. Focusing on coverages hosted behind WCS and WCPS services, where metadata heterogeneity is evident due to the liberal approach of the relevant specifications, the EarthServer 2 metadata management system addresses the challenge by being metadata-schema agnostic yet maintaining the ability to host and process composite metadata models, while meeting a number of supplementary requirements such as fault tolerance, efficiency, scalability, and the capability of hosting billions of datasets. The system supports two modes of operation with quite distinct characteristics: (a) in-situ operation (metadata are not relocated, and services are offered on top of the original store's metadata retrieval facilities) and (b) federated operation (metadata are gathered in a distributed store over which the full range of system services may be provided). The architecture (cf. Figure 20) consists of loosely coupled distributed services that interoperate through standards, WCS and WCPS being the prevailing ones. XPath is utilized for metadata retrieval/filtering, though over NoSQL technologies in order to achieve the desired scalability, performance, and functional characteristics. Full-text queries are also supported. In federated processing mode, services are invoked using the WCPS or WCS-T standards. Other supported protocols include OpenSearch, OAI-PMH, and CSW. Access to the combined processing and retrieval engine is provided via xWCPS 2.0, a language that leverages the agile earth-data analytics layer with effective metadata retrieval and processing facilities, delivering an expressive querying tool that can interweave data and metadata in composite operations.
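A minimal sketch of what such a combined data/metadata query can look like in xWCPS follows; the syntax is approximated from the description in this section, and the coverage source and metadata paths are hypothetical:

for $c in ( * )
where $c/metadata//field[@name = "elevation"]
return
    <result>
        { $c/metadata//domainSet/@srsName }
        <attr>{ avg($c) }</attr>
    </result>

Here the where clause filters coverages by their metadata, while the return clause mixes an XPath extraction with a WCPS-evaluated aggregate.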
Building on xWCPS 1.0 (from EarthServer 1), xWCPS 2.0 delivers enhanced FLWOR syntax based on the XPath 2.0 specification and utilizes the WCS-T protocol for pipeline implementation, delivering improved federated operation. In an xWCPS 2.0 query such as the sketch above, coverages are located via their metadata (a <field> named elevation in the where clause), and the results consist of XML elements (<result> in the return clause) containing the outcome of an XPath expression and a WCPS-evaluated attribute (attr). Virtual Globes as Datacube Interfaces Virtual globes help users experience their data visually, with the various aspects displayed in their native context. This allows data to be more easily understood and their impacts better appreciated. NASA is a pioneer in virtual globe technology, substantially preceding tools such as Google Earth. Our primary mission has always been to support the operational needs of the geospatial community through a versatile open-source toolkit, versus a closed proprietary product. A particular feature of WorldWind is its modular and extensible architecture. WorldWind, as an API-centric Software Development Kit (SDK), can be plugged into any application that has spatial data needing to be experienced in the native context of a virtual globe. In EarthServer, the virtual globe paradigm is coupled with the flexible query mechanism of databases. Users can query rasdaman flexibly and have the results mapped to the globe. Rasdaman applications can add any 2-D, 3-D, or 4-D information to the WorldWind geobrowser for any dynamically generated query result. This enables direct interaction with massive databases, as the excerpt of interest is prepared in the server while WorldWind accomplishes sophisticated interactive visualization in the native context of the Earth as observed from space. Beyond Earth, WorldWind is also used for Mars and the Moon by PlanetServer. In the earlier sections, WorldWind has been heavily used as a visual frontend to the various thematic databases of EarthServer. Related Work A large, growing number of both open-source and proprietary implementations supports coverages and WCS (Fig. 3). Specifically, the most recent versions (OGC Coverage Implementation Schema 1.0 and WCS 2.0) are known to be implemented by open-source rasdaman [44], GDAL, QGIS, OpenLayers, Leaflet, OPeNDAP, MapServer, GeoServer, GMU, NASA WorldWind, and EOxServer as well as proprietary Pyxis, ERDAS, and ArcGIS. The most comprehensive tool is rasdaman, also the OGC WCS Core Reference Implementation, which implements WCS Core and all extensions, including WCPS. This large adoption basis of OGC's coverage standards promotes interoperability of EarthServer with other services, supporting the GEOSS "system of systems" approach [14]. Notably, rasdaman is part of the GCI (GEOSS Common Infrastructure) [21]. SciDB is an Array Database prototype under development [38] with no specific geo data support like OGC WCS interfaces. SciQL is a concept study adding arrays to a column store [49]. A performance comparison between rasdaman, SciQL, and SciDB shows that rasdaman excels by one, often several, orders of magnitude in performance and also achieves better storage efficiency [29]. To the best of our knowledge, only rasdaman has publicly available services deployed [37]. The scalability potential of the WCS suite is proven by rasdaman cloud federations where single queries have been split across more than 1,000 cloud nodes [17].
At the time of this writing, rasdaman is being extended with support for coverages 1.1. The Sensor Observation Service (SOS) supports delivery of sensor data [12], which can be imagery; however, functionality is rather limited, and performance is reported as not entirely satisfactory. OGC WMTS exposes tiling to clients for maximizing performance [26]; on the downside, queries are fixed to the retrieval of such tiles, i.e., there is no free subsetting and no processing. OGC WPS provides an API for arbitrary processing functionality; however, it is not interoperable per se, as stated already in the standard [47]. In ISO, an extension to SQL which adds n-D arrays in a domain-independent manner is at an advanced stage [25]. SQL/MDA (for "Multi-Dimensional Arrays") has been initiated by the rasdaman team, which has also submitted the specification; see [30] for a condensed overview. Adoption is anticipated for summer 2017.

Conclusion and Outlook

Datacubes are a convenient model for presenting users with a simple, consolidated view on the massive amount of data files gathered: "a cube tells more than a million images". Such a datacube may have spatial and temporal dimensions (such as a satellite image time series) and may unite an unlimited number of individual images. Independently of whatever efficient data structuring a server network may perform internally, users will always see just a few datacubes they can slice and dice. Following the broadening of minds through the NoSQL wave, database research has responded to the Big Data deluge with new data models and scalability concepts. In the field of gridded data, Array Databases provide a disruptive innovation for flexible, scalable, data-centric services on datacubes. EarthServer exploits this by establishing a federation of services of 3-D satellite image time series and 4-D climatological data where each node can answer queries on the whole network, in a federation implementing a "datacube mix & match". While in Phase 1 of EarthServer the 100 TB barrier was transcended, in Phase 2 it is attacking the Petabyte frontier. Aside from using the OGC "Big Geo Data" standards for its service interfaces, EarthServer keeps on shaping datacube standards in OGC, ISO, and INSPIRE. Current work involves implementation of the OGC coverage model version 1.1, supporting data centers in establishing rasdaman-based services, and further enhancing the data and processing parallelism capabilities of rasdaman.
7,852.6
2018-01-01T00:00:00.000
[ "Computer Science" ]
Measurement of |Vcb| using B0 to D*l nu decays

The magnitude of the Cabibbo-Kobayashi-Maskawa matrix element Vcb has been measured using B0 to D*l nu decays recorded on the Z0 peak using the OPAL detector at LEP. The D* to D0 pi+ decays were reconstructed both in the particular decay modes D0 to K- pi+ and D0 to K- pi+ pi0 and via an inclusive technique. The product of |Vcb| and the decay form factor of the B0 to D*l nu transition at zero recoil, F(1), was measured to be F(1)|Vcb| = (37.1 +- 1.0 +- 2.0) x 10^-3, where the uncertainties are statistical and systematic respectively. By using Heavy Quark Effective Theory calculations for F(1), a value of |Vcb| = (40.7 +- 1.1 +- 2.2 +- 1.6) x 10^-3 was obtained, where the third error is due to theoretical uncertainties in the value of F(1). The branching ratio Br(B0 to D*l nu) was also measured to be (5.26 +- 0.20 +- 0.46)%.

Introduction

Within the framework of the Standard Model of electroweak interactions, the elements of the Cabibbo-Kobayashi-Maskawa mixing matrix are free parameters, constrained only by the requirement that the matrix be unitary. The values of the matrix elements can only be determined by experiment. Heavy Quark Effective Theory (HQET) provides a means to extract the magnitude of the element Vcb from particular semileptonic b decays, with relatively small theoretical uncertainties [1].

In this paper, the value of |Vcb| is extracted by studying the decay rate for the process B0 → D*+ℓ−ν as a function of the recoil kinematics of the D*+ meson [2][3][4]. The decay rate is parameterised as a function of the variable ω, defined as the scalar product of the four-velocities of the D*+ and B0 mesons. This is related to the square of the four-momentum transfer from the B0 to the ℓ−ν system, q², by

ω = (m²_B0 + m²_D* − q²) / (2 m_B0 m_D*),   (1)

and ranges from 1, when the D*+ is produced at rest in the B0 rest frame, to about 1.50. Using HQET, the differential partial width for this decay is given by

dΓ/dω = K(ω) F²(ω) |Vcb|²,   (2)

where K(ω) is a known phase space term and F(ω) is the hadronic form factor for this decay [3]. Although the shape of this form factor is not known, its magnitude at zero recoil, ω = 1, can be estimated using HQET. In the heavy quark limit (m_b → ∞), F(ω) coincides with the Isgur-Wise function [4], which is normalised to unity at the point of zero recoil. Corrections to F(1) have been calculated to take into account the effects of finite quark masses and QCD corrections, yielding the value and theoretical uncertainty F(1) = 0.913 ± 0.042 [5]. Since the phase space factor K(ω) tends to zero as ω → 1, the decay rate vanishes at ω = 1 and the accuracy of the extrapolation relies on achieving a reasonably constant reconstruction efficiency in the region near ω = 1. Previous measurements of |Vcb| have been made using B mesons produced on the Υ(4S) resonance [6] and in Z0 decays [7][8][9]. These analyses used a linear or constrained quadratic expansion of F(ω) around ω = 1. An improved theoretical analysis, based on dispersive bounds and including higher order corrections, has since become available [10]. This results in a parameterisation of F(ω) in terms of F(1) and a single unknown parameter ρ², constrained to lie in the range −0.14 < ρ² < 1.54, with ρ² corresponding to the slope of F(ω) at zero recoil.
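As a quick numerical check of the quoted recoil range (my own illustration, not part of the original analysis), the kinematic endpoint ω_max follows from equation 1 at q² = 0; with standard B0 and D*+ masses it indeed comes out near 1.50:

```python
# Sketch: evaluate the recoil endpoint of equation 1 at q^2 = 0.
# Masses in GeV are approximate PDG values, not taken from the paper.
m_B0, m_Dstar = 5.2796, 2.0103

def omega(q2: float) -> float:
    """Recoil variable of equation 1 for a given four-momentum transfer squared."""
    return (m_B0**2 + m_Dstar**2 - q2) / (2.0 * m_B0 * m_Dstar)

print(f"omega_max = {omega(0.0):.3f}")  # ~1.503, i.e. "about 1.50"
```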
The previous OPAL measurement [9] used the decay chain D*+ → D0π+, with the D0 meson being reconstructed in the exclusive decay channels D0 → K−π+ and D0 → K−π+π0. In this paper, a new analysis is described in which only the π+ from the D*+ decay is identified, and no attempt is made to reconstruct the D0 decay exclusively. This technique, first employed by DELPHI [8,11], gives a much larger sample of B0 → D*+ℓ−ν decays than the previous measurement, but also a larger background, requiring a rather more complex analysis. The measurement of [9] is also updated to use the new parameterisation of F(ω) [10], together with improved background models and physics inputs. In both cases, the initial number of B0 mesons is determined from other measurements of B0 production in Z0 decays.

The new reconstruction technique is described in Section 2, the determination of ω for each event in Section 3, the fit to extract F(1)|Vcb| and ρ² in Section 4, and the systematic errors in Section 5. The updated exclusive measurement is discussed in Section 6, and the measurements are combined and conclusions drawn in Section 7.

2 Inclusive reconstruction of B0 → D*+ℓ−ν events

The OPAL detector is well described elsewhere [12]. The data sample used in this analysis consists of about 4 million hadronic Z0 decays collected during the period 1991-1995, at centre-of-mass energies in the vicinity of the Z0 resonance. Corresponding simulated event samples were generated using JETSET 7.4 [13], as described in [14].

Hadronic Z0 decays were selected using standard criteria [14]. To ensure the event was well contained within the acceptance of the detector, the thrust axis direction was required to satisfy |cos θ_T| < 0.9. Charged tracks and electromagnetic calorimeter clusters with no associated tracks were then combined into jets using a cone algorithm [15], with a cone half-angle of 0.65 rad and a minimum jet energy of 5 GeV. The transverse momentum p_t of each track was defined relative to the axis of the jet containing it, where the jet axis was calculated including the momentum of the track. A total of 3 117 544 events passed the event selection.

The reconstruction of B0 → D*+ℓ−ν events was then performed by combining high p and p_t lepton (electron or muon) candidates with oppositely charged pions from the D*+ → D0π+ decay. Electrons were identified and photon conversions rejected using neural network algorithms [14], and muons were identified as in [16]. Both electrons and muons were required to have momenta p > 2 GeV, transverse momenta with respect to the jet axis p_t > 0.7 GeV, and to lie in the polar angle region |cos θ| < 0.9.

The event sample was further enhanced in semileptonic b decays by requiring a separated secondary vertex with decay length significance L/σ_L > 2 in any jet of the event. The vertex reconstruction algorithm and decay length significance calculation are described fully in [14]. Together with the lepton selection, these requirements result in a sample which is about 90% pure in bb events.
An attempt was made to estimate the D0 direction in each jet containing a lepton candidate. Each track (apart from the lepton) and calorimeter cluster in the jet was assigned a weight corresponding to the estimated probability that it came from the D0 decay. The track weight was calculated from an artificial neural network, trained to separate tracks from b decays and fragmentation tracks in b jets [14]. The network inputs are the track momentum, the transverse momentum with respect to the jet axis, and the impact parameter significances with respect to the reconstructed primary and secondary vertices (if existing). The cluster weights were calculated using their energies and angles with respect to the jet axis alone, the energies first being corrected by subtracting the energy of any charged tracks associated with the cluster [17].

Beginning with the track or cluster with the largest weight, tracks and clusters were then grouped together until the invariant mass of the group (assigning tracks the pion mass and clusters zero mass) exceeded the charm hadron mass, taken to be 1.8 GeV; a sketch of this grouping procedure is given below. If the final invariant mass exceeded 2.3 GeV, the jet was rejected, since Monte Carlo studies showed such high-mass D0 candidates were primarily background. For surviving jets, the momentum p_D0 of the group was used as an estimate of the D0 direction, giving RMS angular resolutions of about 45 mrad in φ and θ. The D0 energy was then calculated from the group momentum and the nominal D0 mass as E_D0 = √(|p_D0|² + m²_D0).

The selection of pions from D*+ decays relies on the small mass difference of only 145 MeV [18] between the D*+ and D0, which means the pions have very little transverse momentum with respect to the D0 direction. Each track in the jet (other than the lepton) was considered as a slow pion candidate, provided it satisfied 0.5 < p < 2.5 GeV and had a transverse momentum with respect to the D0 direction of less than 0.3 GeV. If the pion under consideration was included in the reconstructed D0, it was removed and the D0 momentum and energy recalculated. The final selection was made using the reconstructed mass difference Δm between the D*+ and D0 mesons, calculated as

Δm = √(E²_D* − |p_D*|²) − m_D0,

where the D*+ energy is given by E_D* = E_D0 + E_π and its momentum by p_D* = p_D0 + p_π.

A new secondary vertex was then iteratively reconstructed around an initial seed vertex formed by the intersection of the lepton and slow pion tracks. Every other track in the jet was added in turn to the seed vertex, and the vertex refitted. The track resulting in the lowest vertex fit χ² was retained in the seed vertex for the next iteration. The procedure was repeated until no more tracks could be added without reducing the vertex fit χ² probability to less than 1%. The decay length L′ between the primary vertex and this secondary vertex, and the associated error σ_L′, were calculated as in [14]. The pion candidate was accepted if the decay length satisfied −0.1 cm < L′ < 2 cm and the decay length significance L′/σ_L′ passed a minimum requirement.

The resulting distributions of Δm for opposite and same sign lepton-pion combinations are shown in Figure 1(a) and (b). The predictions of the Monte Carlo simulation are also shown, broken down into contributions from signal B0 → D*+ℓ−ν events, 'resonant' background containing real leptons and slow pions from D*+ decay, and combinatorial background, made up of events with fake slow pions, fake leptons, or both.
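The greedy D0 grouping described above can be sketched as follows; this is my own illustrative reimplementation with hypothetical candidate tuples rather than OPAL data structures, using the 1.8 GeV and 2.3 GeV thresholds from the text.

```python
import math

M_CHARM, M_MAX, M_PI = 1.8, 2.3, 0.1396  # GeV; thresholds from the text, pion mass assumed

def group_d0(candidates):
    """Greedily accumulate (weight, px, py, pz, mass) candidates, highest weight first,
    until the group's invariant mass exceeds the charm hadron mass (1.8 GeV).
    Tracks carry the pion mass, clusters zero mass. Returns the group four-momentum
    (E, px, py, pz), or None if the jet is rejected (mass overshoots 2.3 GeV)."""
    E = px = py = pz = 0.0
    for w, x, y, z, m in sorted(candidates, key=lambda c: c[0], reverse=True):
        E += math.sqrt(x * x + y * y + z * z + m * m)
        px, py, pz = px + x, py + y, pz + z
        mass = math.sqrt(max(E * E - (px * px + py * py + pz * pz), 0.0))
        if mass > M_CHARM:
            return None if mass > M_MAX else (E, px, py, pz)
    return None  # group never reached the charm mass
```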
In the Monte Carlo simulation, about 45% of opposite sign events with Δm < 0.17 GeV are signal B0 → D*+ℓ−ν events, 14% are resonant background, and 41% are combinatorial background. The resonant background is made up mainly of B− → D*+π−ℓ−ν, B0 → D*+π0ℓ−ν, and Bs → D*+K0ℓ−ν decays. These are expected to be dominated by b semileptonic decays involving orbitally excited charm mesons (generically referred to as D**), e.g. B− → D**0ℓ−ν followed by D**0 → D*+π−. These decays will be denoted collectively by B → D*+h ℓ−ν. Small contributions are also expected from b → D*+τνX decays with the τ decaying leptonically, and from b → D*+D−s X with the D−s decaying semileptonically (each about 1% of opposite sign events). For same sign events with Δm < 0.17 GeV, there is a small resonant contribution of about 6% from events with a real D*+ → D0π+ where the D0 decays semileptonically; the rest is combinatorial background.

The most important background, from B → D*+h ℓ−ν decays, comes from both charged B+ and neutral B0 and Bs decays, whereas the signal comes only from B0 decays. Therefore the charge Q_vtx of the reconstructed secondary vertex containing the lepton and slow pion, and its estimated error σ_Qvtx, were calculated using

Q_vtx = Σ_i w_i q_i,   σ²_Qvtx = Σ_i w_i (1 − w_i) q_i²,

where w_i is the weight for track i of charge q_i to come from the secondary vertex, and the sums are taken over all tracks in the jet [19,20]. The weights were calculated in a similar way to those used for the D0 direction reconstruction, using a neural network with the track momentum, transverse momentum, and impact parameter significances with respect to the reconstructed primary and secondary vertices as inputs. The weights for the lepton and slow pion candidate tracks were set to one. The vertex charge distributions for opposite sign events with Δm < 0.17 GeV are shown in Figure 1(c) and (d).

The reconstructed vertex charge and error were used to divide the data into different classes enhanced or depleted in B+ decays, thus reducing the effect of this background and increasing the statistical sensitivity. Three classes c were used: class 1, where the charge is measured poorly (σ_Qvtx > 0.9); class 2, where the charge is measured well and is compatible with a neutral vertex (σ_Qvtx < 0.9, |Q_vtx| < 0.5); and class 3, where the charge is measured well and is compatible with a charged vertex (σ_Qvtx < 0.9, |Q_vtx| > 0.5). A minimal sketch of this classification is given below.

3 Reconstruction of ω

The recoil variable ω was estimated in each event using the reconstructed four-momentum transfer to the ℓν system,

q² = (E_B0 − E_D*)² − |p_B0 − p_D*|²,

together with equation 1. The B0 and D*+ energies E_B0 and E_D* were estimated directly, whilst the momentum vectors p_B0 and p_D* were estimated using the energies together with the reconstructed polar and azimuthal angles, as described in more detail below. Since the slow pion has very little momentum in the rest frame of the decaying D*+, the momentum (and hence energy) of the D*+ was estimated by scaling the reconstructed slow pion momentum by m_D*+/m_π, as for the D0 → K−π+π0 channel in [9]. A small correction (never exceeding 12%) was applied as a function of cos θ*, the angle of the slow pion in the rest frame of the D0. This procedure gave a fractional D*+ energy resolution of 15%. The polar and azimuthal angles of the D*+ were reconstructed using weighted averages of the slow pion and D0 directions, giving resolutions of about 22 mrad on both φ and θ.
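Returning to the vertex-charge classification above, a minimal sketch follows (my own illustration with made-up weights; the binomial-style error formula matches the reconstruction assumed above, while the paper's exact definition is in its refs. [19,20]):

```python
def vertex_charge(tracks):
    """tracks: list of (weight, charge); the lepton and slow pion enter with weight 1.
    Returns (Q_vtx, sigma_Qvtx) using a binomial-style variance sum w(1-w)q^2."""
    q = sum(w * c for w, c in tracks)
    var = sum(w * (1.0 - w) * c * c for w, c in tracks)
    return q, var ** 0.5

def charge_class(q, sigma):
    """Classes as defined in the text: 1 = poorly measured, 2 = neutral, 3 = charged."""
    if sigma > 0.9:
        return 1
    return 2 if abs(q) < 0.5 else 3

Q, s = vertex_charge([(1.0, -1), (1.0, +1), (0.7, -1), (0.2, +1)])
print(charge_class(Q, s))
```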
The energy of the B0 was estimated using a technique similar to that described in [21], exploiting overall energy and momentum conservation in the event to calculate the energy of the unreconstructed neutrino. First, the energy E_bjet of the jet containing the B0 was inferred from the measured particles in the rest of the event, by treating the event as a two-body decay of a Z0 into a b jet (whose mass was approximated by the B0 mass) and another object making up the rest of the event. Then, the total fragmentation energy E_frag in the b jet was estimated from the measured visible energy in the b jet and the identified B0 decay products as E_frag = E_vis − E_D* − E_ℓ. Finally, the B0 energy was calculated as E_B0 = E_bjet − E_frag, giving an RMS resolution of 4.4 GeV.

The b direction was estimated using a combination of two techniques. In the first, the momentum vector of the B0 was reconstructed from its decay products, the neutrino energy being estimated from the reverse of the event visible momentum vector: p_ν = −p_vis. The visible momentum was calculated using all the reconstructed tracks and clusters in the event, with a correction for charged particles measured both in the tracking detectors and the calorimeters [17]. This direction estimate is strongly degraded if a second neutrino is present (e.g. from another semileptonic b decay in the opposite hemisphere of the event), and its error was parameterised as a function of the visible energy in the opposite hemisphere. The resulting resolution is typically between 40 and 100 mrad for both φ and θ.

The second method of estimating the B0 direction used the vertex flight direction, i.e. the direction of the vector between the primary vertex and the secondary vertex reconstructed around the lepton and slow pion as described in Section 2. The accuracy of this estimate is strongly dependent on the B0 decay length, and it was used only if the decay length significance L′/σ_L′ exceeded 3. After this cut, the angular resolution varies between about 15 and 100 mrad, and is worse for θ in the 1991 and 1992 data, where accurate z information from the silicon microvertex detector was not available. The B0 → D*+ℓ−ν candidate was rejected if the two reconstruction methods gave θ or φ angles disagreeing by more than three standard deviations, which happened in 7% of Monte Carlo signal events. Finally, the two B0 direction estimates were combined according to their estimated uncertainties, giving average resolutions of 35 mrad on φ and 43 mrad on θ, including events where only the first method was used.

The estimate of q² derived from the B0 and D*+ energies and angles was improved by applying the constraint that the mass μ of the neutrino produced in the B0 → D*+ℓ−ν decay should be zero. The neutrino mass is given in terms of the reconstructed quantities by

μ² = (E_B0 − E_D* − E_ℓ)² − |p_B0 − p_D* − p_ℓ|².

The constraint was implemented by calculating q² using a kinematic fit, incorporating the measured values and estimated uncertainties of the B0 and D*+ energies and angles (the B0 angular uncertainties varying event by event). This procedure improved the average q² resolution from 2.78 GeV² to 2.57 GeV². In the Monte Carlo simulation, 11% of signal events were reconstructed with q² < 0, corresponding to an unphysical value of ω larger than ω_max ≈ 1.5, and were rejected.
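The zero-neutrino-mass constraint can be illustrated with a small penalized least-squares fit. This is a sketch under simplifying assumptions, not the paper's fitter: the function name, starting values, and uncertainties are illustrative, only the two energies are varied (the full fit also varies the B0 angles), and momenta are taken massless.

```python
import numpy as np
from scipy.optimize import minimize

def kinematic_fit_q2(E_B, E_Dstar, E_lep, n_B, n_Dstar, p_lep, sigma_EB, sigma_ED):
    """Adjust E_B and E_Dstar within their uncertainties so that the neutrino mass
    mu^2 = (E_B - E_Dstar - E_lep)^2 - |p_B - p_Dstar - p_lep|^2 vanishes, then
    return q^2 = (E_B - E_Dstar)^2 - |p_B - p_Dstar|^2. Directions n_B, n_Dstar
    are unit vectors; |p| ~ E is assumed (masses neglected for the momenta)."""
    def chi2(x):
        eb, ed = x
        pB, pD = eb * n_B, ed * n_Dstar
        mu2 = (eb - ed - E_lep) ** 2 - np.sum((pB - pD - p_lep) ** 2)
        return ((eb - E_B) / sigma_EB) ** 2 + ((ed - E_Dstar) / sigma_ED) ** 2 \
               + (mu2 / 0.1) ** 2  # stiff penalty enforcing mu^2 ~ 0
    eb, ed = minimize(chi2, x0=[E_B, E_Dstar]).x
    pB, pD = eb * n_B, ed * n_Dstar
    return (eb - ed) ** 2 - np.sum((pB - pD) ** 2)
```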
The resulting distributions of reconstructed ω for various ranges of true ω (denoted ω′) in simulated B0 → D*+ℓ−ν decays are shown in Figure 2(a-e). The average RMS ω resolution is about 0.12, but there are significant non-Gaussian tails. The resolution was parameterised (separately for the 1991-2 and 1993-5 data) as a continuous function R(ω, ω′) giving the expected distribution of reconstructed ω for each true value ω′. The resolution function was implemented as the sum of two asymmetric Gaussians (i.e. with different widths on either side of the peak) whose parameters were allowed to vary as a function of ω′; a minimal sketch of such a function is given below. The convolution of this resolution function with the Monte Carlo ω′ distribution is also shown in Figure 2(a-e), demonstrating that the resolution function models the ω distributions well.

The efficiency to reconstruct B0 → D*+ℓ−ν decays, ε(ω′), is shown in Figure 2(f), together with a second-order polynomial parameterisation. The efficiency varies with ω′, but is reasonably flat in the critical region near ω′ = 1 where the extrapolation to measure F(1)|Vcb| is carried out.

4 Fit and results

The values of F(1)|Vcb| and ρ² were extracted using an extended maximum likelihood fit to the reconstructed mass difference Δm, recoil ω, and vertex charge class c of each event. Both opposite and same sign events with Δm < 0.3 GeV and ω < ω_max were used in the fit, the high Δm and same sign events serving to constrain the combinatorial background normalisation and shapes in the opposite sign, low Δm region populated by the B0 → D*+ℓ−ν decays. Using the Δm value of each event in the fit, rather than just dividing the data into a low Δm 'signal' and a high Δm 'sideband' mass region, increases the statistical sensitivity, as the signal purity varies considerably within the low Δm region.

The logarithm of the overall likelihood was given by

ln L = Σ_{i=1..M_a} ln L_a_i + Σ_{j=1..M_b} ln L_b_j − N_a − N_b,   (3)

where the sums of individual event log-likelihoods ln L_a_i and ln L_b_j are taken over all the observed M_a opposite sign and M_b same sign events in the data sample, and N_a and N_b are the corresponding expected numbers of events.

The likelihood for each opposite sign event was given in terms of the different types or sources of event by

L_a_i = Σ_{s=1..4} N_a_s f_a_{s,c} M_{s,c}(Δm_i) P_s(ω_i),   (4)

where N_a_s is the number of expected events for source s, f_a_{s,c} is the fraction of events in source s appearing in vertex charge class c, M_{s,c}(Δm) is the mass difference distribution for source s in class c, and P_s(ω) is the recoil distribution for source s. For each source, the vertex charge fractions f_a_{s,c} sum to one, and the mass difference M_{s,c}(Δm_i) and recoil P_s(ω) distributions are normalised to one. The total number of expected events is given by the sum of the individual contributions: N_a = Σ_{s=1..4} N_a_s. There are four opposite sign sources: (1) signal B0 → D*+ℓ−ν events; (2) B → D*+h ℓ−ν events, where the D*+ is produced via an intermediate resonance (D**); (3) other opposite sign background involving a genuine lepton and a slow pion from D*+ decay; and (4) combinatorial background. The sum of sources 2 and 3 is shown as 'resonant background' in Figure 1. A similar expression to equation 4 was used for L_b_j, the event likelihood for same sign events. In this case, only sources 3 and 4 contribute.
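A minimal sketch of a "sum of two asymmetric Gaussians" resolution shape of the kind described above follows; all parameter values are placeholders, and in the analysis itself every parameter varies with ω′.

```python
import numpy as np

def asym_gauss(x, mu, s_lo, s_hi):
    """Gaussian with different widths below and above the peak (unnormalised)."""
    s = np.where(x < mu, s_lo, s_hi)
    return np.exp(-0.5 * ((x - mu) / s) ** 2)

def resolution(omega, omega_true, f=0.7):
    """R(omega, omega') as a core plus a wider tail component; placeholder parameters."""
    core = asym_gauss(omega, omega_true, 0.08, 0.10)
    tail = asym_gauss(omega, omega_true + 0.05, 0.20, 0.30)
    r = f * core + (1.0 - f) * tail
    return r / np.trapz(r, omega)  # normalise over the sampled omega grid

omega = np.linspace(1.0, 1.5, 501)
print(resolution(omega, 1.2).max())
```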
The mass difference distributions M_{s,c}(Δm) for sources 1-3 were represented by analytic functions, whose parameters were determined using large numbers of simulated events, as were the recoil distributions P_s(ω) for sources 2 and 3. The fractions in each vertex charge class for sources 1-3 were also taken from simulation.

Table 1: Input quantities used in the fits for F(1)|Vcb| and ρ²; values marked 'see text' are derived using methods explained in Section 4. The entries recoverable here are: Br(b → D*+h ℓν) = (0.76 ± 0.16)% [22], see text; Br(b → D*+τ−νX) = (0.65 ± 0.13)% [18], see text; Br(B0 → …) = (13.9 ± 0.9)% [18].

For the signal (source 1), the product of the expected number of events N_a_1 and recoil distribution P_1(ω) was given by convolving the differential partial decay width (equation 2) with the signal resolution function and reconstruction efficiency:

N_a_1 P_1(ω) = 4 N_Z R_b f_B0 τ_B0 ∫ dω′ (dΓ/dω′) ε(ω′) R(ω, ω′),

where N_Z is the number of hadronic Z0 decays passing the event selection, R_b ≡ Γ_bb/Γ_had is the fraction of hadronic Z0 decays to bb, f_B0 the fraction of b quarks hadronising to a B0, and τ_B0 the B0 lifetime. The factor of four accounts for the two b hadrons produced per Z0 → bb event and the two identified lepton species (electrons and muons). The form factor F(ω′) is given in [10] in terms of the normalisation F(1) and the slope parameter ρ². The efficiency function ε(ω′) and resolution function R(ω, ω′) were described in Section 3, and the known phase space factor K(ω′) is given in [9]. The assumed values of the numerical quantities are given in Table 1.

Since the data are divided into different vertex charge classes enhanced and depleted in B+ decays, the fit gives some information on the amount of B → D*+h ℓ−ν background. The predicted level of this background in the fit was therefore allowed to vary under a Gaussian constraint corresponding to a branching ratio of Br(b → D*+h ℓν) = (0.76 ± 0.16)%. The latter has been calculated from the measured branching ratio Br(b → D*+π−ℓνX) = (0.473 ± 0.095)% [22], assuming isospin and SU(3) flavour symmetry to obtain the corresponding b → D*+π0ℓν and b → D*+K0ℓν branching ratios. A scaling factor of 0.75 ± 0.25 was included for the last branching ratio, to account for possible SU(3) violation effects reducing the branching ratio for D**+s → D*+K0 compared to the expectation of (3/2) Br(D**+ → D*+π0). The P_2(ω) distribution for these events was taken from simulation, using the calculation of Leibovich et al. [23] to predict their recoil spectrum.

The numbers N_{a,b}_3 and the P_3(ω) distributions for the small background contributions covered by source 3 (both opposite and same sign) were taken from Monte Carlo simulation, with branching ratios adjusted to the values given in Table 1. The branching ratio for b → D*+τ−ν was derived using the inclusive branching ratio Br(b → τX) = (2.6 ± 0.4)% [18], assuming a D*+ is produced a fraction 0.25 ± 0.03 of the time, as seen for the corresponding decays b → D*+ℓX and b → ℓX (ℓ = e, µ) [18]. The rate of b → D*+D−s X was assumed to be dominated by the two-body decay B0 → D*+D(*)−s. The rate of real D*+ decays in the same sign background depends on the production fractions of D*+ in bb and cc events, which were taken from [24].
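The signal recoil density of the equation above is a one-dimensional convolution and is straightforward to evaluate numerically; the sketch below uses placeholder K, F, ε, and R rather than the paper's parameterisations, with overall constants absorbed into the normalisation.

```python
import numpy as np

w = np.linspace(1.0, 1.5, 251)          # reconstructed omega grid
wp = np.linspace(1.0, 1.5, 251)         # true omega' grid

K = (wp**2 - 1.0) ** 1.5                # placeholder phase-space factor, vanishing at omega'=1
F = 1.0 - 1.1 * (wp - 1.0)              # placeholder linear form factor, F(1)=1, slope 1.1
eff = 0.02 * (1.0 - 0.5 * (wp - 1.0))   # placeholder, slowly varying efficiency

def R(w_rec, w_true, s=0.12):
    """Placeholder Gaussian resolution; the analysis uses asymmetric double Gaussians."""
    return np.exp(-0.5 * ((w_rec - w_true) / s) ** 2) / (s * np.sqrt(2 * np.pi))

dGamma = K * F**2                       # |Vcb|^2 and constants absorbed in the overall norm
signal = np.array([np.trapz(dGamma * eff * R(wr, wp), wp) for wr in w])
print(signal.sum())
```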
The parameters of the analytic functions describing the combinatorial background (M_{4,c}(Δm) and P_4(ω)) were fitted entirely from the data, with only the choice of functional forms motivated by simulation. The shapes of the mass and recoil functions (including a small correlation between Δm and ω) are constrained by the same sign sample (which is almost entirely combinatorial background), and are the same for each vertex charge class c. The opposite sign, high Δm region serves to normalise the number of combinatorial background events in the low Δm region for each vertex charge class.

The values of F(1)|Vcb| and ρ² were extracted by maximising the total likelihood given by equation 3. The values of F(1)|Vcb| and ρ² were allowed to vary, together with the level of B → D*+h ℓ−ν background and 13 auxiliary parameters describing the combinatorial background distributions. A result of F(1)|Vcb| = (37.5 ± 1.2) × 10−3, ρ² = 1.12 ± 0.14 was obtained, where the errors are statistical only. The correlation between F(1)|Vcb| and ρ² is 0.77. The distributions of reconstructed ω for opposite and same sign events with Δm < 0.17 GeV, together with the fit results, are shown in Figure 3. The fit describes the data well, both in this region and in the high Δm region dominated by combinatorial background, for all three of the vertex charge classes.

By integrating the differential partial decay width (equation 2) over all values of ω, the branching ratio Br(B0 → D*+ℓ−ν) was also determined, with a statistical error only at this stage. This result is consistent with the world average once systematic errors are included.

Many previous results have been obtained using a constrained quadratic expansion for the form factor, F(ω) = F(1)[1 − a²(ω − 1) + b(ω − 1)²], where a² is a slope parameter to be determined by the fit, and b is constrained to b = 0.66a² − 0.11 [25]. To allow comparison with such measurements, the fit was also performed with this parameterisation of F(ω), giving the results F(1)|Vcb| = (36.9 ± 1.2) × 10−3 and a² = 0.88 ± 0.14, the correlation between F(1)|Vcb| and a² being 0.79. The difference in the two curvature parameters ρ² and a² is in good agreement with the expectation of ρ² − a² ≈ 0.21 [10].

5 Systematic Errors

Systematic errors arise from the uncertainties in the fit input parameters given in Table 1, the Monte Carlo modelling of the signal ω resolution, the recoil spectrum of b → D**ℓν decays and the selection efficiencies, and possible biases in the fitting method. The resulting systematic errors on the values of F(1)|Vcb|, ρ², and Br(B0 → D*+ℓ−ν) are summarized in Table 2 and described in more detail below.

Input quantities: The various numerical fit inputs were each varied according to the errors given in Table 1 and the fit repeated to assess the resulting uncertainties.

b → D**ℓν decays: The calculation of Leibovich et al. [23] was used to simulate the recoil spectrum of B → D*+h ℓ−ν decays, assumed to be produced via the semileptonic decay B → D**ℓν.
Here D** represents a P-wave orbitally excited charm meson. The calculation predicts the recoil spectra and relative rates of semileptonic decays involving both the narrow D_1 and D*_2 states and the wide D*_0 and D*_1 states. All these decays are suppressed close to ω = 1 by an extra factor of (ω² − 1) when compared with the signal B0 → D*+ℓ−ν decays. This reduces the uncertainty due to the rate of B → D*+h ℓ−ν decays in the extrapolation of the signal recoil spectrum to ω = 1. Non-resonant B → D*+h ℓ−ν decays are not included in the model, but are not expected to contribute close to ω = 1. The differential decay rates in [23] are given in terms of five possible expansion schemes and several unknown parameters: a kinetic energy term η_ke and the slopes of the Isgur-Wise functions for the narrow and wide D** states, τ1 and ζ1.

Table 2: Systematic errors for the inclusive analysis. The fractional errors on F(1)|Vcb| and Br(B0 → D*+ℓ−ν) are given, whereas the errors on ρ² are absolute.

These parameters were varied within the allowed ranges −0.75 < η_ke < 0.75 GeV, −2 < τ1 < −1, and −2 < ζ1 < 0, subject to the constraint that the ratio R = Γ(B → D*_2 ℓν)/Γ(B → D_1 ℓν) lie within the measured range R = 0.37 ± 0.16 [22,26]. This excludes the expansion schemes A∞ and B∞ of [23] and constrains the allowable values of η_ke in the others. The fraction of B → D**ℓ−ν decays involving the narrow D_1 and D*_2 states, which is not precisely predicted, was varied in the range 0.22 ± 0.06, obtained by comparing the measured rates for B0 semileptonic decays involving D+, D*+, D_1, and D*_2 with the inclusive semileptonic decay rate [18,22,26]. The systematic errors were determined as half the difference between the two parameter sets giving the most extreme variations in F(1)|Vcb| and ρ², and the central values were adjusted to lie half way between these two extremes. The values of both F(1)|Vcb| and ρ² are most sensitive to variations in η_ke, which is constrained by the measured value of R.

b fragmentation: The effect of uncertainties in the average b hadron energy x_E = E_b/E_beam was assessed in Monte Carlo simulation by reweighting the events so as to vary x_E in the range 0.702 ± 0.008 [28], and repeating the fit. The largest of the variations observed using the fragmentation functions of Peterson, Collins and Spiller, Kartvelishvili, and the Lund group [27] were taken as systematic errors.

D0 decay multiplicities: The signal reconstruction efficiency and vertex charge distributions are sensitive to the B0 decay multiplicity, which depends only on the D0 decay for the B0 decay channels of interest. The systematic error was assessed by varying separately the D0 charged and π0 decay multiplicities in Monte Carlo simulation according to the measurements of Mark III [29]. The branching ratio for D0 → K0, K0bar was also varied according to its uncertainty [18]. The resulting uncertainties on F(1)|Vcb| and ρ² from each variation were added in quadrature to determine the total systematic errors.
ω reconstruction: The modelling of the ω resolution depends on the description of the D*+ and B0 energy and angular distributions in the simulation. The reconstructed D*+ and B0 energy distributions in data and simulation were compared, and the means were found to differ by 0.04 and 0.13 GeV, respectively. The opposite hemisphere missing energy was found to agree within 5%. The corresponding systematics were assessed by shifting or scaling the data distributions and repeating the ω reconstruction and fit. The modelling of the angular resolution was checked by studying the agreement of the two angular estimators: the slow pion and D0 directions for the D*+, and the missing energy vector and vertex flight directions for the B0. The angular resolutions were found to be up to 5% worse in data, and the systematic error was assessed by degrading the simulated resolution appropriately. Finally, the fraction of events with ω reconstructed in the physical region (ω < ω_max) was found to be 3.5% smaller in the data, in both the opposite and same sign charge samples. The reconstruction efficiency was corrected for this effect, and an additional systematic error of half the correction (1.7%) was assumed. The final systematic errors due to ω resolution modelling are dominated by the B0 θ resolution.

Lepton identification efficiency: The electron identification efficiency has been studied using control samples of pure electrons from e+e− → e+e− events and photon conversions, and found to be modelled to a precision of 4.1% [14]. The muon identification efficiency has been studied using muon pairs produced in two-photon collisions and Z0 → µ+µ− events, giving an uncertainty of 2.1% [30].

Vertex tag efficiency: The fraction of hemispheres with identified leptons which also had a selected secondary vertex was found to be about 4% less in data than in simulation. The overall fraction of vertex-tagged hemispheres was also found to be about 4% lower in data. These discrepancies were translated into systematic errors on the efficiency to tag a semileptonic b decay with a secondary vertex in either the same or the opposite hemisphere, in each case attributing the whole discrepancy to a mismodelling of b hadron decays. The resulting errors on the same and opposite hemisphere tagging efficiencies were taken to be fully correlated.

Track reconstruction efficiency: The overall track reconstruction efficiency is known to be modelled to a precision of 1% [14], and a similar uncertainty was found to be appropriate for the particular class of slow pion tracks from D*+ decays. The systematic error was assessed by randomly removing 1% of tracks in the simulation and repeating the fit.

Tracking resolution: Uncertainties in the tracking detector resolution affect the efficiency, the ω reconstruction, and the vertex charge distributions. The associated error was assessed in the simulation by applying a global 10% degradation to all tracks, independently in the r-φ and r-z planes, as in [14]. To verify the correctness of the statistical errors returned by the fit, the fit was performed on many separate subsamples, and the distribution of fitted results was studied. Further checks on the data included performing the analysis separately for B0 decays involving electrons and muons, dividing the sample according to the year of data taking, and varying the lepton transverse momentum cut. In all cases, consistent results were obtained.
Including all systematic uncertainties, the final result of the inclusive analysis is obtained, where the first error is statistical and the second systematic in each case.

6 Measurement using exclusively reconstructed D*+ decays

In this analysis, the D0 from the B0 → D*+ℓ−ν, D*+ → D0π+ decay is reconstructed exclusively in the decay modes D0 → K−π+ ('3-prong') and D0 → K−π+π0 ('satellite', where the π0 is not reconstructed). The event selection, reconstruction, and determination of ω are exactly the same as described in [9]. The determination of the signal and background fractions, and the fit to extract F(1)|Vcb| and ρ², are performed as in [9], but have been updated using the improved form factor parameterisation [10], the updated input parameters given in Table 1, and the b → D**ℓν decay model [23] discussed in Section 5.

The fit is performed on 814 3-prong and 1396 satellite candidates, of which 505 ± 44 and 754 ± 72 are attributed to B0 → D*+ℓ−ν signal decays. The result of the fit is obtained with the first errors statistical and the second systematic. The statistical correlation between F(1)|Vcb| and ρ² is 0.95. The distribution of reconstructed ω for the selected candidates (both 3-prong and satellite) is shown in Figure 4. The branching ratio has also been determined, to be Br(B0 → D*+ℓ−ν) = (5.11 ± 0.19 ± 0.49)%.

The systematic errors arise from uncertainties in the background levels in the selected samples, as well as from uncertainties in the Monte Carlo simulations. They have been evaluated following the procedures described in [9], and are summarised in Table 3. The selection efficiency error includes contributions from the lepton identification efficiency, b fragmentation, and detector resolution uncertainties, described in detail in [31]. The largest change with respect to the previous result is due to the improved b → D**ℓν modelling, with the suppression of this background at values of ω close to one. This reduces the statistical error and the systematic uncertainty due to the rate of such decays, and shifts the central value of F(1)|Vcb| upwards as compared to [9].

7 Conclusions

The CKM matrix element |Vcb| has been measured by studying the rate of the semileptonic decay B0 → D*+ℓ−ν as a function of the recoil kinematics of both inclusively and exclusively reconstructed D*+ mesons. The two results are combined, taking into account the statistical correlation of 18% and the correlated systematic errors from physics inputs and detector resolution. The combined results are F(1)|Vcb| = (37.1 ± 1.0 ± 2.0) × 10−3 and Br(B0 → D*+ℓ−ν) = (5.26 ± 0.20 ± 0.46)%, where the first error is statistical and the second systematic in each case. The statistical and systematic correlations between F(1)|Vcb| and ρ² are 0.90 and 0.54, respectively. These results supersede our previous publication [9]. They are consistent with other determinations of F(1)|Vcb| at LEP [7,8] and at the Υ(4S) resonance [6]. The branching ratio is consistent with the world average result of (4.60 ± 0.27)% [18]. Using Heavy Quark Effective Theory calculations for F(1), a value of |Vcb| = (40.7 ± 1.1 ± 2.2 ± 1.6) × 10−3 is obtained, where the uncertainties are statistical, systematic, and theoretical, respectively. The result for F(1)|Vcb| is the most precise to date from any single experiment.
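As an illustration of the combination step (my own sketch, not the paper's procedure), two measurements with a known statistical correlation can be averaged with minimum-variance (BLUE-style) weights. The inclusive statistical input below is from the text; the exclusive central value and error are placeholders, since they are lost in this copy, and the 18% correlation is as quoted.

```python
import numpy as np

def combine(x1, s1, x2, s2, rho):
    """Minimum-variance average of two correlated measurements (two-input BLUE)."""
    cov = np.array([[s1 * s1, rho * s1 * s2], [rho * s1 * s2, s2 * s2]])
    w = np.linalg.solve(cov, np.ones(2))
    w /= w.sum()
    mean = w @ np.array([x1, x2])
    sigma = np.sqrt(w @ cov @ w)
    return mean, sigma

# Inclusive F(1)|Vcb| (units of 1e-3): 37.5 +- 1.2 (stat); exclusive values are placeholders.
print(combine(37.5, 1.2, 36.8, 1.5, 0.18))
```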
Figure 1: Reconstructed Δm distributions for selected (a) opposite sign and (b) same sign lepton-pion combinations; reconstructed vertex charge Q_vtx distributions for opposite sign events with Δm < 0.17 GeV and (c) σ_Qvtx > 0.9 and (d) σ_Qvtx < 0.9. In each case the data are shown by the points with error bars, and the Monte Carlo simulation contributions from signal B0 → D*+ℓ−ν decays, other resonant D*+ decays, and combinatorial background are shown by the open, single-hatched, and cross-hatched histograms, respectively. The vertex charge classes 1 (poorly reconstructed), 2 (well reconstructed neutral vertex), and 3 (well reconstructed charged vertex) are indicated.

Figure 2: Reconstructed ω distributions for various ranges of true ω (denoted ω′) in Monte Carlo B0 → D*+ℓ−ν events, together with the prediction from the resolution function. Events reconstructed in the unphysical region with ω > ω_max are rejected. The reconstruction efficiency as a function of true ω is also shown.

Figure 3: Distributions of reconstructed ω for (a) opposite sign and (b) same sign events with Δm < 0.17 GeV. The data are shown by the points with error bars and the expectation from the fit result by the histograms. The contributions from signal B0 → D*+ℓ−ν, resonant, and combinatorial backgrounds are indicated.

Fit method: The entire fitting procedure was tested on a fully simulated Monte Carlo sample seven times bigger than the data, with true values of F(1)|Vcb| = 32.5 × 10−3 and ρ² = 1.3. The fit gave the results F(1)|Vcb| = (31.8 ± 0.5) × 10−3 and ρ² = 1.25 ± 0.07, consistent with the true values. For each variable, the larger of the deviation of the result from the true value and its statistical error was taken as a systematic error due to possible biases in the fit. Additionally, the large Monte Carlo sample was reweighted to change the values of F(1)|Vcb| and ρ², and the fit correctly recovered the modified values.

Figure 4: Distribution of reconstructed ω for B0 → D*+ℓ−ν candidates in the exclusive analysis. The data are shown by the points with statistical error bars, and the fit result by the histogram. The contributions from signal B0 → D*+ℓ−ν, resonant, and combinatorial backgrounds are indicated.

Table 3: Systematic errors for the exclusive analysis.
9,746.8
2000-03-10T00:00:00.000
[ "Physics" ]
Large-scale atomistic simulations of magnesium oxide exsolution driven by machine learning potentials: Implications for the early geodynamo

The precipitation of magnesium oxide (MgO) from the Earth's core has been proposed as a potential energy source to power the geodynamo prior to inner core solidification. Yet, the stable phase and exact amount of MgO exsolution remain elusive. Here we utilize an iterative learning scheme to develop a unified deep learning interatomic potential for the Mg-Fe-O system valid over a wide pressure-temperature range. This potential enables direct, large-scale simulations of MgO exsolution processes at the Earth's core-mantle boundary. Our results suggest that Mg exsolves in the form of crystalline Fe-poor ferropericlase, as opposed to the liquid MgO component presumed previously. The solubility of Mg in the core is limited, and the present-day core is nearly Mg-free. The resulting exsolution rate is small yet nonnegligible, suggesting that MgO exsolution can provide a potentially important energy source, although it alone may be difficult to drive an early geodynamo.

Introduction

Chemical buoyancy due to the crystallization of the inner core is believed to have supplied energy to power the geodynamo over the last 0.5-1 billion years (Nimmo, 2015). Paleomagnetic records suggest the existence of a very early (3.4 Ga) magnetic field in the Earth's history, prior to the inner core crystallization (Tarduno et al., 2010). The energy source of this early geodynamo is enigmatic. Radiogenic heat production in the core may not be sufficient to sustain an early dynamo (Frost et al., 2022). The basal magma ocean may be electrically conductive (Stixrude et al., 2020), but the scale and longevity of a convective basal magma ocean are uncertain. Recent studies propose that exsolution of oxides from the core upon cooling, such as MgO (O'Rourke and Stevenson, 2016) or SiO2 (Hirose et al., 2017), may be a viable mechanism to power an early dynamo.

Experimental studies on metal-silicate partitioning suggest that the solubility of Mg is highly sensitive to temperature (Badro et al., 2016; Du et al., 2017). The high-temperature equilibration between the metallic and silicate melts during the core formation process may result in a few wt% of MgO dissolved in the core. Upon cooling, Mg is expected to precipitate out of the core as its solubility drops. However, the efficiency of this mechanism, especially for the MgO oxide, remains controversial (Badro et al., 2016; Du et al., 2017).

The precipitation rate of MgO has been widely estimated using Mg partitioning behaviors in the metal-silicate system (Badro et al., 2018; Du et al., 2019; Liu et al., 2019). This estimation, strictly speaking, is unjustified. In contrast to the core formation process, where metallic and silicate melts equilibrate, the precipitation process involves equilibration between the metallic melt and the exsolution, where the exact phase and chemistry of the exsolution depend on bulk compositions and thermodynamic conditions (Helffrich et al., 2020). Previous estimates, however, implicitly assume that exsolved MgO is a component of liquid silicate (Badro et al., 2018; Du et al., 2019; Liu et al., 2019). This assumption is questionable, as MgO is more refractory than SiO2, which may exsolve out of the core in the solid state (Hirose et al., 2017). Therefore, a careful examination of the Mg exsolution process is necessary.
In this study, we combine enhanced sampling, feature selection, and deep learning to develop a unified machine learning potential (MLP) for the Mg-Fe-O system. This MLP is used to perform large-scale molecular dynamics simulations to study the exsolution of Mg from core fluids. Unlike previous computational studies based on free energy calculations (Davies et al., 2018; Wahl and Militzer, 2015; Wilson et al., 2023), this method does not prescribe the state of the exsolved phase (Sun et al., 2022). The results inform the stable state of MgO precipitation, the Mg and O partitioning between core fluid and exsolution, and the efficiency of MgO exsolution in powering an early geodynamo.

Development of the Machine Learning Potential

A machine learning potential (MLP) is a non-parametric model that approximates the Born-Oppenheimer potential energy surface. We follow the same approaches outlined in our previous work on Mg-Si-O (Deng et al., 2023) and Mg-Si-O-H (Peng and Deng, 2024), where details of the machine learning process can be found. To briefly summarize our approach: the MLP is trained on a set of configurations drawn from multithermal and multibaric (MTMP) simulations (Piaggi and Parrinello, 2019), which are used to efficiently sample the multi-phase configuration space. We use the structure factor of B1 MgO as the collective variable to drive the sampling, and an iterative learning scheme as described by Deng et al. (2023) to efficiently select distinct samples from the molecular dynamics trajectories. High-accuracy ab initio calculations are performed on the selected sample configurations to derive the corresponding energies, atomic forces, and stresses. The DeePMD approach is employed to train an MLP which takes a configuration (a structure of a given atomic arrangement) and predicts its energy, atomic forces, and stresses without iterating through the time-consuming self-consistent field calculation (Wang et al., 2018; Zhang et al., 2018). The details of the DeePMD approach and the density functional theory (DFT) calculations can be found in the Supplementary Information (Text S1, Figure S1). Our MLP covers a wide compositional space, being trained on Mg-Fe-O systems of varying Mg:Fe:O ratios, including the pure endmembers Fe and MgO as well as intermediate compositions denoted by (MgO)aFebOc, where a = 0-64, b = 0-64, and c = 0-16, with 2a + b ≥ 64. The final training set consists of 4466 configurations generated at pressures up to 200 GPa and temperatures up to 8000 K.

Two-phase molecular dynamics simulation

Two-phase simulations are performed on a pure MgO system to determine the melting point of B1 MgO. Alfè (2005) found that systems of 432 atoms are sufficient to yield melting points as well converged as those of larger systems. Here, supercells of 432 atoms are constructed and then relaxed for 1000 steps at the desired pressure and temperature conditions in the NPT ensemble. The relaxed cell is then used to perform NVT simulations at temperatures far exceeding the melting temperature, with the atoms of half the cell fixed and the forces applied to these atoms set to 0.
The resulting structure is half-molten and half-crystalline. We relax this structure again at the target pressure and temperature for 1000 steps to obtain the initial configuration for the two-phase simulations. Simulations on the two-phase supercell of solid-liquid coexistence are then performed. If the whole cell is molten (or crystallized) at the end, the simulation temperature is above (or below) the melting point. The state of the system can be determined by analyzing the radial distribution functions, allowing us to pinpoint the upper and lower bounds of the melting point.

Exsolution simulation

We construct systems of various Mg:Fe:O ratios by substituting/removing Mg and/or O atoms in supercells of B1 MgO. Initial configurations are melted at 8000 K and 140 GPa under the NPT ensemble for ~10 ps. We inspect trajectories and radial distribution functions to ensure the systems are fully molten and well relaxed. The resulting configurations are further used to perform simulations at 140 GPa and the target temperatures under the NPT ensemble for up to several nanoseconds to simulate the exsolution process.

Gibbs dividing surface

To determine the composition of the two coexisting phases, for every frame we locate the Gibbs dividing surface (GDS) that separates the whole cell into an oxide region, a metallic region, and two interfaces in between (Sega et al., 2018; Willard and Chandler, 2010). For every snapshot, we calculate the coarse-grained instantaneous density field at point r by

ρ(r, t) = Σ_i (2πs²)^(−3/2) exp(−|r − r_i(t)|² / (2s²)),

where r_i is the position of the ith atom and s is the coarse-graining length, set to 2.5 Å here (Willard and Chandler, 2010). The instantaneous surface s(t) is defined as a contour surface of the instantaneous coarse-grained density. The proximity a_i of the ith atom to this surface is its signed distance along the surface normal n(t), taken in the direction of the density gradient at that point. This instantaneous density field is then projected onto each atom and associated with the corresponding proximity. We find that ρ(a) follows the expected form

ρ(a) = ρ_metal + (ρ_oxide − ρ_metal)/2 × [tanh((a − a0)/w) − tanh((a − a1)/w)],   (3)

where ρ_oxide and ρ_metal are the densities of the oxide and metal phases, respectively; a0 and a1 are the positions of the Gibbs dividing surfaces; and w is the thickness of the interface. Fitting ρ(a) to Eq. (3) yields the locations of the Gibbs dividing surfaces, as well as the density of each phase. The metal and oxide phases are defined by proximities a_i > 2 Å and a_i < −2 Å, respectively. We count the number of atoms of the metal and oxide phases in each frame. A long trajectory after the system is equilibrated is used to determine the average concentrations in each phase and the associated standard deviations. For more details, the reader is referred to Deng and Du (2023), Sega et al. (2018), Willard and Chandler (2010), and Xiao and Stixrude (2018).

Benchmarks of the Machine Learning Potential

We compare the energies, atomic forces, and stresses from the MLP to those from DFT calculations for 15078 configurations that are not included in the training set (Figure S2). The root-mean-square errors of the energies, atomic forces, and stresses are 6.34 meV/atom, 0.27 eV/Å, and 0.48 GPa, respectively. We perform two additional tests to further examine the reliability of the MLP. First, we perform MD simulations with supercells of B1 MgO solid, MgO liquid, and a Mg-Fe-O liquid mixture, respectively.
These supercells are larger than the training configurations. The root-mean-square error of the energies predicted by the MLP with respect to the DFT calculations is similar to the error on the testing sets (Figure S3). This verification test further proves the accuracy of the energy prediction and also demonstrates the transferability of the MLP to structures larger than the train/test sets. Second, we calculate the melting point of B1 MgO at 140 GPa using the solid-liquid two-phase coexistence method with a supercell of 432 atoms. For both DFT and the MLP, the system crystallizes at 7700 K and melts at 7800 K, suggesting a melting point of 7750 ± 50 K at 140 GPa and further validating the robustness of the MLP. The melting temperature is also consistent with previous studies (Alfè, 2005; Du and Lee, 2014).

System convergence of the exsolution simulation

The robust MLP for the Mg-Fe-O system allows for large-scale exsolution simulations. We first examine the convergence of the Fe liquid composition with respect to the simulation cell size by performing exsolution simulations at 5000 K and 140 GPa with five Mg-Fe-O liquid mixtures in which the ratio of Mg, O, and Fe atoms is fixed at 2:2:3, i.e., Mg64O64Fe96, Mg512O512Fe768, Mg1728O1728Fe2592, Mg2304O2304Fe3456, and Mg3136O3136Fe4704. In all simulations, the system quickly demixes to form MgO-rich and Fe-rich regions, and subsequently the MgO-rich region spontaneously crystallizes to form ferropericlase while the metallic phase remains liquid. The resulting atomic fractions of Mg and O in the metallic phase converge when the system size reaches 2000 atoms (Figure S4). Larger systems also yield better statistics and thus smaller uncertainties in the atomic fractions. Based on this test, all the partitioning results reported here are derived from simulations performed with systems of more than 2000 atoms, to ensure convergence and robust statistics.

Exsolution process

In all simulations considered, exsolution occurs spontaneously, within a few picoseconds at 4000 K to a few nanoseconds at 5500 K. The exsolutions are all solid ferropericlase with small amounts of FeO. The interfaces between the exsolution and the Fe liquid are typically irregular, as they form spontaneously without interference.
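The interface analysis described in the Gibbs-dividing-surface section above amounts to a one-dimensional curve fit. The following sketch fits Eq. (3) to a synthetic density profile; the profile data, noise level, and parameter guesses are made up for illustration and are not the study's values.

```python
import numpy as np
from scipy.optimize import curve_fit

def profile(a, rho_metal, rho_oxide, a0, a1, w):
    """Two-interface tanh form of Eq. (3) for the density vs. proximity a."""
    return rho_metal + 0.5 * (rho_oxide - rho_metal) * (
        np.tanh((a - a0) / w) - np.tanh((a - a1) / w)
    )

# Synthetic profile: oxide slab between a0 = -8 and a1 = 8 Angstrom, plus noise.
a = np.linspace(-20, 20, 400)
rho = profile(a, 9.5, 7.0, -8.0, 8.0, 1.5) + np.random.default_rng(0).normal(0, 0.05, a.size)

popt, _ = curve_fit(profile, a, rho, p0=[9.0, 7.5, -5.0, 5.0, 1.0])
rho_metal, rho_oxide, a0, a1, w = popt
print(f"GDS at a0 = {a0:.2f} A, a1 = {a1:.2f} A, interface width w = {w:.2f} A")
```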
Take the exsolution simulation of the Mg2088Fe3456O2520 liquid at 140 GPa and 5500 K as an example (Figure 1; Supplementary Movie 1). It starts with a homogeneous liquid (Figure 1a) and quickly demixes to form patches of MgO-rich liquid and Fe-rich liquid, with a continuous drop of potential energy. Within around 250 ps, the MgO-rich patches and the Fe-rich patches conglomerate, respectively, dividing the whole cell into two regions: one enriched in MgO and the other in Fe. The MgO-rich region remains liquid for another 750 ps until a sudden crystallization occurs to form ferropericlase (Figure 1b). The crystallization is a rapid process accompanied by a significant drop in potential energy. Quickly after ferropericlase crystallizes, the potential energy plateaus. The element exchange between ferropericlase and the residual Fe liquid continues within the interface region. We analyze the trajectories at this stage, calculate the Gibbs dividing surface, and determine the average composition of each phase over the last 100 ps. The chemical compositions of both phases are shown in Figure 1c. The metallic liquid is oxygen-rich and magnesium-poor. The exsolved ferropericlase is of B1 structure and is nearly stoichiometric (Mg0.974Fe0.026)O. Similar analyses have been applied to all other exsolution simulations, and the compositions of the simulation products are summarized in Table S1.

Figure 1: Exsolution simulation of the Mg2088Fe3456O2520 liquid at 140 GPa and 5500 K, with the separation of ferropericlase and metallic liquid. The sudden drop of internal energy corresponds to the crystallization of ferropericlase.

Element partitioning

We further analyze the element partitioning between the exsolved ferropericlase and the Fe liquid considering two dissociation reactions: MgO^ox = Mg^met + O^met and FeO^ox = Fe^met + O^met, where the superscripts ox and met indicate oxide and metal, respectively. We also consider the Mg exchange reaction MgO^ox + Fe^met = FeO^ox + Mg^met, but the fit is poor, as also reported in other studies (Badro et al., 2018; Liu et al., 2019). The partitioning data are fitted to a standard thermodynamic model, with the non-ideality described by the epsilon formalism of Ma (2001). The phase relation between ferropericlase and Fe liquid has been studied mostly at low pressures and low temperatures (Asahara et al., 2007; Frost et al., 2010; Ozawa et al., 2008). Unfortunately, Mg contents in metallic melts were not reported, and only the O exchange coefficients K_D^O were reported in these studies. Thus, we only include the K_D^O of these experiments in the fitting (Texts S2, S3; Table S2).
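Given the phase compositions extracted via the Gibbs dividing surface, apparent equilibrium constants of the two dissociation reactions can be computed from mole fractions. This sketch shows the arithmetic only, with illustrative numbers and an ideal-mixing approximation rather than the epsilon formalism used in the study.

```python
import math

def k_dissociation(x_oxide_ox, x_cation_met, x_O_met):
    """Apparent equilibrium constant of MO^ox = M^met + O^met, ideal mixing assumed."""
    return x_cation_met * x_O_met / x_oxide_ox

# Illustrative mole fractions (not the study's values): a nearly stoichiometric
# (Mg0.974 Fe0.026)O exsolution coexisting with an O-rich, Mg-poor metallic liquid.
K_MgO = k_dissociation(x_oxide_ox=0.974, x_cation_met=0.002, x_O_met=0.15)
K_FeO = k_dissociation(x_oxide_ox=0.026, x_cation_met=0.80, x_O_met=0.15)
print(f"log10 K_MgO = {math.log10(K_MgO):.2f}, log10 K_FeO = {math.log10(K_FeO):.2f}")
```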
Our calculated K_D^Mg is slightly larger than that reported by a recent ab initio calculation (Wilson et al., 2023), where pure B1 MgO is assumed as the exsolved phase, while our exsolution simulations show that the precipitates contain small amounts of FeO (Figure 1, Table S1). The incorporation of FeO in the exsolved phase likely changes the free energy of the system, leading to a different K_D^Mg. Wahl and Militzer (2015) also performed ab initio simulations on the Mg-Fe-O system, but focused on high temperatures close to the solvus closure and did not report Mg partitioning results at conditions overlapping this study, which precludes a direct comparison. Compared with the previously determined K_D^Mg between Fe liquid and silicate melt (Badro et al., 2018; Du et al., 2019; Liu et al., 2019), the K_D^Mg between Fe liquid and solid ferropericlase shows a similar temperature dependence but is overall approximately one order of magnitude lower (Figure 2a), indicating a low Mg content in the Fe liquid when equilibrated with ferropericlase. This is expected, as MgO preferentially enters ferropericlase when silicate melt crystallizes (Boukaré et al., 2015).

Oxygen partitioning between ferropericlase and liquid Fe is strongly controlled by temperature, in agreement with previous experiments (Asahara et al., 2007; Frost et al., 2010; Ozawa et al., 2008) and calculations (Davies et al., 2018). The K_D^O between Fe liquid and silicate melt derived by Liu et al. (2019) generally aligns with the K_D^O between Fe liquid and solid ferropericlase, especially at high temperatures. Our K_D^O can be well fitted, together with previous experimental data, to a unified thermodynamic model (Asahara et al., 2007; Ozawa et al., 2008), except for the four data points reported by Frost et al. (2010) (Text S3; Table S3). At around 30-70 GPa and with similar oxygen contents in the liquid Fe, the K_D^O of Frost et al. (2010) are around half a log unit higher than those of Ozawa et al. (2008) and our extrapolated results. The source of this discrepancy is unknown but may arise from carbon contamination. The K_D^O reported by an early DFT calculation (Davies et al., 2018) is around 0.3-0.6 log units higher than those of Ozawa et al. (2008) and our results at similar conditions. We note that Davies et al. (2018) calculated the chemical potential of FeO for defect-free (Mg,Fe)O. Yet, both our simulations and previous studies (Karki and Khanduja, 2006; Van Orman et al., 2003) support the existence of defects in ferropericlase at high temperatures, which may lower the free energy of the host mineral and enrich FeO in ferropericlase, leading to a reduced K_D^O.

Figure 2: Comparison of exchange coefficients. (a) Silicate-melt studies shown are Badro et al. (2018), Liu et al. (2019), and Du et al. (2019). (b) Experimental studies include Ozawa et al. (2008) (O08, downward triangles), Asahara et al. (2007) (A07, diamonds), and Frost et al. (2010) (F10, upward triangles); the DFT study is Davies et al. (2018) (D18, squares). All previous results are normalized to 140 GPa for a direct comparison using the best-fit pressure dependence, where P and T are the experimental/calculation pressure and temperature and the pressure coefficient is a fitted constant (Table S3). L19 (dashed line) indicates the O exchange coefficient for silicate melt calibrated by Liu et al. (2019). Uncertainties of the exchange coefficients of this study are roughly represented by the symbol size.

Exsolution rate and geodynamo

Earth's accretion and differentiation in its early history likely resulted in a core much hotter than it is today.
The precipitation of light elements due to the secular cooling of the core may have provided a vital energy source to drive the geodynamo.The energetics of the exsolution-powered dynamo hinge on the cooling rate and the exsolution rate.Here, we adopt a core thermal evolution model proposed by O'Rourke et al. (2017) where the CMB temperature (TCMB) drops from around 5000 K to around 4100 K over the first ~3.8 billion years (Gyr) with a cooling rate of ~230 K Gyr -1 .Given this thermal history of the core, the phase of exsolution and the associated exsolution rate can be further determined using the element partitioning models, along with knowledge of the initial core composition.All previous modeling of Mg exsolution from the core assume that MgO exsolve as a component of silicate melts.However, our simulations show that MgO should exsolve as a component of crystalline ferropericlase, at least when light elements other than Mg and O are absent (Badro et al., 2018;Du et al., 2019;Liu et al., 2019;Mittal et al., 2020).Here, we first examine the Mg exsolution and its potential to drive the early geodynamo for an Mg-and O-bearing core, and then we discuss the effects of additional light elements. To model Mg exsolution from a core fluid with only Mg and O as light elements, we first determine the saturation conditions under which Mg precipitates.Previous N-body simulations and metal-silicate equilibrium experiments suggest that the Earth's core following its formation may contain 1.6-5 wt% O (Fischer et al., 2017;Liu et al., 2019;Rubie et al., 2015).The corresponding saturation magnesium concentration in the core is 0.04-0.19wt% at 140 GPa, as determined using the 2 34 between metal and ferropericlase, which is significantly lower than that determined by 2 34 between metal and silicate melt (Liu et al., 2019) (Figure 3a).This difference is expected, as the former 2 34 value is about one order of magnitude smaller than the latter (Figure 2a).Hence, our work implies that a substantial amount of Mg may have already been exsolved by the time the core cools to a TCMB of 5000 K. Further cooling reduces the Mg solubility in the core, with concentrations approaching 0.02-0.003wt% at a TCMB of 4000 K.This suggests a diminishingly small amount of Mg in the present-day outer core.We use these saturation magnesium conditions as the initial core composition in our exsolution modeling. Despite the contrasting Mg solubilities in the core, the compositions of the exsolutions are similar and exhibit a comparable trend with temperature.Specifically, the exsolved phase in both models becomes increasingly FeO-rich with cooling.At 4000 K, the exsolution contains up to 20 wt% FeO (Figure 3b). Throughout the thermal history, TCMB is lower than the solidus of exsolved ferropericlase (Deng et al., 2019), indicating that exsolutions remain solid. The resulting exsolution rates decrease with temperature, with the values dropping from 1.4-5.6 ×10 -6 K -1 at 5000 K to 0.2-1.0×10 -6 K -1 at 4000 K. 
Exsolution rates of ferropericlase are approximately one order of magnitude smaller than those predicted for silicate melt exsolution (Figure 3c).Exsolutions are depleted in iron and enriched in Mg.As a result, they are lighter than the outer core fluid and thus provide the buoyancy flux that may sustain an exsolution-driven dynamo (O' Rourke and Stevenson, 2016).Converting the exsolution rate to the magnetic field intensity is model dependent, however.The upper bound of the exsolution rates (1-5.6 ×10 -6 K -1 ) derived here are similar to the previous reports (Badro et al., 2016;Du et al., 2017).While Du et al. (2019) conclude that this exsolution rate is not sufficient to power the early geodynamo alone, Badro et al. (2018) use a scaling law that relates the exsolution rate to dipolar magnetic field intensity ( DEFGHI6 J;K8L6 ) and argue that MgO exsolution can well produce the dipolar magnetic field intensity at Earth's surface consistent with observations.We follow (Badro et al., 2018) to convert the exsolution rate to DEFGHI6 J;K8L6 (Figure 3d).The results show that DEFGHI6 J;K8L6 generated by the upper bound exsolution rate is broadly consistent with the paleo-intensities records dating back to 3.4 Gyr (Tarduno et al., 2010), and that generated by the lower bound rate is overall smaller than the observations and thus may not be sufficient (Tarduno et al., 2015).Overall, we find that MgO exsolution alone may be difficult to power the early geodynamo, but it is nevertheless an important energy source. While the exact composition of the core remains unknown, it may contain other light elements, such as S, Si, C, and H (Hirose et al., 2021).As the core cools, the solubility of these light elements tends to decrease, leading to their exsolution.For example, in a core composed solely of Si, O, and Fe, the exsolved phase would likely be solid SiO2 (Hirose et al., 2017;Zhang et al., 2022).The study by Helffrich et al. 
(2020) on the joint solubility of Mg, O, and Si in liquid Fe suggests that the presence of Si enhances the retention of Mg in metal, thereby reducing the extent of MgO exsolution.It is crucial to note, however, that their thermodynamic model is based on data from the silicate melt-Fe system and the SiO2-Fe system without considering ferropericlase.As a result, in their model, MgO is implicitly treated as a component of liquid rather than as solid ferropericlase.Adding further complexity, instead of precipitating separate MgO and SiO2 solids, a Mg-Fe-Si-O system may yield exsolutions of MgSiO3 bridgmanite or post-perovskite.Indeed, bridgmanite and post-perovskite with low iron content are quite refractory, with melting temperatures exceeding the TCMB assumed here and thus may form stable exsolution phases (Deng et al., 2023;Zerr and Boehler, 1993).Whether bridgmanite, post-perovskite, solid SiO2, B1 MgO, or liquid is the stable exsolution phase depends on their free energies and is still open to question.Consequently, a comprehensive re-evaluation of the phase relations in the Mg-Si-O-Fe system and more broadly, in the Mg-Si-O-C-H-S-Fe system, which considers exsolutions as solids, is warranted.This study marks a first attempt to demonstrate the significance of solid exsolutions and the substantially different behaviors they exhibit during exsolution.element partitioning models from this study with crystalline ferropericlase as the exsolved phase (solid lines) and those from a recent study with silicate melts as the exsolved phase (dashed lines) (Liu et al., 2019), respectively.Red and green denote initial oxygen concentration in the core of 5 wt% and 1.6 wt% at 5000 K, respectively. Conclusion We developed a machine learning potential of ab initio quality for Mg-Fe-O system using the iterative training scheme, which enables large-scale atomistic simulations of Mg exsolution processes at 4000-5500 K and 140 GPa without any ad hoc assumptions regarding the stable exsolution phase.The exsolved phase is solid Fe-poor ferropericlase across all the thermodynamic conditions considered.Using the Gibbs dividing surface method, we analyze simulation trajectories, obtain the chemical composition of exsolved phases and liquid phases, and determine Mg and O exchange coefficients.The results show that partitioning of Mg into the exsolved phase is significantly enhanced when compared to scenarios where the exsolved phase is assumed to be liquid, as in previous studies (Badro et al., 2018;Du et al., 2019;Liu et al., 2019;Mittal et al., 2020).The resulting small Mg exchange coefficients suggest a reduced Mg solubility in the core.Assuming a reasonable initial core composition with 1.6-5 wt% oxygen, the MgO exsolution rate may be insufficient to generate the dipolar magnetic field at the Earth's surface with intensities that align with the paleomagnetic record. 
Introduction

Chemical buoyancy due to the crystallization of the inner core is believed to have supplied energy to power the geodynamo over the last 0.5-1 billion years (Nimmo, 2015). Paleomagnetic records suggest the existence of a very early (3.4 Ga) magnetic field in Earth's history, prior to inner core crystallization (Tarduno et al., 2010). The energy source of this early geodynamo is enigmatic. Radiogenic heat production in the core may not be sufficient to sustain an early dynamo (Frost et al., 2022). The basal magma ocean may be electrically conductive (Stixrude et al., 2020), but the scale and longevity of a convective basal magma ocean are uncertain. Recent studies propose that exsolution of oxides from the core upon cooling, such as MgO (O'Rourke and Stevenson, 2016) or SiO2 (Hirose et al., 2017), may be a viable mechanism to power an early dynamo.

Experimental studies on metal-silicate partitioning suggest that the solubility of Mg is highly sensitive to temperature (Badro et al., 2016; Du et al., 2017). The high-temperature equilibration between the metallic and silicate melts during core formation may have left a few wt% of MgO dissolved in the core. Upon cooling, Mg is expected to precipitate out of the core as its solubility drops. However, the efficiency of this mechanism, especially for MgO, remains controversial (Badro et al., 2016; Du et al., 2017).

The precipitation rate of MgO has been widely estimated using Mg partitioning behavior in the metal-silicate system (Badro et al., 2018; Du et al., 2019; Liu et al., 2019). Strictly speaking, this estimation is unjustified. In contrast to the core formation process, where metallic and silicate melts equilibrate, the precipitation process involves equilibration between the metallic melt and the exsolution, where the exact phase and chemistry of the exsolution depend on the bulk composition and thermodynamic conditions (Helffrich et al., 2020). Previous estimates, however, implicitly assume that exsolved MgO is a component of liquid silicate (Badro et al., 2018; Du et al., 2019; Liu et al., 2019). This assumption is questionable, as MgO is more refractory than SiO2, which may exsolve out of the core in the solid state (Hirose et al., 2017). Therefore, a careful examination of the Mg exsolution process is necessary.
In this study, we combine enhanced sampling, feature selection, and deep learning to develop a unified machine learning potential (MLP) for the Mg-Fe-O system. This MLP is used to perform large-scale molecular dynamics simulations to study the exsolution of Mg from core fluids. Unlike previous computational studies based on free energy calculations (Davies et al., 2018; Wahl and Militzer, 2015; Wilson et al., 2023), this method does not prescribe the state of the exsolved phase (Sun et al., 2022). The results inform the stable state of MgO precipitation, Mg and O partitioning between core fluid and exsolution, and the efficiency of MgO exsolution in powering an early geodynamo.

Development of Machine Learning Potential

A machine learning potential (MLP) is a non-parametric model that approximates the Born-Oppenheimer potential energy surface. We follow the approaches outlined in our previous work on Mg-Si-O (Deng et al., 2023) and Mg-Si-O-H (Peng and Deng, 2024), where details of the machine learning process can be found. Briefly, the MLP is trained on a set of configurations drawn from multithermal and multibaric simulations (Piaggi and Parrinello, 2019), which efficiently sample the multi-phase configuration space. We use the structure factor of B1 MgO as the collective variable to drive the sampling, and an iterative learning scheme as described by Deng et al. (2023) to efficiently select distinct samples from the molecular dynamics trajectories. High-accuracy ab initio calculations are performed on the selected configurations to derive the corresponding energies, atomic forces, and stresses. The DeePMD approach is employed to train an MLP that takes a configuration (a structure of a given atomic arrangement) and predicts its energy, atomic forces, and stresses without iterating through the time-consuming self-consistent field calculation (Wang et al., 2018; Zhang et al., 2018). Details of the DeePMD approach and the density functional theory (DFT) calculations can be found in the Supplementary Information (Text S1, Figure S1).

Two-phase molecular dynamics simulation

Two-phase simulations are performed on a pure MgO system to determine the melting point of B1 MgO. Alfè (2005) found that systems of 432 atoms are sufficient to yield melting points as well converged as those of larger systems. Here, supercells of 432 atoms are constructed and relaxed for 1000 steps at the desired pressure and temperature conditions in the NPT ensemble. The relaxed cell is then used to perform NVT simulations at temperatures far exceeding the melting temperature, with the atoms of half the cell fixed and the forces applied to these atoms set to 0. The resulting structure is half-molten and half-crystalline. We relax this structure again at the target pressure and temperature for 1000 steps to obtain the initial configuration for the two-phase simulations. Simulations on the two-phase supercell of solid-liquid coexistence are then performed. If the whole cell is molten (or crystallized) at the end, the simulation temperature is above (or below) the melting point. The state of the system is determined by analyzing the radial distribution functions, allowing us to pinpoint the upper and lower bounds of the melting point.

Exsolution simulation

We construct systems of various Mg:Fe:O ratios by substituting/removing Mg and/or O atoms in supercells of B1 MgO. Initial configurations are melted at 8000 K and 140 GPa under the NPT ensemble for ~10 ps. We inspect trajectories and radial distribution functions to ensure the systems are fully molten and well relaxed. The resulting configurations are then used to perform simulations at 140 GPa and the target temperatures under the NPT ensemble for up to several nanoseconds to simulate the exsolution process.
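To make the simulation setup concrete, below is a minimal sketch of how such an NPT exsolution run could be driven through DeePMD-kit's ASE calculator. The model filename, supercell size, and thermostat/barostat couplings are illustrative assumptions, not the production settings of this study.

```python
from ase.build import bulk
from ase.md.npt import NPT
from ase.md.velocitydistribution import MaxwellBoltzmannDistribution
from ase import units
from deepmd.calculator import DP  # ASE calculator interface of DeePMD-kit

# Hypothetical trained Mg-Fe-O model file (placeholder name).
calc = DP(model="mg_fe_o.pb")

# Start from a cubic B1 MgO supercell; Fe substitutions / O removals to reach
# the target Mg:Fe:O ratio would be applied here before melting.
atoms = bulk("MgO", crystalstructure="rocksalt", a=4.2, cubic=True).repeat((6, 6, 6))
atoms.calc = calc

MaxwellBoltzmannDistribution(atoms, temperature_K=8000)  # pre-melt at 8000 K

# NPT dynamics at 140 GPa and the target temperature (here 5500 K);
# ttime/pfactor are illustrative thermostat/barostat parameters.
dyn = NPT(atoms,
          timestep=1.0 * units.fs,
          temperature_K=5500,
          externalstress=140.0 * units.GPa,
          ttime=25.0 * units.fs,
          pfactor=(75.0 * units.fs) ** 2 * 100.0 * units.GPa)
dyn.run(1_000_000)  # ~1 ns; demixing and crystallization occur spontaneously
```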
Gibbs dividing surface

To determine the compositions of the two coexisting phases, for every frame we locate the Gibbs dividing surface (GDS) that separates the whole cell into an oxide region, a metallic region, and the two interfaces in between (Sega et al., 2018; Willard and Chandler, 2010). For every snapshot, we calculate the coarse-grained instantaneous density field at point $\mathbf{r}$ by

$$\bar{\rho}(\mathbf{r},t)=\sum_i \left(2\pi\xi^2\right)^{-3/2}\exp\!\left(-\frac{|\mathbf{r}-\mathbf{r}_i(t)|^2}{2\xi^2}\right), \qquad (1)$$

where $\mathbf{r}_i$ is the position of the $i$-th atom and $\xi$ is the coarse-graining length, set to 2.5 Å here (Willard and Chandler, 2010). The instantaneous surface $\mathbf{s}(t)$ is defined as a contour surface of the instantaneous coarse-grained density. The proximity of the $i$-th atom to this surface is

$$a_i(t)=\left[\mathbf{r}_i(t)-\mathbf{s}\right]\cdot\mathbf{n}(\mathbf{s},t), \qquad (2)$$

where $\mathbf{n}(\mathbf{s},t)$ is the surface normal in the direction of the density gradient at that point. This instantaneous density field is then projected onto each atom and associated with the corresponding proximity. We find that $\rho(a)$ follows the expected hyperbolic-tangent profile

$$\rho(a)=\rho_{\rm metal}+\frac{\rho_{\rm oxide}-\rho_{\rm metal}}{2}\left[\tanh\!\left(\frac{a-a_0}{w}\right)-\tanh\!\left(\frac{a-a_1}{w}\right)\right], \qquad (3)$$

where $\rho_{\rm oxide}$ and $\rho_{\rm metal}$ are the densities of the oxide and metal phases, respectively; $a_0$ and $a_1$ are the positions of the Gibbs dividing surfaces; and $w$ is the thickness of the interface. Fitting $\rho(a)$ to Eq. (3) yields the locations of the Gibbs dividing surfaces as well as the density of each phase. The metal and oxide phases are defined by $a_i > 2$ Å and $a_i < -2$ Å, respectively. We count the number of atoms in the metal and oxide phases in each frame. A long trajectory after the system is equilibrated is used to determine the average concentrations in each phase and the associated standard deviations. For more details, the reader is referred to Deng and Du (2023), Sega et al. (2018), Willard and Chandler (2010), and Xiao and Stixrude (2018).
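As a simplified, one-dimensional illustration of this analysis, the sketch below builds a coarse-grained density profile along the interface normal for a synthetic snapshot and fits the hyperbolic-tangent form of Eq. (3) with SciPy. The atom positions and binning choices are invented for demonstration; the real analysis operates on the full three-dimensional density field.

```python
import numpy as np
from scipy.optimize import curve_fit

XI = 2.5  # coarse-graining length in Angstrom (Willard & Chandler, 2010)

def coarse_grained_density(z_grid, z_atoms, area, xi=XI):
    """1D coarse-grained density along the interface normal z.

    Each atom contributes a normalized Gaussian of width xi; `area` is the
    cell cross-section so the result has units of number density.
    """
    diff = z_grid[:, None] - z_atoms[None, :]
    gauss = np.exp(-diff**2 / (2 * xi**2)) / np.sqrt(2 * np.pi * xi**2)
    return gauss.sum(axis=1) / area

def tanh_profile(a, rho_ox, rho_met, a0, a1, w):
    """Two-interface hyperbolic-tangent profile of Eq. (3)."""
    return rho_met + 0.5 * (rho_ox - rho_met) * (
        np.tanh((a - a0) / w) - np.tanh((a - a1) / w))

# Toy snapshot: a denser oxide-like slab between z = 12 and 24 Angstrom.
rng = np.random.default_rng(0)
z_atoms = np.concatenate([rng.uniform(12, 24, 800),
                          rng.uniform(0, 12, 600),
                          rng.uniform(24, 36, 600)])
z_grid = np.linspace(0.0, 36.0, 361)
rho = coarse_grained_density(z_grid, z_atoms, area=40.0 * 35.0)

mask = (z_grid > 4.0) & (z_grid < 32.0)  # avoid edge effects of the finite slab
p0 = [rho.max(), rho.min(), 12.0, 24.0, 2.0]  # initial guesses
popt, _ = curve_fit(tanh_profile, z_grid[mask], rho[mask], p0=p0)
print(f"Gibbs dividing surfaces at z = {popt[2]:.1f} and {popt[3]:.1f} Angstrom")
```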
Benchmarks of the Machine Learning Potential

We compare the energies, atomic forces, and stresses from the MLP to those from DFT calculations for 15078 configurations that are not included in the training set (Figure S2). The root-mean-square errors of the energies, atomic forces, and stresses are 6.34 meV atom⁻¹, 0.27 eV Å⁻¹, and 0.48 GPa, respectively. We perform two additional tests to further examine the reliability of the MLP. First, we perform MD simulations with supercells of B1 MgO solid, MgO liquid, and a Mg-Fe-O liquid mixture, respectively. These supercells are larger than the training configurations. The root-mean-square error of the energies predicted by the MLP with respect to the DFT calculations is similar to the error on the testing set (Figure S3). This verification further confirms the accuracy of the energy prediction and demonstrates the transferability of the MLP to structures larger than those in the train/test sets. Second, we calculate the melting point of B1 MgO at 140 GPa using the solid-liquid two-phase coexistence method with a supercell of 432 atoms. For both DFT and the MLP, the system crystallizes at 7700 K and melts at 7800 K, suggesting a melting point of 7750±50 K at 140 GPa and further validating the robustness of the MLP. This melting temperature is also consistent with previous studies (Alfè, 2005; Du and Lee, 2014).

System convergence of the exsolution simulation

The robust MLP for the Mg-Fe-O system allows for large-scale exsolution simulations. We first examine the convergence of the Fe liquid composition with respect to the simulation cell size by performing exsolution simulations at 5000 K and 140 GPa with five Mg-Fe-O liquid mixtures in which the ratio of Mg, O, and Fe atoms is fixed at 2:2:3: Mg64O64Fe96, Mg512O512Fe768, Mg1728O1728Fe2592, Mg2304O2304Fe3456, and Mg3136O3136Fe4704. For all simulations, the system quickly demixes to form MgO-rich and Fe-rich regions, and subsequently the MgO-rich region spontaneously crystallizes to form ferropericlase while the metallic phase remains liquid. The resulting atomic fractions of Mg and O in the metallic phase converge once the system size reaches ~2000 atoms (Figure S4). Larger systems also yield better statistics and thus smaller uncertainties in the atomic fractions. Based on this test, all the partitioning results reported here are derived from simulations with systems of more than 2000 atoms to ensure convergence and robust statistics.

Exsolution process

In all simulations considered, exsolution occurs spontaneously, within a few picoseconds at 4000 K to a few nanoseconds at 5500 K. The exsolutions are all solid ferropericlase with small amounts of FeO. The interfaces between the exsolution and the Fe liquid are typically irregular, as they form spontaneously without interference. Take the exsolution simulation of Mg2088Fe3456O2520 liquid at 140 GPa and 5500 K as an example (Figure 1; Supplementary Movie 1). The system starts as a homogeneous liquid (Figure 1a) and quickly demixes to form patches of MgO-rich liquid and Fe-rich liquid with a continuous drop in potential energy. Within around 250 ps, the MgO-rich and Fe-rich patches conglomerate, dividing the whole cell into two regions: one enriched in MgO and the other in Fe. The MgO-rich region remains liquid for another 750 ps until sudden crystallization occurs to form ferropericlase (Figure 1b). The crystallization is a rapid process accompanied by a significant drop in potential energy. Soon after ferropericlase crystallizes, the potential energy plateaus. Element exchange between ferropericlase and the residual Fe liquid continues within the interface region. We analyze the trajectories at this stage, calculate the Gibbs dividing surface, and determine the average composition of each phase over the last 100 ps. The chemical compositions of both phases are shown in Figure 1c. The metallic liquid is oxygen-rich and magnesium-poor. The exsolved ferropericlase is of B1 structure and is nearly stoichiometric, (Mg0.974Fe0.026)O. Similar analyses have been applied to all other exsolution simulations, and the compositions of the simulation products are summarized in Table S1.

Element partitioning

We further analyze the element partitioning between the exsolved ferropericlase and the Fe liquid considering two dissociation reactions: $\mathrm{MgO^{ox} = Mg^{met} + O^{met}}$ and $\mathrm{FeO^{ox} = Fe^{met} + O^{met}}$, where the superscripts ox and met indicate oxide and metal, respectively. We also considered the Mg exchange reaction $\mathrm{MgO^{ox} + Fe^{met} = FeO^{ox} + Mg^{met}}$, but the fit is poor, as also reported in other studies (Badro et al., 2018; Liu et al., 2019). The dissociation reactions are described by a standard thermodynamic model, with the non-ideality treated by the epsilon formalism of Ma (2001).
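To illustrate the bookkeeping behind these exchange coefficients, the sketch below computes an apparent dissociation constant from phase compositions of the kind extracted by the Gibbs-dividing-surface analysis. The mole fractions are invented toy values, and the activity corrections of the epsilon formalism are deliberately omitted.

```python
import numpy as np

def kd_dissociation(x_cation_met, x_o_met, x_oxide_ox):
    """Apparent exchange coefficient for MO^ox = M^met + O^met.

    Ideal-solution form: KD = x_M^met * x_O^met / x_MO^ox.
    Epsilon-formalism activity corrections are omitted in this sketch.
    """
    return x_cation_met * x_o_met / x_oxide_ox

# Toy compositions loosely shaped like Figure 1c (not the published values):
x_mg_met, x_o_met = 0.002, 0.18   # mole fractions in the metallic liquid
x_mgo_ox = 0.974                  # MgO fraction in (Mg0.974Fe0.026)O

kd_mg = kd_dissociation(x_mg_met, x_o_met, x_mgo_ox)
print(f"log10 KD(Mg) = {np.log10(kd_mg):.2f}")
```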
The phase relations between ferropericlase and Fe liquid have been studied mostly at low pressures and low temperatures (Asahara et al., 2007; Frost et al., 2010; Ozawa et al., 2008). Unfortunately, Mg contents in the metallic melts were not reported in these studies; only $K_D^{\rm O}$ values were reported. Thus, we only include the $K_D^{\rm O}$ values of these experiments in the fitting (Texts S2, S3; Table S2).

Our calculated $K_D^{\rm Mg}$ is slightly larger than that reported by a recent ab initio calculation (Wilson et al., 2023) in which pure B1 MgO is assumed to be the exsolved phase, whereas our exsolution simulations show that the precipitates contain small amounts of FeO (Figure 1, Table S1). The incorporation of FeO in the exsolved phase likely changes the free energy of the system, leading to the difference in $K_D^{\rm Mg}$. Wahl and Militzer (2015) also performed ab initio simulations on the Mg-Fe-O system but focused on high temperatures close to the solvus closure and did not report Mg partitioning results at conditions overlapping this study, which precludes a direct comparison. Compared with the previously determined $K_D^{\rm Mg}$ between Fe liquid and silicate melt (Badro et al., 2018; Du et al., 2019; Liu et al., 2019), the $K_D^{\rm Mg}$ between Fe liquid and solid ferropericlase shows a similar temperature dependence but is overall approximately one order of magnitude lower (Figure 2a), indicating a low Mg content in Fe liquid equilibrated with ferropericlase. This is expected, as MgO preferentially enters ferropericlase when silicate melt crystallizes (Boukaré et al., 2015).

Oxygen partitioning between ferropericlase and liquid Fe is strongly controlled by temperature, in agreement with previous experiments (Asahara et al., 2007; Frost et al., 2010; Ozawa et al., 2008) and calculations (Davies et al., 2018). The $K_D^{\rm O}$ between Fe liquid and silicate melt derived by Liu et al. (2019) generally aligns with the $K_D^{\rm O}$ between Fe liquid and solid ferropericlase, especially at high temperatures. Our $K_D^{\rm O}$, together with previous experimental data, can be fitted well by a unified thermodynamic model (Asahara et al., 2007; Ozawa et al., 2008), except for the four data points reported by Frost et al. (2010) (Text S3; Table S3). At around 30-70 GPa and with similar oxygen contents in the liquid Fe, the $K_D^{\rm O}$ values of Frost et al. (2010) are around half a log unit higher than those of Ozawa et al. (2008) and our extrapolated results. The source of this discrepancy is unknown but may arise from carbon contamination. The $K_D^{\rm O}$ reported by an early DFT calculation (Davies et al., 2018) is around 0.3-0.6 log units higher than those of Ozawa et al. (2008) and our results at similar conditions. We note that Davies et al.
(2018) calculated the chemical potential of FeO for defect-free (Mg,Fe)O. Yet, both our simulations and previous studies (Karki and Khanduja, 2006; Van Orman et al., 2003) support the existence of defects in ferropericlase at high temperatures, which may lower the free energy of the host mineral and enrich FeO in ferropericlase, leading to a reduced $K_D^{\rm O}$.

Exsolution rate and geodynamo

Earth's accretion and differentiation in its early history likely resulted in a core much hotter than it is today. The precipitation of light elements due to the secular cooling of the core may have provided a vital energy source to drive the geodynamo. The energetics of an exsolution-powered dynamo hinge on the cooling rate and the exsolution rate. Here, we adopt the core thermal evolution model proposed by O'Rourke et al. (2017), in which the CMB temperature ($T_{\rm CMB}$) drops from around 5000 K to around 4100 K over the first ~3.8 billion years (Gyr), a cooling rate of ~230 K Gyr⁻¹. Given this thermal history of the core, the phase of exsolution and the associated exsolution rate can be determined using the element partitioning models, along with knowledge of the initial core composition. All previous models of Mg exsolution from the core assume that MgO exsolves as a component of silicate melt (Badro et al., 2018; Du et al., 2019; Liu et al., 2019; Mittal et al., 2020). However, our simulations show that MgO should exsolve as a component of crystalline ferropericlase, at least when light elements other than Mg and O are absent. Here, we first examine Mg exsolution and its potential to drive the early geodynamo for an Mg- and O-bearing core, and then discuss the effects of additional light elements.

To model Mg exsolution from a core fluid with only Mg and O as light elements, we first determine the saturation conditions under which Mg precipitates. Previous N-body simulations and metal-silicate equilibrium experiments suggest that the Earth's core following its formation may contain 1.6-5 wt% O (Fischer et al., 2017; Liu et al., 2019; Rubie et al., 2015). The corresponding saturation magnesium concentration in the core is 0.04-0.19 wt% at 140 GPa, as determined using the $K_D^{\rm Mg}$ between metal and ferropericlase; this is significantly lower than that determined by the $K_D^{\rm Mg}$ between metal and silicate melt (Liu et al., 2019) (Figure 3a). This difference is expected, as the former $K_D^{\rm Mg}$ value is about one order of magnitude smaller than the latter (Figure 2a). Hence, our work implies that a substantial amount of Mg may have already been exsolved by the time the core cools to a $T_{\rm CMB}$ of 5000 K.
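The arithmetic linking the solubility curve to an exsolution rate can be sketched as follows. The two endpoint saturation values are those quoted in this section for the oxygen-rich case (0.19 wt% at 5000 K, falling to about 0.02 wt% at 4000 K, as discussed below); the Arrhenius-like interpolation between them is an illustrative assumption, not the fitted thermodynamic model.

```python
import numpy as np

# Saturation Mg concentrations (mass fraction) for the oxygen-rich case:
# ~0.19 wt% at 5000 K and ~0.02 wt% at 4000 K (140 GPa).
T1, X1 = 5000.0, 0.0019
T2, X2 = 4000.0, 0.0002

# Assume an Arrhenius-like solubility law ln X = a - b/T between the endpoints.
b = np.log(X1 / X2) / (1.0 / T2 - 1.0 / T1)
a = np.log(X1) + b / T1

def exsolution_rate(T):
    """dX/dT in K^-1: mass fraction of Mg exsolved per kelvin of cooling."""
    X = np.exp(a - b / T)
    return X * b / T**2

for T in (5000.0, 4000.0):
    print(f"T = {T:.0f} K: dX/dT ~ {exsolution_rate(T):.1e} per K")
# Gives ~3e-6 K^-1 at 5000 K and ~6e-7 K^-1 at 4000 K, within the ranges
# reported below; multiplying by the ~230 K/Gyr cooling rate yields the
# exsolved mass flux feeding the dynamo energy budget.
```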
Further cooling reduces the Mg solubility in the core, with concentrations approaching 0.003-0.02 wt% at a $T_{\rm CMB}$ of 4000 K, suggesting a vanishingly small amount of Mg in the present-day outer core. We use these saturation magnesium concentrations as the initial core compositions in our exsolution modeling. Despite the contrasting Mg solubilities in the core, the compositions of the exsolutions are similar and exhibit a comparable trend with temperature. Specifically, the exsolved phase in both models becomes increasingly FeO-rich with cooling. At 4000 K, the exsolution contains up to 20 wt% FeO (Figure 3b). Throughout the thermal history, $T_{\rm CMB}$ is lower than the solidus of the exsolved ferropericlase (Deng et al., 2019), indicating that the exsolutions remain solid.

The resulting exsolution rates decrease with temperature, with values dropping from 1.4-5.6 × 10⁻⁶ K⁻¹ at 5000 K to 0.2-1.0 × 10⁻⁶ K⁻¹ at 4000 K. Exsolution rates of ferropericlase are approximately one order of magnitude smaller than those predicted for silicate melt exsolution (Figure 3c). The exsolutions are depleted in iron and enriched in Mg. As a result, they are lighter than the outer core fluid and thus provide the buoyancy flux that may sustain an exsolution-driven dynamo (O'Rourke and Stevenson, 2016). Converting the exsolution rate to a magnetic field intensity is model dependent, however. The upper bound of the exsolution rates derived here (1-5.6 × 10⁻⁶ K⁻¹) is similar to previous reports (Badro et al., 2016; Du et al., 2017). While Du et al. (2019) conclude that this exsolution rate is not sufficient to power the early geodynamo alone, Badro et al. (2018) use a scaling law that relates the exsolution rate to the dipolar magnetic field intensity at Earth's surface ($B_{\rm surface}^{\rm dipole}$) and argue that MgO exsolution can well produce a dipolar field intensity consistent with observations. We follow Badro et al. (2018) to convert the exsolution rate to $B_{\rm surface}^{\rm dipole}$ (Figure 3d). The results show that the $B_{\rm surface}^{\rm dipole}$ generated by the upper-bound exsolution rate is broadly consistent with the paleointensity records dating back to 3.4 Gyr (Tarduno et al., 2010), while that generated by the lower-bound rate is overall smaller than the observations and thus may not be sufficient (Tarduno et al., 2015). Overall, we find that MgO exsolution alone may struggle to power the early geodynamo, but it is nevertheless an important energy source.

While the exact composition of the core remains unknown, it may contain other light elements, such as S, Si, C, and H (Hirose et al., 2021). As the core cools, the solubility of these light elements tends to decrease, leading to their exsolution. For example, in a core composed solely of Si, O, and Fe, the exsolved phase would likely be solid SiO2 (Hirose et al., 2017; Zhang et al., 2022).
The study by Helffrich et al. (2020) on the joint solubility of Mg, O, and Si in liquid Fe suggests that the presence of Si enhances the retention of Mg in the metal, thereby reducing the extent of MgO exsolution. It is crucial to note, however, that their thermodynamic model is based on data from the silicate melt-Fe and SiO2-Fe systems without considering ferropericlase. As a result, in their model MgO is implicitly treated as a component of a liquid rather than as solid ferropericlase. Adding further complexity, instead of precipitating separate MgO and SiO2 solids, a Mg-Fe-Si-O system may yield exsolutions of MgSiO3 bridgmanite or post-perovskite. Indeed, bridgmanite and post-perovskite with low iron contents are quite refractory, with melting temperatures exceeding the $T_{\rm CMB}$ assumed here, and thus may form stable exsolution phases (Deng et al., 2023; Zerr and Boehler, 1993). Whether bridgmanite, post-perovskite, solid SiO2, B1 MgO, or a liquid is the stable exsolution phase depends on their free energies and remains an open question. Consequently, a comprehensive re-evaluation of the phase relations in the Mg-Si-O-Fe system, and more broadly in the Mg-Si-O-C-H-S-Fe system, that considers exsolutions as solids is warranted. This study marks a first attempt to demonstrate the significance of solid exsolutions and the substantially different behaviors they exhibit during exsolution.

Conclusion

We developed a machine learning potential of ab initio quality for the Mg-Fe-O system using an iterative training scheme, which enables large-scale atomistic simulations of Mg exsolution processes at 4000-5500 K and 140 GPa without any ad hoc assumptions regarding the stable exsolution phase. The exsolved phase is solid Fe-poor ferropericlase across all the thermodynamic conditions considered. Using the Gibbs dividing surface method, we analyze the simulation trajectories, obtain the chemical compositions of the exsolved and liquid phases, and determine the Mg and O exchange coefficients. The results show that partitioning of Mg into the exsolved phase is significantly enhanced compared with scenarios in which the exsolved phase is assumed to be liquid, as in previous studies (Badro et al., 2018; Du et al., 2019; Liu et al., 2019; Mittal et al., 2020). The resulting small Mg exchange coefficients imply a reduced Mg solubility in the core. Assuming a reasonable initial core composition with 1.6-5 wt% oxygen, the MgO exsolution rate may be insufficient to generate a dipolar magnetic field at the Earth's surface with intensities that align with the paleomagnetic record.
Though not the focus of this study, it is noteworthy that our oxygen exchange coefficients are smaller than previous ab initio results, indicating a reduced transport of FeO from ferropericlase into the core fluid (Davies et al., 2018), with implications for the dynamics of long-term core-mantle interaction. Moreover, solid exsolutions may encapsulate distinctive core-characteristic signatures and transport them into certain regions of the overlying mantle (Helffrich et al., 2018), offering a valuable window to probe the core-mantle interaction (Deng and Du, 2023).

For the high-precision recalculation of the selected configurations, the k-point sampling is increased from the Gamma point only to a 2×2×2 Monkhorst-Pack mesh. We found this high-precision recalculation to be important for optimizing the robustness of the MLP (Deng and Stixrude, 2021b).

Figure S4. Atomic fractions of Mg and O atoms in the metallic phase as a function of system size (i.e., number of atoms in the system) at 5000 K and 140 GPa. The ratio of the numbers of Mg, O, and Fe atoms is 2:2:3 for all systems.

Text S3 Regression

We fit the exsolution simulation results and previous experiments on ferropericlase-Fe partitioning simultaneously to resolve the thermodynamic parameters. We exclude data with reported carbon contamination (Du et al., 2019b). Early experiments do not report Mg contents in the liquid Fe and are therefore only used to constrain O partitioning. We first explore fitting the oxygen partitioning independently using data from this study and the experimental results of Asahara et al. (2007), Frost et al. (2010), and Ozawa et al. (2008). The goodness of fit (measured by R²) drops dramatically from 0.93 to 0.74 once the four data points reported by Frost et al. (2010) are included. This is largely due to the conflicting results between Frost et al. (2010) and Ozawa et al. (2008) at around 30-70 GPa. We therefore exclude the data of Frost et al. (2010) from the fitting.

We then fit both $K_D^{\rm Mg}$ and $K_D^{\rm O}$ simultaneously using the data of this study and of Asahara et al. (2007) and Ozawa et al. (2008). Based on an F-test, models with and without the epsilon interaction parameters fit the data equally well; as such, these parameters are set to 0. The negligible role of the interaction parameters in the partitioning is consistent with previous studies on metal-silicate systems (Badro et al., 2018; Du et al., 2019a; Liu et al., 2019). The resulting fitted parameters are listed in Table S3; only the data of Asahara et al. (2007) and Ozawa et al. (2008) are used in the fitting.
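A minimal sketch of this kind of joint regression is given below, assuming the common parameterization $\log_{10} K_D = a + b/T + c\,P/T$; the data rows are invented placeholders standing in for the simulation results and the A07/O08 experiments, and the actual functional form and data are those of Texts S2-S3 and Tables S2-S3.

```python
import numpy as np

# Columns: P (GPa), T (K), log10 KD_O — invented placeholder data.
data = np.array([
    [140.0, 5500.0, -0.9],
    [140.0, 5000.0, -1.2],
    [140.0, 4500.0, -1.6],
    [ 50.0, 3500.0, -2.3],
    [ 25.0, 3000.0, -2.8],
])
P, T, logK = data.T

# Least-squares design matrix for log10 KD = a + b/T + c*P/T.
A = np.column_stack([np.ones_like(T), 1.0 / T, P / T])
(a, b, c), *_ = np.linalg.lstsq(A, logK, rcond=None)
print(f"a = {a:.2f}, b = {b:.0f} K, c = {c:.1f} K/GPa")

# Normalizing an experiment at (P0, T0) to 140 GPa, as done for Figure 2:
P0, T0, logK0 = 50.0, 3500.0, -2.3
logK_140 = logK0 + c * (140.0 - P0) / T0
```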
Figure 1. Molecular dynamics simulation of spontaneous ferropericlase exsolution from a homogeneous Mg2088Fe3456O2520 liquid at 140 GPa and 5500 K (NPT ensemble). (a) The initial configuration at 1 fs, with a homogeneous distribution of Mg (green), Fe (yellow), and O (red) atoms. (b) The final configuration at 1.5 ns. Dark red planes are the Gibbs dividing surfaces that separate the whole system into crystalline ferropericlase, interface, and metallic liquid. The cell dimensions are 50.9 Å × 39.9 Å × 35.3 Å initially (a) and become 47.9 Å × 37.6 Å × 33.3 Å at the end of the simulation (b). (c) Evolution of the number of atoms in the liquid (liq) and solid exsolution (sol) over the last 100 ps. (d) Evolution of the potential energy. The energy drops with the separation of ferropericlase and metallic liquid; the sudden drop corresponds to the crystallization of ferropericlase.

Figure 2. Mg (a) and O (b) exchange coefficients as a function of the oxygen content in the iron ($X_{\rm O}$) and temperature at 140 GPa. Solid circles are the results from this study, and solid lines are the best-fit curves (Supplementary Texts S2, S3). The color of the curves and symbols represents the value of $X_{\rm O}$. Previous calculations and experiments are also shown for comparison. (a) W23 denotes the DFT calculation result of Wilson et al. (2023) (upward triangle). B18 (dotted line), L19 (dashed line), and D19 (dotted-dashed line) represent the Mg exchange coefficients between silicate melt and Fe liquid calibrated by Badro et al. (2018), Liu et al. (2019), and Du et al. (2019), respectively. (b) Experimental studies include Ozawa et al. (2008) (O08, downward triangle), Asahara et al. (2007) (A07, diamond), and Frost et al. (2010) (F10, upward triangle). The DFT study of Davies et al. (2018) is shown as D18 (square). All previous results are normalized to 140 GPa for a direct comparison using the best-fit pressure dependence, $\log_{10} K_D(140~\mathrm{GPa}) = \log_{10} K_D(P) + c\,(140~\mathrm{GPa}-P)/T$, where $P$ and $T$ are the experimental/calculation pressure and temperature and $c$ is a fitted constant (Table S3). L19 (dashed line) indicates the O exchange coefficient for silicate melt calibrated by Liu et al. (2019). Uncertainties of the exchange coefficients of this study are roughly represented by the symbol size.

Figure 3. MgO solubility (a), chemical composition of the exsolution (b), exsolution rate (c), and intensity of the dipolar magnetic field at Earth's surface produced by an exsolution-driven dynamo (d), based on the element partitioning models from this study with crystalline ferropericlase as the exsolved phase (solid lines) and those from a recent study with silicate melt as the exsolved phase (dashed lines) (Liu et al., 2019). Red and green denote initial oxygen concentrations in the core of 5 wt% and 1.6 wt% at 5000 K, respectively.

Our MLP explores a wide compositional space: it is trained on Mg-Fe-O systems of varying Mg:Fe:O ratios, including the pure endmembers Fe and MgO as well as intermediate compositions denoted (MgO)$_a$Fe$_b$O$_c$, where a = 0-64, b = 0-64, and c = 0-16 with 2a + b ≥ 64. The final training set consists of 4466 configurations generated at pressures up to 200 GPa and temperatures up to 8000 K.

Figure S1. Convergence tests of the total energy (a) and pressure (b) with varying energy cutoffs (ENCUT flag in VASP) for a Mg64O64Fe64 mixture at the static condition. An energy cutoff of 800 eV is sufficient to obtain converged results for both energy and pressure.

Figure S2. Comparisons of energies (a), atomic forces (b), and stresses (c) between DFT and the machine learning potential (MLP) for all the test data at temperatures up to 8000 K and pressures up to ~200 GPa. 15078 energies, 5988786 force components, and 135702 stress components are included in these comparisons. The red dashed lines are guides for perfect matches.

Table S1. Summary of exsolution simulations at 140 GPa.
$N_{\rm Mg}$, $N_{\rm Fe}$, and $N_{\rm O}$ are the numbers of Mg, Fe, and O atoms, respectively. The difference between the composition of the bulk system and the sum of those of ferropericlase and metallic liquid yields the composition of the corresponding interface.

Table S2. Summary of previous experimental results on ferropericlase-Fe element partitioning. Experiments with reported carbon and sulfur contamination are excluded. $X_{\rm FeO}$, $X_{\rm Fe}$, and $X_{\rm O}$ are the FeO content in ferropericlase, the Fe content in the metallic liquid, and the O content in the metallic liquid, respectively. (See the content in a separate supplementary file.)
A Novel pH- and Salt-Responsive N-Succinyl-Chitosan Hydrogel via a One-Step Hydrothermal Process

Abstract

In this study, we synthesized a series of pH- and salt-sensitive N-succinyl-chitosan hydrogels from N-succinyl-chitosan (NSCS) and the crosslinker glycidoxypropyltrimethoxysilane (GPTMS) via a one-step hydrothermal process. Structural and morphological analysis of NSCS and the glycidoxypropyltrimethoxysilane-N-succinyl-chitosan hydrogel (GNCH) revealed a close relation between the swelling behavior of the hydrogels and the content of the crosslinker GPTMS: high GPTMS content weakens the swelling capacity of the hydrogels and improves their mechanical properties. The hydrogels show high pH sensitivity and reversibility in the range of pH 1.0 to 9.0, and exhibit on-off switching behavior between acidic and alkaline environments. In addition, the hydrogels display smart swelling behavior in NaCl, CaCl2, and FeCl3 solutions. These hydrogels may have great potential in medical applications.

Introduction

Hydrogels, among the most promising soft materials, have three-dimensional network structures composed of polymer and water [1-4]. Hydrogels with good environmental responsiveness have attracted increasing attention in pharmaceuticals, medicine, tissue engineering, materials science, food, and agriculture [5-10]. In particular, pH- and salt-responsive hydrogels are the most studied because both parameters are important environmental factors in physiological and chemical systems [11,12]. Hydrogels made from natural polymers, including chitin [13], gelatin [14], cellulose [15], and sodium alginate [16], have many unique advantages, such as good biocompatibility and biodegradability, and these natural polymers are abundantly available. Owing to these advantages, natural polysaccharides can be used to make hydrogels for biomedical applications such as stent coatings [17] and, especially, drug delivery [18].

Chitosan (CS), a biopolymer comprising glucosamine and N-acetylglucosamine units, is the N-deacetylated product of chitin and the most abundant natural biomass material other than cellulose [19]. Chitosan has excellent biological properties such as biodegradability, biocompatibility, antibacterial activity, and promotion of wound healing [20-22]. However, its insolubility at neutral or high pH has limited its application. To improve the solubility of chitosan, a series of hydrophilic groups have been introduced into its skeleton, e.g., by carboxymethylation [23,24], PEGylation [25], and gallic acid grafting [26]. N-succinyl-chitosan (NSCS) is synthesized by attaching a succinyl group to the amine group of chitosan, which improves the solubility of chitosan in water. The pH-sensitive polymer made from NSCS is biocompatible and safe for the human body [27].

The most common crosslinkers used to prepare chitosan-based hydrogels are dialdehydes such as glyoxal [28] and, in particular, glutaraldehyde [29,30]. However, these are mostly toxic [30,31]. The cytocompatible coupling agent glycidyloxypropyltrimethoxysilane (GPTMS) [32] has conventionally been applied in organic-inorganic hybrid materials, providing covalent linkage between organic and inorganic matrices via the sol-gel reaction. The representative sol-gel reaction is based on the silane functionality: silanol (Si-OH) groups undergo polycondensation to yield siloxane (Si-O-Si) bonds [33]. GPTMS is therefore an interesting alternative crosslinker for preparing hydrogels.
Although a few studies have reported hydrogels synthesized by crosslinking chitosan with GPTMS [33,34], the cumbersome synthesis processes and harsh experimental conditions have restricted further application. In this work, N-succinyl-chitosan (NSCS) was synthesized from chitosan and succinic anhydride, and the glycidoxypropyltrimethoxysilane-N-succinyl-chitosan hydrogel (GNCH) was prepared by a one-step crosslinking reaction of NSCS with the crosslinker glycidoxypropyltrimethoxysilane (GPTMS). NSCS dissolves completely in deionized water without further treatment, and the hydrogel synthesis is mild and simple. GPTMS allows direct crosslinking in aqueous media under mild conditions, with no addition of external molecules such as reducers, which would be detrimental to biocompatibility. The synthesis and properties of the chitosan hydrogel are systematically studied, and the results may provide a new approach for preparing smart-responsive hydrogels from natural biomass polymers. Such hydrogels may have great potential in biomedical applications.

Results and Discussion

The synthesis of GNCH from NSCS is described in Figure 1. The formation mechanism of GNCH can be described as follows: the oxirane ring of GPTMS reacts with the remaining amino groups on the NSCS chain, and hydrolysis of the trimethoxy groups of GPTMS forms pendant silanetriol groups. The sol is then heated at 80 °C to form inter-chain linkages between NSCS chains via the dehydration reaction among the silanetriol groups. The reaction units are marked in green and blue, respectively.

Structural Characterization

Figure 2a shows the FTIR spectra of CS, NSCS, and GNCH. For CS, the absorption peak at 1575 cm−1 is attributed to the -NH2 bending vibration. The absorption peak at 3369 cm−1 is assigned to the -OH stretching vibration, and the absorption peaks at 3030-3330 cm−1 are ascribed to the -NH2 stretching vibration. No absorption peak at 3080 cm−1 is observed in the infrared spectrum of CS, owing to intramolecular and intermolecular hydrogen bonds. For NSCS, two new characteristic absorption peaks appear at 1658 cm−1 and 1411 cm−1, corresponding to the formation of -CO-NH- [35], and the obvious absorption peak at 3080 cm−1 indicates that the -NH2 of CS has been partially substituted by succinyl groups (-NH(CO)-CH2-CH2-COOH), converting the primary amines (-NH2) into secondary amides [36]. In the spectrum of NSCS, the absorption peak at 1568 cm−1 is attributed to the N-H absorption [37]. In GNCH, the decreased intensity of the peak at 1575 cm−1 is assigned to the N-H formed after crosslinking. The peaks at 1045 cm−1 and 688 cm−1 are attributed to the symmetric and asymmetric stretching vibrations of Si-O-Si, respectively, and the peak at 898 cm−1 corresponds to the Si-OH bond [38]. The FTIR results confirm that GPTMS has successfully crosslinked with NSCS.

The chemical structure and 1H NMR spectrum of N-succinyl-chitosan are shown in Figure 2b. The peak at 4.57 ppm is ascribed to H-1 of glucosamine (GlcN), and the peaks at 3.54-3.86 ppm are ascribed to H-2, H-3, H-4, H-5, and H-6 of GlcN and H-2 of N-acylated GlcN. Furthermore, the peaks at 2.45 ppm (H-a) and 2.46 ppm (H-b) correspond to -NH(CO)-CH2- and -CH2-COOH of the substituted succinyl group (-NH(CO)-CH2-CH2-COOH), respectively [35].
The degree of substitution (DS) is calculated from the normalized NMR integrals using Equation (1):

$$\mathrm{DS} = \frac{A/4}{A''/6} \times 100\%, \qquad (1)$$

where $A$ represents the integral of the protons corresponding to -CH2-CH2- (H-a and H-b) of the substituted succinyl group (-NH(CO)-CH2-CH2-COOH), and $A''$ represents the integral of the protons corresponding to H-2, H-2', H-3, H-4, H-5, and H-6 [39]. The calculated DS is 71%. Compared with the literature [40], the degree of substitution of NSCS is further improved, which is conducive to the complete dissolution of NSCS in distilled water. The FTIR and 1H NMR spectra together indicate the successful preparation of NSCS, and the FTIR spectra indicate the successful preparation of GNCH.

Figure 3 shows the interior morphology of freeze-dried GNCH with different GPTMS contents. All the hydrogels display a continuous, porous three-dimensional structure, which is caused by phase separation and the sublimation of water during the freeze-drying process [41]. In addition, the pore size of the hydrogels becomes larger as the GPTMS content increases, because the increased crosslink density causes faster phase separation during freezing, resulting in a larger-pore structure [42].

Swelling Properties

As reported, the swelling capacity of a hydrogel decreases with increasing crosslinker concentration. As seen in Figure 4, as the molar ratio of GPTMS to NSCS increased from 0.4 to 1, the swelling ratio of the hydrogel decreased from 92 to 69 g/g, which is favorable for biomedical applications. Meanwhile, the gel content increased with increasing GPTMS content. We can therefore presume that the decrease in swelling ratio is associated with the increase in the crosslink density of the gel.

Figure 5 shows the swelling ratio of the hydrogels as a function of time. The swelling behavior of GNCH in distilled water is related to the crosslinker content. For each hydrogel, the amount of absorbed water increased rapidly during the initial swelling and then slowed until equilibrium was reached at about 70 h. This behavior is analyzed using a second-order swelling kinetics model (Equation (2)) [43]:

$$\frac{t}{SR_t} = \frac{1}{K\,SR_{eq}^{2}} + \frac{t}{SR_{eq}}, \qquad (2)$$

where $SR_t$ is the swelling ratio at a given swelling time $t$; $K$ is the swelling rate constant; and $SR_{eq}$ is the swelling ratio at equilibrium [41]. The plot of $t/SR_t$ against $t$ is linear with a correlation coefficient greater than 0.999 (Figure 5b), which accords with the second-order swelling kinetics model [44]. From Figure 5b, the swelling rate constant ($K$) and the experimental equilibrium swelling ratio ($SR_{eq}$) were obtained from the experimental data; they are listed in Table 1. The minimum swelling ratio and the lowest swelling rate constant ($K$) were obtained for the most crosslinked sample, GNCH1, while the maximum swelling ratio was obtained for the least crosslinked sample, GNCH0.4. A similar phenomenon was previously noted in the study of another hydrogel material [41]. This result is likely due to the increase in crosslinking density as the amount of crosslinker increases, resulting in a decrease in the swelling ratio of the hydrogel. These results indicate that increasing the GPTMS content increases the crosslinking density of GNCH; the swelling ratio of the hydrogel is inversely related to the amount of the crosslinker GPTMS [44].
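As an illustration of how Equation (2) is applied, the sketch below generates synthetic swelling data from the second-order model and recovers K and SR_eq from the linearized plot of t/SR_t against t; the numbers are invented for demonstration and are not the measured values of this study.

```python
import numpy as np

# Synthetic swelling data (time in h, swelling ratio in g/g) — illustrative
# values shaped like Figure 5, not the measurements of this study.
t = np.array([1, 2, 4, 8, 16, 24, 48, 70, 96], dtype=float)
SR_eq_true, K_true = 90.0, 5e-4
# Integrated second-order model: SR(t) = K*SR_eq^2*t / (1 + K*SR_eq*t)
SR = (K_true * SR_eq_true**2 * t) / (1 + K_true * SR_eq_true * t)

# Linearized form: t/SR = 1/(K*SR_eq^2) + t/SR_eq  ->  slope = 1/SR_eq
slope, intercept = np.polyfit(t, t / SR, 1)
SR_eq_fit = 1.0 / slope
K_fit = 1.0 / (intercept * SR_eq_fit**2)
print(f"SR_eq = {SR_eq_fit:.1f} g/g, K = {K_fit:.2e}")
```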
pH-Sensitive Behavior

The pH-responsive behaviors of the hydrogels from pH 1.0 to pH 9.0 are presented in Figure 6. The ionic strength of the various pH solutions was controlled at 0.4 M by adjusting the NaCl content. All hydrogels exhibited a lower swelling ratio in the 0.4 M ionic-strength buffers than in distilled water. The four GNCH samples exhibited clear pH-sensitive behavior in the buffers, reaching their maximum swelling ratio at pH 7.0. The volume of GNCH changes over a wide pH range because of its acidic groups; the pH-dependent interacting species in the swelling medium determine the equilibrium swelling ratio (SR). Based on the pKa of SA (4.19) and the pKb of CS (6.5), the species involved are mainly -COOH at pH 1.0-6.0 and -COO− at pH 7.0-9.0. At low pH (<7.0), under acidic conditions, the dominant groups in the gels are in the acid form (-COOH); at high pH (7.0-9.0), the dominant groups are the ionized carboxylate groups (-COO−). At pH 1.0-6.0, the acid form (-COOH) can form intermolecular hydrogen bonds, which results in unfavorable swelling behavior and a lower swelling ratio. At pH 7.0, the carboxyl groups gradually transform into the ionized carboxylate form (-COO−), leading to stronger hydrophilicity and higher electrostatic repulsion within the network, and hence an enhanced water absorption capacity [45]. However, under basic conditions (pH > 7.0) the repulsion of the negative -COO− groups is shielded by the higher Na+ concentration owing to the screening effect, causing the hydrogels to shrink; their swelling ratio therefore decreases. The ionic groups play the main role in the swelling variations of GNCH. These results suggest that the swelling behavior of GNCH can be controlled by varying the pH of the solution [46].

pH-Reversible Behavior

The pH-responsive behavior of GNCH was demonstrated to be reversible. Figure 7a shows a stepwise, reproducible swelling change of the hydrogels with the pH alternating between 4.0 and 9.18, demonstrating the reversible pH-responsive behavior of GNCH. The mechanism of this pH reversibility is illustrated in Figure 7b. The hydrogels reach a higher swelling ratio at pH 9.18, whereas at pH 4.0 the swollen gels shrink within a few minutes due to the protonation of the carboxylate (-COO−) groups, exhibiting intriguing on-off switching behavior [44,47]. After five cycles, the hydrogels still exhibit good swelling-deswelling performance, which makes them suitable candidates for controlled drug delivery systems [48]. The evident change in water absorption upon altering the pH of the external buffer solution confirms the excellent pH sensitivity of GNCH.

Salt-Sensitivity Behavior

The swelling behavior of GNCH in various salt solutions is shown in Figure 8. In general, a salt-sensitive hydrogel consists of three phases, namely the three-dimensional polymeric network matrix, the interstitial fluid, and the ionic species [49]. In NaCl, CaCl2, and FeCl3 solutions, a marked volume decrease was observed with increasing salt concentration; the swelling ratio of the gels in saline solutions was appreciably reduced compared with the values measured in deionized water. The swelling and shrinking behaviors of the hydrogels in salt solution are determined by the ionic interactions between mobile ions and the fixed charges, which make tremendous contributions to the osmotic pressure difference between the interior of the hydrogel and the external solution. Because of the reduced Donnan osmotic pressure, the gels begin to shrink at higher salt concentrations [50].
The swelling ratio of GNCH decreases sharply with increasing salt concentration in CaCl2 and FeCl3 solutions, as shown in Figure 8b,c. Higher cation charges lead to a higher degree of crosslinking and a smaller swelling value, because the swelling ratio of hydrogels in salt solutions depends not only on the salt concentration but also on the ionic charge. Figure 8d shows the swelling ratio of the hydrogels with different crosslinker proportions in various salt solutions (0.01 M). In the presence of excess salt, the counterion contribution to the osmotic pressure increases with increasing ionic charge. Higher cation charges lead to the formation of internal or intermolecular complexes of the -COO− groups inside the gel, and a multivalent ion can neutralize several charges within the gel. Consequently, the crosslinking density of the network increases while the water absorption capacity decreases. Therefore, the swelling ratio of the hydrogel in the studied salt solutions follows the order monovalent > divalent > trivalent cations [47].

Figure 9 shows the rheological properties of GNCH with different proportions of GPTMS at 25 °C. The gels exhibit typical viscoelastic behavior: both the storage modulus (G′; Figure 9a) and the loss modulus (G″; Figure 9b) increase with the oscillation frequency, and G′ of all GNCH samples is larger than G″ over the whole selected angular frequency range, suggesting a general dominance of the elastic response of the gels to deformation over a broad time scale [51]. Besides, both G′ and G″ increase monotonically with the GPTMS content of the gels, probably owing to the improved network structure of these samples and the increased crosslink density [42,52]. Moreover, the higher crosslink density of the gel leads to more heat dissipation through chain-segment movement [41]. The positive effect of GPTMS content on the mechanical properties of GNCH can also be observed in the compressive stress-strain curves (Figure 9c), where GNCH1 presents much higher stress values than the other hydrogels over the entire examined strain range. The storage modulus (G′) and loss modulus (G″), together with the compressive stress-strain curves, show that the mechanical properties of GNCH can be significantly improved by increasing the GPTMS content. The mechanical and rheological properties of chitosan hydrogels in recent related studies are listed in Table 2. The mechanical properties and preparation method of the GNCH in this work compare favorably in performance and simplicity, which can be useful in the design of new chitosan hydrogels.

Synthesis of N-succinyl-chitosan (NSCS)

Chitosan (5 g) was dissolved in 100 mL DMSO, and succinic anhydride (2.29 g) was then added under stirring at 500 rpm for 4 h at 60 °C. After the reaction, the pH of the mixture was adjusted to 7 with 5% (w/v) NaOH (3 mL). After filtration, the precipitate was dissolved in 400 mL distilled water, and the solution was adjusted to pH 11 with 5% (w/v) NaOH (47 mL). This solution was recrystallized from acetone to give a pale yellow solid, which was then washed sequentially with 400 mL of 75% acetone, 400 mL of 70% ethanol, and 400 mL of acetone. The final product was dried under vacuum at 60 °C for 48 h to obtain the N-succinyl-chitosan (NSCS) particles [35]. The calculated yield of NSCS is 90.81%.

Synthesis of Glycidyloxypropyltrimethoxysilane-N-Succinyl-chitosan Hydrogels (GNCH)

GNCH were prepared by a one-step hydrothermal process.
An 8% (w/v) solution of NSCS in distilled water was prepared and mixed with a given amount of GPTMS under stirring at 100 rpm for 10 min to obtain a homogeneous solution. The reaction was allowed to proceed at 80 °C for 48 h. The four samples were labeled GNCH0.4, GNCH0.6, GNCH0.8, and GNCH1 according to the molar ratio of GPTMS to NSCS (0.4, 0.6, 0.8, and 1). The hydrogel was extracted, cut into pieces, and immersed in distilled water to remove residual reactants and obtain pure samples. The washed hydrogel was dried for 48 h in a freeze dryer before use in the experiments. Figure 1 shows the hydrogel formation mechanism.

Fourier Transform Infrared Spectroscopy (FTIR)

FTIR spectra of the dry gel samples were recorded on a Bruker Tensor 27 FT-IR spectrometer (Karlsruhe, Germany) using KBr pellets over the range 4000–400 cm−1.

Nuclear Magnetic Resonance (NMR)

1H-NMR spectra of the CS and NSCS samples were obtained in D2O at 25 °C on a Bruker AV II-600 MHz spectrometer (Bruker, Zurich, Switzerland).

Scanning Electron Microscopy (SEM)

The swollen hydrogels with different proportions of crosslinker (0.4–1) were frozen in liquid nitrogen and then freeze-dried. The lyophilized samples were sputter-coated with Au, and their cross-sections were visualized with a scanning electron microscope (SEM, Hitachi S-4800, Tokyo, Japan).

Gel Content

The gel content (G%) [41] is calculated according to Equation (3):

G% = (W_a / W_b) × 100%    (3)

where W_a is the weight of the dried (washed) hydrogel and W_b is the weight of the dried unwashed hydrogel.

Swelling Behaviors of Hydrogel

The swelling of the GNCH was studied as follows. All hydrogels were cut into 10 mm × 15 mm pieces (5 mm in thickness), and the swelling ratio was determined gravimetrically. The hydrogels were immersed in distilled water, solutions of different pH, and salt solutions at 25 °C for 4 days to reach equilibrium. The pH was adjusted from 1 to 9 with Na2HPO4·12H2O, NaH2PO4·2H2O, C6H8O7, KCl, HCl, Na2CO3, and NaHCO3. The ionic strength of the pH solutions was kept at 0.4 M by adding an appropriate amount of NaCl. The equilibrium swelling ratio (SR) of the hydrogel is calculated using Equation (4):

SR = (W_s − W_d) / W_d    (4)

where W_s and W_d are the weights of the swollen gel and the dry gel, respectively. Three replicates were measured to determine the average SR of each sample.

Rheological Measurement Test

Rheological tests were performed on a HAAKE Rheowin MARS III rheometer (HAAKE, Karlsruhe, Germany). The hydrogel sample was cut into a cylinder of 1 mm height and 25 mm diameter and placed in a 25 mm flat geometry. The storage modulus (G′) and the loss modulus (G″) were measured from 0 to 80 rad/s at 25 °C and 1% strain. The samples used in the compression test were cylinders of 14 mm diameter and 16 mm height, and the compression rate was kept at 2 mm/min.

Conclusions

In summary, pH- and salt-sensitive N-succinyl-chitosan hydrogels (GNCH) can be prepared from NSCS and the crosslinker GPTMS. GNCH exhibit excellent pH sensitivity and pH reversibility owing to the carboxyl groups on the chitosan moieties. The study of the swelling kinetics reveals that the pseudo-second-order model is suitable for describing the water absorption behavior of GNCH.
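As an illustrative aside to the kinetics statement above: pseudo-second-order (Schott-type) swelling data are usually fitted in the linearized form t/SR(t) = 1/(k·SRe²) + t/SRe, which is assumed here as the standard form; the data arrays below are placeholders, not values from this work:

```python
import numpy as np

# Placeholder time series (h) and swelling ratios -- illustrative only.
t  = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 64.0, 96.0])
sr = np.array([2.1, 3.4, 5.0, 6.6, 7.8, 8.6, 9.0, 9.1])

# Linearized Schott model: t/SR = 1/(k*SRe**2) + t/SRe
slope, intercept = np.polyfit(t, t / sr, 1)
sr_eq = 1.0 / slope                 # equilibrium swelling ratio SRe
k = 1.0 / (intercept * sr_eq ** 2)  # rate constant k

print(f"SRe = {sr_eq:.2f}, k = {k:.4f} (model units)")
```

A linear t/SR-versus-t plot with high R² is the usual evidence that the pseudo-second-order model applies.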
Furthermore, the hydrogels show smart swelling behavior in NaCl, CaCl2, and FeCl3 aqueous solutions, their swelling ratio decreasing with increasing salt concentration, and the rheological moduli of GNCH increase with the GPTMS content of the polymeric network. This work offers an efficient and practical way to prepare smart-responsive hydrogels from chitosan, which may find wide application in agriculture, food, and tissue engineering.

Funding: This work has been supported by the Jiangsu International Cooperation Project (no. BZ20170200).
Analysis of the vortex-dominated flow field over a delta wing at transonic speed

The present work provides an advancement in the prediction of delta wing flow and an improved understanding of various flow-physical phenomena which occur over the wing at transonic conditions. Scale-resolving simulations of the vortex-dominated flow around a sharp leading-edge VFE-2 wing have been performed using the SA-based IDDES model. The complex leading-edge vortex pattern with embedded shocks and the subsequent shock–vortex interaction is investigated. A promising accuracy has been achieved using the high-fidelity flow field data provided by the scale-resolving simulation results. Besides the assessment of the sensitivity to spatial and temporal resolution, physical aspects are presented which are not accessible in experimental data in such detail and require scale-resolving simulation approaches. This includes the observation of the vortex system and the shocks in the fully three-dimensional flow field data. Finally, turbulence-related quantities such as eddy viscosity and resolved Reynolds stresses, and their behaviour during the vortex formation and sustaining process, are analysed.

Hybrid turbulence model simulations have already been applied to such flows, and the results showed promise for gaining an understanding of the flow field. The present work provides an advancement in the prediction of vortex-dominated flows for the understanding of the various flow phenomena that occur over the delta wing, particularly at transonic conditions. Simulations of the sharp leading-edge VFE-2 wing [6] have been performed at Ma∞ = 0.8, Re∞ = 2 × 10⁶ and α = 20.5°. Since the flow separation, which forms the initial stage of vortex formation, is fixed by the sharp leading edge, the main challenge for the simulation is to correctly reproduce the formation and further development of the vortical flow system along the wing surface, which is primarily affected by the treatment of turbulence in terms of modeling or resolving turbulent eddies. In the case of low-aspect-ratio delta wings, the shear layer emanating from the leading edge rolls up to form a leading-edge vortex, thereby inducing additional velocities on the wing surface. The generated vortex sheet is highly influenced by the pressure gradients in its vicinity, and its separation at the swept leading edge causes a local low-pressure region on the suction side, which contributes to the overall lift [7]. This so-called vortex lift has a limiting angle of attack at which the vortex breaks down, an abrupt change in the flow topology where the flow decelerates and diverges. The flow physics over a delta wing becomes further complicated in the presence of single or multiple shock waves. The interaction between shocks and vortices, which can trigger vortex breakdown [8], then becomes a relevant phenomenon to analyse. Both the complex vortex system and the shock–vortex interaction have been investigated in detail. The vortex-dominated flow has been studied by comparing Improved Delayed Detached Eddy Simulation (IDDES) and unsteady RANS (URANS) results with experimental data [6]. The IDDES method based on the Spalart–Allmaras one-equation model with corrections for negative turbulent viscosity (SA-neg) is applied in the scale-resolving computations, whereas the one-equation SA-negRC model (with corrections for rotation/curvature as well) is employed to close the RANS equations [9].
The IDDES approach has been selected for the scale-resolving simulations in the present work because, on a sharp leading-edge delta wing, the generation of turbulence usually starts soon after the leading-edge separation in the separated shear layer, still in close proximity to the wall. The IDDES model essentially switches to URANS mode in the wall layer while running in LES mode in the off-wall region. Mesh refinement at the onset of the turbulent shear layer emanating from the leading edge helps to drive the IDDES model into wall-modeled LES mode; the model thereby benefits from its capability of controlling the transition between RANS and LES in the region immediately after separation, mitigating grey-area effects. The sensitivity of the results to temporal and spatial resolution will be addressed and discussed as well. The simulations have been performed with the DLR TAU-Code [10].

Test case analysis

A detailed numerical investigation of the VFE-2 delta wing configuration at transonic conditions with Ma∞ = 0.8, Re∞ = 2 × 10⁶ and α = 20.5° has been performed. The selected free-stream conditions are representative of highly agile delta-wing aircraft and therefore relevant to aerodynamic design topics such as manoeuvrability, stability and control.

Geometry and mesh

The VFE-2 wing, with a leading-edge sweep angle of φ = 65°, features a sharp leading edge. The experimental data consist of PSP and PIV measurements provided by DLR [6]. The operating conditions used in the CFD analysis are set according to the experimental conditions. Dimensionless Cartesian coordinates are introduced as ξ = x/L, η = y/(x · tan(φ)) and ψ = z/(x · tan(φ)), where the characteristic length L is the chord length in the symmetry plane. The computational mesh, denoted "extra-fine" in Table 1, is employed to investigate the VFE-2 geometry and is shown in Fig. 1. The outer boundary is formed by a spherical farfield boundary located 50 L from the wing. The unstructured mesh consists of about 30 million cells and features up to 30 prism layers along the walls, with a wall-normal growth factor of 1.1 and a first cell thickness such that y⁺ < 1. The mesh is symmetric with respect to the plane y = 0. The cell sizes vary within the computational domain, and the finest cells, whose size is around 0.002 L, are located within the vortex region as well as close to the leading edge to resolve the shear-layer onset. In order to capture the development of turbulent scales, the mesh refinement follows the vortex region over the wing. The number of grid points inside the vortex diameter at the chordwise location ξ = 0.4 justifies the assumption that the grid resolution is adequate for the given flow. As suggested by Landa et al. [11], the vortex diameter d_ω has been computed from the vorticity distribution ω_x and found to be of the order of d_ω ≈ 0.3 L; the number of grid cells inside the vortex diameter is therefore N_ω ≈ 150. Although the cell size slightly increases along the wing, the ratio of vortex diameter to cell size rises due to the vortex expansion.

Mesh convergence study – URANS and IDDES runs

A grid-resolution study has been performed in order to analyse the grid effects and keep them as low as possible. Table 1 summarises the main mesh characteristics. The finest cell size has been halved step by step.
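The grid-adequacy estimate above (cells per vortex core diameter) is straightforward to script. A minimal sketch using only the numbers quoted in the text (0.002 L finest cell size, d_ω ≈ 0.3 L); anything else is illustrative:

```python
# Grid-adequacy estimate: cells across the vortex core diameter.
cell_size = 0.002   # finest cell size, in units of the root chord L
d_omega   = 0.3     # vortex diameter from the omega_x distribution, in L

n_omega = d_omega / cell_size
print(f"N_omega ~ {n_omega:.0f} cells across the vortex diameter")  # ~150
```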
The URANS simulations have been performed on all four meshes, whereas the IDDES runs have been performed on the two finest meshes only, to demonstrate the suitability of the grid resolution.

[Figure 2: Relative deviation bar plot of the aerodynamic coefficients; "I" denotes the comparison between the IDDES results on the "fine" and "extra-fine" mesh [12].]

Figure 2 shows the relative deviation of the lift and pitching-moment coefficients with respect to the finest mesh, for which the results will be presented. Taking both approaches (IDDES and URANS) into account, the plot of the relative deviation demonstrates the monotonic and effective reduction of the grid effects. In particular, the mesh convergence is clearly visible when comparing the RANS results. Besides, the "fine-I" column in Fig. 2 shows that there is no relevant difference in the prediction of the aerodynamic coefficients between the IDDES results achieved with the "fine" and "extra-fine" mesh, confirming mesh convergence. Further considerations on the grid sensitivity will be made in Section 4.1 with the analysis of the mean surface pressure coefficient.

Modeling and solution

The DLR TAU-Code has been used to perform the CFD simulations. A brief code introduction with an algorithmic overview, shortly describing the code functionality, has been provided by Gerhold [13]. In the present setup the flow solver TAU solves the compressible, three-dimensional, time-accurate Reynolds-averaged or filtered Navier–Stokes equations using a finite-volume formulation. The governing equations are presented in the work of Langer et al. [10] and are not fully repeated here for the sake of conciseness.

Model equations and approaches

The SA-neg model is based on the single transport equation of Spalart and Allmaras [14] for the modified eddy viscosity ν̃,

Dν̃/Dt = c_b1 S̃ ν̃ − c_w1 f_w (ν̃/d)² + (1/σ) [∇·((ν + ν̃)∇ν̃) + c_b2 (∇ν̃)²],    (1)

where the RANS length scale d = L_RANS in the destruction term is the distance to the nearest wall [15]. In the SA-neg version [15], the turbulent eddy viscosity is set to zero in case it becomes negative in Equation (1). This version is used to improve stability and robustness without changing the (converged) results of the SA model. The SA turbulence model often produces excessive eddy viscosity in the vortex core, with implications for the unburst vortex size, type and velocities. Shur et al. [16] proposed a streamline-curvature correction (SA-RC), applied within the URANS computations of the current work, which alters the source term with a rotation function. In the RANS context, a comparison with and without the RC correction has been performed, leading to the selection of the SA-negRC model for the presented data as it produces superior results. In the IDDES model, L_RANS is replaced with d̃ = L_IDDES, which is defined as

L_IDDES = f̃_d (1 + f_e) L_RANS + (1 − f̃_d) L_LES,    (2)

where L_RANS and L_LES are the RANS (L_RANS = d_w for the SA model) and LES (L_LES = C_DES Δ, with the grid scale Δ defined in the literature [9]) length scales, respectively. In order to define a proper IDDES blending function f̃_d, the authors of IDDES propose a set of semi-empirical functions designed to ensure the correct performance of both the WMLES and the DDES branch [9]. The objective is to prevent an excessive reduction of the RANS Reynolds stresses, as could be caused by the interaction of RANS and LES regions in the vicinity of their interface [9]. Figure 3 shows the instantaneous hybrid length scale over the RANS length scale (d̃/d), illustrating where the IDDES approach switches from RANS to LES. The thin regions close to the wall are fully modeled by the RANS mode, and a ratio around unity is expected there.
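The switching behaviour of Equation (2) can be illustrated in a few lines of code. This is a sketch only: f̃_d and f_e are the semi-empirical IDDES functions of [9] and are treated here as given inputs, and the numerical values below are invented for illustration.

```python
def iddes_length_scale(fd_tilde, fe, l_rans, l_les):
    """Hybrid length scale, Eq. (2): blends the RANS and LES length scales."""
    return fd_tilde * (1.0 + fe) * l_rans + (1.0 - fd_tilde) * l_les

# Near the wall fd_tilde -> 1: pure RANS; far from the wall fd_tilde -> 0: LES.
for fd in (1.0, 0.5, 0.0):
    print(fd, iddes_length_scale(fd, fe=0.0, l_rans=1.0e-3, l_les=2.0e-3))
```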
Away from the wall, the shear-layer transition takes place in the LES region. Since the vortex core region is covered by the LES mode, the RC correction has not been applied within the hybrid RANS/LES model. The spatial and temporal resolution in this zone has been investigated and will be discussed in the following. To demonstrate that the current mesh resolution allows a major part of the turbulence spectrum to be resolved, the LES Index of Resolution Quality has been included in Fig. 3. The LESIQ_ν relates the turbulent viscosity to the laminar viscosity using the formulation proposed by Celik et al. [17]. It is a dimensionless number between zero and one, calibrated such that the index behaves similarly to the ratio of resolved to total turbulent kinetic energy. An index of quality greater than 0.8 is considered a good LES, and 0.95 or higher is considered DNS-like [18]. Overall, the plots indicate a good spatial resolution of the vortex region. Slight weaknesses appear in the downstream parts of the shear layer which, however, are too far downstream to have a significant impact on the vortex core formation and on the shock-induced phenomena.

Numerical approach

An implicit dual-time-stepping approach, employing a Backward-Euler/LU-SGS implicit smoother, has been used in the unsteady simulations. To ensure convergence of the inner iterations in the IDDES runs, Cauchy convergence criteria for several quantities, namely the volume-averaged turbulence kinetic energy, the maximum Mach number and certain aerodynamic coefficients, with tolerance values of 10⁻⁵, have been applied. The matrix dissipation model has been selected for the computation of the fluxes with a central scheme. In the hybrid RANS/LES computations, however, the artificial dissipation has been reduced to prevent excessive damping of the resolved turbulent structures, and a (hybrid) low-dissipation low-dispersion discretisation scheme (LD2) has been used, as suggested by Probst et al. [19]. Time is measured in convective time units, CTU = L/U∞, where L is the characteristic length and U∞ the free-stream velocity. For URANS, the time step size equals 100 μs ≈ 2.64 × 10⁻² CTU. Ten CTU have been computed before starting the time-averaging in order to overcome the initial transient, and five flow-through times have been taken into account in the computation of the mean flow field values. To fully resolve the convective transport and consequently capture the flow characteristics accurately, the maximum allowed time step size for the IDDES runs has been computed. On the "extra-fine" mesh, the time step size Δt = 1 μs = 2.64 × 10⁻⁴ CTU adequately resolves the time scales of the energy-containing eddies in the region of interest and keeps the convective Courant–Friedrichs–Lewy (CFL) number below unity [20], as shown in Fig. 5. To accelerate the simulation and save computational time, an additional simulation has been performed with the time step increased to Δt = 10 μs = 2.64 × 10⁻³ CTU, resulting in a convective CFL number approximately one order of magnitude higher. The IDDES run on the "extra-fine" mesh has been initialised from scratch with a steady run; five CTU have then been performed with the time step size Δt = 10 μs, and ten flow-through times have been considered to compute the mean values of the flow properties. For the small-time-step case, after the five CTU performed with Δt = 10 μs, five further CTU have been computed with Δt = 1 μs before starting the time-averaging, to overcome the initial transient; in this case, too, ten flow-through times have been considered to compute the mean flow field values.
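The time-step bookkeeping above can be cross-checked with the numbers quoted in the text. A small sketch (all values are from the text; the CFL estimate takes the finest cell size of 0.002 L and the free-stream velocity as the convective speed, which is an assumption):

```python
# Cross-check of the time-step sizes quoted in the text.
dt_iddes = 1.0e-6             # s
dt_ctu   = 2.64e-4            # the same time step expressed in CTU
ctu = dt_iddes / dt_ctu       # one CTU = L/U_inf in seconds
print(f"1 CTU = {ctu:.2e} s")  # ~3.79e-3 s

# Convective CFL with the finest cell (0.002 L) and u ~ U_inf (assumed):
cfl = dt_ctu / 0.002          # = (U_inf * dt) / (0.002 * L)
print(f"convective CFL ~ {cfl:.2f}")  # ~0.13 < 1, consistent with Fig. 5
```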
The effect of the time step size can be observed by comparing the black and the green lines in Fig. 4. The lift coefficient remains almost constant in time with the coarser temporal resolution, whereas it starts to drop suddenly when the time step is reduced. The increased temporal accuracy provides an overall improvement of the numerical results. Figure 4 also shows the evolution of the IDDES run on the "fine" mesh. These simulations have been initialised with URANS results, and no difference has been found in the duration of the initial transient. It is also noteworthy that the time step sizes have been doubled (see the legend of Fig. 4), as the size of the finest cell has also been doubled compared with the "extra-fine" reference mesh, as summarised in Table 1.

[Figure 6: Mean surface pressure coefficient c_p, comparison between experimental and numerical data (URANS and experimental data; IDDES with Δt = 10 μs and Δt = 1 μs). The black contour lines indicate the sonic pressure coefficient c_p* = −0.43 [12].]

Results

The flow physics is described and illustrated in the following sub-topics. The analysis of the mean flow features is presented in Section 4.1, providing a comparison of several aspects; the instantaneous flow features are discussed in Section 4.2. In Section 4.1.1, emphasis is placed on the validation of the numerical results by comparison with experimental data for the mean surface pressure, and the location and intensity of the shock waves are discussed. In Section 4.1.2, the vortex flow pattern and the vortex–shock interaction phenomenon are investigated with the help of the scale-resolving results. Finally, the turbulence-related quantities, such as the instantaneous eddy viscosity, the resolved turbulence kinetic energy and the components of the Reynolds-stress tensor, are presented in Section 4.3.

Mean flow features

Over the delta wing, the flow separates at the leading edge and subsequently rolls up to form a stable, separation-induced primary vortex. The flow reattaches on the surface as the primary vortex rolls up, and the spanwise flow underneath subsequently separates a second time to form a counter-rotating secondary vortex outboard of the primary one. The direct effect of the vortices is an additional suction footprint and, consequently, increased lift, which eventually results in a non-linear dependence of the lifting force on the angle of attack. The understanding and prediction of the vortices and shock waves, their generation and evolution, are of essential importance. The interaction between vortices and shock waves is a crucial feature of the flow physics at transonic conditions and is therefore assessed in detail.

Result validation and shock-wave locations

The suction footprint on the wing surface is mainly caused by the high tangential velocity around the inner vortex core. Figure 6 shows the mean surface pressure coefficient. To investigate the prediction of c_p, several slice planes have been extracted, as indicated in Fig. 6(a). The c_p distribution along the spanwise direction at the chordwise positions ξ = 0.2, 0.4, 0.6, 0.8, 0.95 is plotted in Fig. 7, comparing experimental data and simulation results (URANS, IDDES with Δt = 10 μs and Δt = 1 μs on the "extra-fine" mesh, and IDDES with Δt = 2 μs on the "fine" mesh). The comparison demonstrates that the best results have been achieved using the IDDES approach with the extra-fine (reference) mesh and Δt = 1 μs [12].
The plots highlight the sensitivity of the results to the temporal and spatial resolution. The c_p distribution shows that some differences appear between the IDDES results achieved with the two finest meshes (i.e. "fine" and "extra-fine"), even though Fig. 4 had shown almost identical lift coefficients. These discrepancies indicate that additional criteria need to be considered for the mesh convergence, since the integral aerodynamic coefficients alone do not seem to be sufficient. Nevertheless, the uncertainties of the IDDES results due to mesh resolution are considered significantly reduced, and the IDDES result on the "extra-fine" mesh is hence taken as the reference. Figure 7 shows a satisfactory agreement between the IDDES results (Δt = 1 μs) and the experimental data. The IDDES results (Δt = 1 μs) provide the closest match with the experiment of all numerical results, especially at section ξ = 0.4. Hybrid RANS/LES considerably improves the predicted suction footprint of the vortices, although the experimental results are only roughly reproduced in the front part of the wing. The fine resolution is particularly beneficial in the apex region to reduce the grey area, which will be further discussed in Section 4.3. The secondary and tertiary vortices are indicated with Roman numerals (II and III) at the smoother c_p peaks at η ≈ 0.7 and 0.85 from the chordwise location ξ = 0.2. The tertiary vortex then disappears in the IDDES results with Δt = 1 μs at the chordwise location ξ = 0.4, in agreement with the experimental data, whereas it is still erroneously present in the URANS results. The experimental data indicate the overall absence of a tertiary vortex but, taking the data resolution into account, this phenomenon cannot be fully excluded either. Besides, the secondary vortex peak is weaker at ξ = 0.8 in the experimental data and is no longer present at ξ = 0.95, which means that it breaks down between ξ = 0.8 and ξ = 0.95. Figure 7 indicates that in the IDDES with Δt = 1 μs it bursts further upstream (ξ < 0.8), as will also be discussed in Section 4.2. Moreover, it is interesting to note that increasing the time step size does not considerably affect the surface c_p caused by the primary vortex. On the other hand, the larger time step Δt = 10 μs leads to a wrong prediction of the secondary vortex due to large unsteady fluctuations. As can be seen in Fig. 7 at ξ = 0.95, the trend of the c_p curve has been captured well by the numerical results except in the middle part (where the primary vortex core is located). The experiment shows a decay in this region, whereas the simulation still predicts a strong drop of c_p caused by the vortex. The vortex in the experimental data is weaker, which could indicate a tendency to break down near the trailing edge. Besides, as will be discussed in Section 4.1.2, close to the trailing edge the shear layer does not roll up to form a stable leading-edge vortex over the wing, and the suction footprint behind the transported leading-edge shear-layer separation drops abruptly. The primary vortex is then no longer fed by the generated turbulence, and the coherent vortex becomes more vulnerable. A possible explanation of this phenomenon will also be given in Section 4.2 by means of the unsteady flow characteristics. The supersonic region over the wing, and consequently the presence of shock waves, can be examined in Fig. 6, where the sonic pressure coefficient c_p* = −0.43 is indicated by the black contour lines.
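The quoted sonic pressure coefficient can be recomputed from the isentropic relation for the critical pressure coefficient; a quick sketch (γ = 1.4 assumed):

```python
# Critical (sonic) pressure coefficient for Ma_inf = 0.8, gamma = 1.4.
gamma, ma = 1.4, 0.8

cp_star = (2.0 / (gamma * ma**2)) * (
    ((1.0 + 0.5 * (gamma - 1.0) * ma**2) / (1.0 + 0.5 * (gamma - 1.0)))
    ** (gamma / (gamma - 1.0))
    - 1.0
)
print(f"c_p* = {cp_star:.2f}")  # -0.43, matching the contour level in Fig. 6
```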
The IDDES approach replicates the experimental results best in terms of the predicted supersonic area. In front of the secondary vortex a cross-flow shock wave appears; Fig. 6 shows that the outboard-directed flow underneath the primary vortex close to the surface is supersonic there. The so-called "lambda" shock is also illustrated in Fig. 9. Besides, a shock wave typical of transonic free-stream conditions is captured close to the wing apex, where critical flow conditions are reached. Finally, the contour lines of the sonic pressure coefficient in the centre of the delta wing indicate shock waves in the proximity of the sting fairing: a first one located at about ξ = 0.55, a second one at about ξ = 0.75 and a third one closer to the trailing edge, downstream of ξ = 0.9. The same shock waves are shown and enumerated in Fig. 11(a). Besides, Fig. 7 shows the mean surface c_p distribution in the symmetry plane η = 0 to quantify the intensity of the aforementioned shock waves. Although the simulation results predict a higher pressure peak close to the wing apex (ξ = 0) and to the sting tip (ξ ≈ 0.62), the pressure trend and the positions of the shock waves above the wing are captured well by the numerical data.

Vortex pattern and vortex-shock interaction

Considering the simulation results validated by the discussion above, a detailed analysis of the IDDES results with Δt = 1 μs can be performed. The vortex pattern itself, and how it is modified by the shock–vortex interaction, needs to be understood in detail. For this purpose, Fig. 8 shows the field of the mean x-direction vorticity ω_x at the chordwise locations ξ = 0.5, 0.6, 0.7, 0.75 from both experiment and IDDES; PIV data [6] have been collected only at these locations, which are also indicated in Fig. 6(b). Figure 9 then shows the distributions of the mean density-gradient magnitude ||∇ρ||, the normalised mean x-direction velocity u/U∞ and the normalised mean in-plane tangential velocity u_t/U∞, where u_t = √(v² + w²). Further, Fig. 10 shows the primary vortex core line, extracted as the locus of the local maximum of the mean x-velocity. Since a vortical structure collects the low-energy parts of the flow initially contained in the boundary layer, the vortex can also be seen as a region of rotational flow whose stagnation conditions, especially the pressure, are inferior to those of the outer flow field. The vortex is thus like an energy "hole" or entropy "hill", which is more sensitive to disturbing entities such as shock waves [21]. The unburst vortex is characterised by a coherent structure with high values of axial and rotational velocity. As can be seen in Fig. 8, the IDDES results capture the primary and the secondary vortex very accurately. Between ξ = 0.6 and ξ = 0.7 the x-direction vorticity starts to decrease, as the vortex does not expand uniformly with the streamwise increase of the η-coordinate. As the self-induction breakdown theory explains [22], the axial vorticity of the vortex induces an azimuthal velocity, which in turn tilts the vorticity vector towards the azimuthal direction. Due to the gradient of azimuthal vorticity caused by the increased circulation, the leading-edge vortex expands radially. This is followed by a change of sign of the azimuthal vorticity, caused by the region downstream of the vorticity gradient rotating more slowly than the upstream region.
These mechanisms act together up to the turning point where the vortex filaments turn inward onto themselves, causing a change of sign of the axial vorticity. This phenomenon is visible in a slice plane close to the trailing edge in Fig. 11(b), where the instantaneous x-vorticity is shown. It means that, after the interaction with the shock upstream of the sting fairing (ξ = 0.62), the vortex is more vulnerable and prone to breakdown. The detached shock caused by the sting tip interacts with the vortex core and causes a decrease of the x-velocity (see Fig. 10(b) at the chordwise location ξ = 0.62), with a subsequent reduction of the suction footprint, also visible in Fig. 7 at ξ = 0.8. The same behaviour of u_max can be seen in the proximity of the other shock–vortex interactions (at about ξ = 0.2 and ξ = 0.35), as already discussed in Section 4.1.1. The vortex core loses velocity and kinetic energy across the shock. The width of a shock wave is of the same order of magnitude as the local mean free path of the fluid particles; hence, the shock wave can be regarded as a sharp discontinuity in common aerodynamic flows, across which the Mach number and the normal velocity drop suddenly. A more gradual drop is seen in the figures because mean variables are plotted, which means that the shock locations fluctuate slightly over the wing in time. Besides, the in-plane (tangential) velocity is ideally parallel to the shock surface, which means that it should not be altered significantly by the shock wave, as Fig. 9 demonstrates. At the present flow conditions the shock wave is not strong enough to abruptly trigger the vortex breakdown. The intensity of the shock close to the sting-fairing onset increases as the angle of attack rises, since it depends directly on the incoming Mach number; at transonic conditions, an increase of the angle of attack may thus induce vortex breakdown. If, as in this case, the shock wave is not strong enough to induce vortex breakdown, the leading-edge vortex tends to recover its stability after the interaction and overcomes the perturbation. Since the main vortex is fed continuously along the wing by the shear layer emanating from the leading edge (illustrated in Fig. 11(b)), the x-velocity in the vortex core consequently starts to increase again, as can be seen in Figs 9 and 10(b). In Fig. 8, regions of negative divergence of the in-plane velocity vectors (∇ · u) are also marked, which gives an indication of the location and shape of the cross-flow shock wave in the experimental results. These shocks are well captured by the IDDES results, and the distribution of u_t is altered accordingly, as can be seen from the iso-contour lines of u_t in Fig. 9. The magnitude of the mean density gradient ||∇ρ|| in Fig. 9 shows this phenomenon as well. A complex shock system is located beneath the leading-edge vortex, as introduced in Section 4.1.1. The "lambda" shock, also predicted by the experimental data, is mainly caused by the acceleration of the cross-flow in this region, and its presence deeply alters the local flow topology. Driven by the interaction of this shock with the boundary layer on the upper side of the wing, the flow separates and feeds the secondary vortex [23]. Moreover, the leading-edge vortex takes a kidney shape at Ma = 0.8, and cross-flow shocks appear around the leading-edge vortex and on top of the shear layer.
These shocks are assumed to be caused by the curvature of the flow trajectory, leading to an acceleration up to thermophysical conditions which cannot be sustained [23]. They affect the behaviour of the velocity curve, with reversed flow upstream of the sting-fairing shock, as shown in Fig. 10. Figure 9 shows further phenomena which occur in the rear part of the wing. As mentioned in Section 4.1.1, the secondary vortex breaks down within the range 0.7 < ξ < 0.8 and is no longer visible at ξ > 0.8. Besides, here the primary vortex is not fed by the shear layer any further and becomes more vulnerable. The streamwise boundary-layer separation highlighted at ξ > 0.8 might be considered the starting point of the vortex breakdown. Indeed, the breakdown of the secondary vortex generates a chaotic motion of the turbulent shear layer and prevents it from rolling up to sustain the primary vortex, as can be seen at ξ > 0.9.

Instantaneous flow features

Figure 11(a) shows an illustration of the vortices by plotting an instantaneous Q-criterion iso-surface coloured by the normalised helicity H_n. The sign of the helicity (positive in blue, negative in red) indicates the sense of rotation and separates the primary from the secondary and tertiary vortices [24]. The IDDES results in Fig. 11(a) allow a qualitative assessment of the turbulence resolution in the LES regions. Turbulent fluctuations are clearly visible in the vortices; the level of resolution appears adequate for a thorough investigation of the flow physics, and a large spectrum of turbulent structures has been resolved on the grid. Moreover, as already introduced, at the transonic speed of Ma = 0.8 the flow is much more complex than at subsonic conditions, because the flow above the delta wing reaches supersonic speeds and shock waves appear. Figure 11(a), with the iso-surface plot of the density gradient in the x-direction ∇ρ_x, confirms that three main shocks are present over the delta wing in the proximity of the sting fairing. Other shock waves are also present close to the wing apex, as already pointed out. The formation onset of the primary vortex caused by the separated shear layer is visualised in Fig. 11(b), where the instantaneous total pressure and the primary-vortex stream traces are shown. The total pressure is shown since it represents an energetic quantity and gives an indication of the vortex strength and position. Streamlines of different colours are used to visualise the individual effects. The primary vortex formation starts immediately downstream of the wing apex, and the high-velocity core of the primary vortex is built exclusively from flow coming off the shear layer that separates at the wing apex, as indicated by the black streamlines. The primary vortex then grows in diameter because it is fed by the shear layer all along the wing, as can be seen from the green, brown and purple stream traces in Fig. 11(b). This flow reinforces the main core by rotating around it and feeding it with kinetic energy. In this way the vortex is sustained, remains coherent, and its axial velocity increases. Figure 11(b) also shows the instantaneous x-vorticity, which distinguishes between the primary and the secondary vortex by the rotational direction, together with the red stream traces of the secondary vortex (in the right half). This plot can be used to observe the formation and breakdown of the secondary vortex, whose formation is not directly caused by the shear-layer separation.
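The Q-criterion and normalised helicity used in Fig. 11(a) are simple pointwise functions of the velocity and its gradient. A sketch of how they can be evaluated on gridded data (NumPy; the uniform-spacing assumption and axis conventions are illustrative, not from the paper):

```python
import numpy as np

def q_criterion_and_helicity(u, v, w, dx):
    """Pointwise Q-criterion and normalised helicity on a uniform 3D grid."""
    # Velocity-gradient tensor via central differences: G[i][j] = du_i/dx_j.
    G = np.array([np.gradient(c, dx) for c in (u, v, w)])
    S = 0.5 * (G + np.swapaxes(G, 0, 1))   # strain-rate tensor
    W = 0.5 * (G - np.swapaxes(G, 0, 1))   # rotation tensor
    Q = 0.5 * ((W**2).sum(axis=(0, 1)) - (S**2).sum(axis=(0, 1)))

    # Vorticity components and H_n = (u . omega) / (|u||omega|).
    wx = G[2][1] - G[1][2]
    wy = G[0][2] - G[2][0]
    wz = G[1][0] - G[0][1]
    num = u * wx + v * wy + w * wz
    den = np.sqrt(u**2 + v**2 + w**2) * np.sqrt(wx**2 + wy**2 + wz**2) + 1e-30
    return Q, num / den
```

Iso-surfaces of Q > 0 mark rotation-dominated regions, and the sign of H_n separates the counter-rotating vortices, as exploited in Fig. 11(a).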
The stream traces of the particles that form the secondary vortex do not come from the nearby leading edge; the secondary vortex is rather a secondary effect induced by the primary vortex. The two are closely connected, and the secondary vortex only appears well structured if the primary vortex is strong enough. As can be seen in Fig. 11(b), the secondary vortex then breaks down in the second half of the wing, downstream of the sting tip, and this makes it impossible for the shear layer to roll up fully and feed into the primary vortex. These considerations lead to a hypothesis explaining the process of primary vortex breakdown observed at higher angles of attack. Over the wing, the flow separates at the leading edge and subsequently rolls up, forming a stable, separation-induced primary vortex. Typically, the flow reattaches where the primary vortex reaches the wing surface. Under certain conditions, the spanwise flow below the primary vortex subsequently separates a second time to form a counter-rotating secondary vortex outboard of the primary one. The secondary vortex is well structured and clearly visible only at mid-wing, although its suction footprint starts at about the same location as that of the primary vortex (see Fig. 6(b)). The feeding of the shear layer into the primary vortex is supported by the presence of the secondary vortex. When the bow shock generated by the sting tip interacts with the primary vortex, its flow conditions change and the secondary vortex can no longer be sustained. At this point, the boundary layer separates in the streamwise direction in the proximity of the lambda shock, forming a recirculation zone. The fluid previously forming the secondary vortex then breaks up into turbulent motion of smaller scales. The shear layer is thereby prevented from rolling up and feeds the incoherent turbulent flow instead of the primary vortex. Consequently, the primary vortex, which loses its source of kinetic energy, becomes more vulnerable. This process can be regarded as the onset of primary vortex breakdown, which only becomes fully established at slightly higher angles of attack, as known from the experimental data. Figure 12 shows the contours of R_t = μ_t/μ, where μ is the molecular dynamic viscosity. The modeled turbulent eddy viscosity μ_t is compared between URANS and IDDES at the chordwise locations ξ = 0.2, 0.4, 0.5, 0.6, 0.7, 0.8 and 0.95. Overall, the URANS approach produces very high levels of turbulent eddy viscosity in the vortex core. The region of large μ_t values in RANS corresponds to vortex motion with high production of turbulence energy due to strong flow rotation and deformation [4]. As can be seen in Fig. 12, the RC correction avoids the excessive eddy-viscosity production in the front part of the wing; it is, however, not sufficient in the rear part of the vortex region, where the turbulent-viscosity production is by far too high. Regarding the hybrid RANS/LES, it is worth recalling Fig. 3, which shows the instantaneous hybrid length scale over the RANS length scale (d̃/d). The IDDES approach employs the SGS eddy viscosity in the off-wall region, and the IDDES R_t contours show that the SGS eddy viscosity is much smaller than its RANS counterpart. The relatively low level of modelled μ_t in LES mode is associated with the locally fine grid resolution required for LES.
On the other hand, a region of higher μ_t in LES mode indicates strong local flow rotation/deformation and/or coarse grid resolution, usually inducing intensive energy dissipation of the resolved large-scale turbulence [4]. A slight grid refinement might be indicated in the rear part of the wing close to the leading edge, where the shear layer separates and rolls up. Figure 13(a) shows the resolved turbulence kinetic energy K normalised by the squared free-stream velocity U∞². As can be seen from the first slice plane, and given the grey-area issue, K is probably under-predicted in the initial development of the vortical flow: the formation of resolved turbulence is slightly delayed. This is the so-called grey-area problem rooted in the IDDES modeling, and it affects the downstream development of the turbulent process, including the vortex-breakdown onset position. The grey-area problem arises because the formation of the vortex initially takes place in near-wall layers that are modeled by the RANS mode. The transition of the shear layer between the RANS and the LES mode is then believed to be the main reason for the slight discrepancies highlighted in the c_p results in the front part of the wing; it generates a rather stiff resolved vortex motion, leading to a delayed vortex breakdown. In the rear part of the wing, the resolved turbulence kinetic energy shows the separated shear layer and the turbulent downstream transport close to the leading edge, whereas a cross-flow shock alters the K distribution below the primary vortex. Figure 13 also shows the components of the normalised specific Reynolds-stress tensor R_ij = ⟨u_i′ u_j′⟩/U∞² along the wing. R_ij represents the intensity of the turbulent fluctuations along the three directions and their covariances. The components of the Reynolds-stress tensor have been normalised by the free-stream velocity, to obtain the same order of magnitude as K, and by the local mean density, to show the turbulent fluctuations without the density contribution to the energy transport. Besides, given the high values of the axial velocity in the vortex core shown in Section 4.1.2, the density drops considerably in the vortex region. For this reason, the contribution of the turbulence kinetic energy to the total kinetic energy becomes negligible in the vortex core; it acts mainly around the primary vortex core and within the shear layer, where the density is at least one order of magnitude higher. R_11 and R_22 are illustrated in Fig. 13(b). R_11 shows the turbulent behaviour of the transported turbulent shear layer: the turbulent motion becomes more intense once the secondary vortex breaks down, and the turbulence kinetic energy is then transported downstream without feeding the primary vortex, as explained in Section 4.2. R_22 indicates that the fluctuations are generated at the leading edge; it is also the main origin of the high turbulence kinetic energy in the vortex core. Figure 13(c) shows the covariance R_12 and the normal component R_33. R_12 can be used mainly to visualise the cores of the primary and secondary vortices, where negative values are found (positive on the other wing side), and to identify the boundary of the primary vortex, where the highest values are present. The z-direction of the turbulence fluctuations is visible from R_33.
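The normalisation just described is easy to reproduce from stored velocity samples. A minimal sketch (NumPy; the probe data below are random placeholders):

```python
import numpy as np

def reynolds_stresses(u_samples, U_inf):
    """Normalised specific Reynolds stresses R_ij = <u_i' u_j'> / U_inf^2
    and resolved TKE K/U_inf^2 from a (n_samples, 3) velocity time series."""
    fluct = u_samples - u_samples.mean(axis=0)        # u_i' = u_i - <u_i>
    R = (fluct[:, :, None] * fluct[:, None, :]).mean(axis=0) / U_inf**2
    K = 0.5 * np.trace(R)                             # K / U_inf^2
    return R, K

# Illustrative use with random placeholder data:
samples = np.random.default_rng(0).normal(size=(10_000, 3))
R, K = reynolds_stresses(samples, U_inf=1.0)
print(np.round(R, 3), round(float(K), 3))
```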
The flow elements transported downstream over the wing do not roll up to form the primary vortex but tend to grow while moving away from the wing, influencing the primary vortex. Finally, the covariances R_13 and R_23 are shown in Fig. 13(d). The same phenomenon identified with R_12 can also be discussed with the R_13 component. R_13 further indicates the location of the shear layer as well as its thickness and the coherence of the vortex: in the rear part the vortex becomes less coherent and tends to break down. Finally, as can be seen from the legend scale, R_23 is the strongest covariance component; it appears mainly in the shear layer, where the complex process of separation and roll-up takes place. It is also negative in the core centre (positive on the other wing side), but it is not as localised and compact as R_12.

Summary and conclusions

The scale-resolving simulation has provided detailed insight into the transonic flow field around the delta wing. The flow physics has been described, explained and illustrated in detail, divided into the analysis of the mean and the instantaneous flow features. Provided adequate cell and time-step sizes are used, the simulation reveals important flow features and represents a valuable method to improve the understanding of the physics of delta wings at transonic conditions. The sensitivity of the results to temporal and spatial resolution has therefore been addressed and discussed to ensure that the presented data yield a high prediction accuracy. The numerical results have further been validated by comparison with the experimental data for the mean surface pressure coefficient, and the behaviour of the shock waves has been investigated. The surface pressure and the positions of the shock waves have been captured well by the numerical data. The vortex flow pattern and the vortex–shock interaction have then been assessed in detail. The scale-resolving method is capable of reproducing the interaction between the leading-edge vortex and the shock waves. The magnitude of the mean density gradient has been shown to provide insight into the vortex and shock structures and to allow an analysis of their characteristics. The formation of the primary vortex caused by the separated shear layer emanating from the leading edge has been visualised by streamlines. The formation and breakdown of the secondary vortex have been examined as well, providing a hypothesis explaining the process and mechanism of vortex breakdown. The turbulence-related variables, such as the instantaneous eddy viscosity, the resolved turbulence kinetic energy and the components of the Reynolds-stress tensor, have been discussed. It has been shown that the initial development of resolved turbulence is slightly delayed: the resolved K is under-predicted in the very initial stage of the vortical flow, and this affects the downstream development of the turbulent process, including the vortex-breakdown onset position, which is predicted slightly too far downstream on the wing by the IDDES results. Being situated within the well-known range of grey-area issues, this observation provides potential for slight adaptations of the turbulence model in future work. The components of the Reynolds-stress tensor have been shown to characterise the turbulent behaviour of the flow and to visualise several phenomena, such as the thickness and location of the shear layer and the shape and size of the vortex.
IDDES can thus be considered a promising approach to simulate the flow around a delta wing even at transonic conditions, and it might serve as a reference model for such test cases, even at more challenging conditions and configurations. In continuation of this analysis, further steps will focus on the analysis of the vortex structure and the unsteady nature of the flow.
Field lines in art and physics

The physical space surrounding us is not empty; it is filled with objects and fields. Macroscopic objects are visible, but fields can only be detected with sensitive instruments rather than the naked eye. Physicists introduced visual clues (field lines, trajectories, streamlines, equipotential surfaces) based on helpful mathematical concepts to characterise the mostly invisible physical fields. These illustrations are used in high-school-level physics but, due to their abstract nature, are not easily understood by students. Most of the confusion is caused by students mistaking the plotted auxiliary concepts for physical reality. We can help their understanding by drawing parallels between the visual representation of fields and the technical methods used by a painter to express the subject.

Introduction

From the physics point of view, the space surrounding us is not empty [1]: it contains sets of particles of different qualities (bodies and media) and fields (force fields, streamlines and vortices). Bodies and media of macroscopic size are generally visible (except when their absolute refractive index matches that of the surroundings). Force fields, however, are invisible to us; to detect them we need special measuring devices. To characterise force fields, scientists have introduced helpful, mostly mathematical, concepts that can be presented visually, such as field lines, trajectories, flow lines and equipotential surfaces, as in Figure 1. These illustrations are used in secondary physics classes, and a deeper understanding is generally not an easy task for the students, mostly because of their abstract nature. One major source of confusion is that students take these auxiliary concepts for physical reality. In fact, they are systems of lines in the plane or in space that help us understand reality better: with them we can turn invisible characteristics into visible ones, adjusted to the defined quantities. We can make understanding easier by drawing a parallel between the visualisation of fields in science and the technical solutions of substantive expression in fine art. When using this method, we need to emphasise an essential difference. The goal of visualisation in science is to provide an objective, quantitative description [2]: all physicists draw the same (or a very similar) figure for the same force field. Art, by its nature, communicates a subjective point of view. Each artist depicts the truth through his or her own emotion-based filter, so that, despite the same theme, the artworks of different artists can be very different. We may even notice that the same artist revisits a theme several times and the resulting works differ considerably, for instance under the effect of the creator's momentary mood.

Central fields

When teaching physics, we put emphasis on the study of fields in electrostatics. The source of the field is the electric charge. For a single charge we introduce the field-strength vectors defined at any point of the field, the field lines, the equipotential surfaces, and the relations between all of these. Later we extend the description to more complex cases. We can describe the central field of a charge in two ways: either by calculations based on Coulomb's law, or by illustrating it with an experiment (Figure 2). The experiment is performed as follows. We pour oil into a glass saucer, forming only a shallow layer.
We immerse a metal rod in the middle and attach it to an electrostatic generator, such as a Wimshurst machine. We strew semolina onto the oil. The grains become small dipoles; therefore they settle with opposite poles facing one another, and thus they trace out the electric field lines. If we let light shine through, we can project the structure, as shown in Figure 2b. We can easily demonstrate that the source of the field is the charge, which is the centre of the structure. The density of the field lines is high near the source and decreases, still in a radial arrangement, as we move farther away. This implies that the field is getting weaker, i.e. the field strength is getting smaller. The structure described above is illustrative, but it is not equivalent to the central field, because the semolina grains do not follow the radial structure perfectly and so do not give the exact field direction at each point. The uncertainty of the density distribution likewise does not provide accurate enough information to derive the point value of the field-strength vector. If we vary the size of the grains in the experiment, the pattern varies too, even though it still characterises the density and the central structure very well. The central electrostatic field and the light distribution surrounding a point lamp are not identical, but they show similarities. G. Balla (an Italian futurist artist) created a painting in 1909 (Figure 3c) which masterfully illustrates this similarity. The light spreading centrally from the lamp is visualised with little colourful dashes arranged radially, and the decrease of the luminous intensity away from the centre is also demonstrated in an illustrative way. It adds a special interest to the painting (and the artist) that the latest physics results of the period are represented: the tiny colourful dashes representing light refer to the well-known fact that white light is a composite of colours, and they also evoke the photon, the particle nature of light. We can catch a glimpse of the moon, which carries more than one meaning: it suggests a night scene, and it is kin to the lamp, since as a secondary light source it reflects the sunlight. Thus the artificial light, a great achievement of mankind, rules the picture together with the natural light present. The sun and the seeds are symbols in Van Gogh's Sower at Sunset. The primary energy source for the plant growing from the seed is the Sun; the flow departing from the Sun reaches the earth and recolours it. The design of this flow is very similar to the field lines of a single charge. This central design is used in several of the paintings, as can be seen in Figure 3.

Streamlines

We characterise the flow of a continuous medium by streamlines, plotted for designated minute volumes of the medium; we call these trajectories or flow lines. These are curves, and if we draw the tangent of the curve at any point of the flow field, we get the velocity of the particles there. The speed is illustrated by the density of the streamlines, as shown in Figure 4. In slow, viscous flows we can observe that fluids move in a laminar way with constant speed; in this case the streamlines and the trajectories coincide: they are parallel lines, as in Figure 1. At high speeds, especially if obstacles are placed in the stream, this order breaks up: the streamlines become curves, they go haywire, and they become very hard to track, as in Figure 5.
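The radial field-line picture of the point charge discussed above is easy to reproduce numerically. A minimal matplotlib sketch (purely illustrative, with an arbitrary positive charge at the origin and unscaled units):

```python
import numpy as np
import matplotlib.pyplot as plt

# E-field of a single positive point charge at the origin (unscaled units).
x, y = np.meshgrid(np.linspace(-1, 1, 200), np.linspace(-1, 1, 200))
r2 = x**2 + y**2 + 1e-9
ex, ey = x / r2**1.5, y / r2**1.5            # E ~ r_hat / r^2 (Coulomb's law)

plt.streamplot(x, y, ex, ey, density=1.2, color=np.log(np.hypot(ex, ey)))
plt.gca().set_aspect("equal")
plt.title("Field lines of a point charge")
plt.show()
```

The radial lines thin out with distance, mirroring how the semolina pattern (and Balla's dashes) encode the weakening field.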
When the streamlines break up in this way, we speak of turbulent flow; the streamline pattern is then utterly different in nature. Figure 6 shows laminar and turbulent flows. The representation of flows is often present in paintings. To convey the motion of media, such as flowing rivers, waterfalls or winds, the artist abandons the frozen moment and visualises the space in its motion, thereby opening the opportunity to present even deeper content. In Van Gogh's Starry Night (Figure 7a), the turbulent flow indicates a coming storm or a powerful wind; Kármán vortices are at the focus of the painting. In parallel, Figure 7b shows the Kármán vortices that are generated behind mountains in an intense flow, and such vortices can also be observed behind islands, as in NASA satellite pictures (Figure 7c). Art often goes beyond portraying natural phenomena, for instance when it uses the visualisation of fast turbulent flows for emotional effect: depictions of stormy wind or sweeping flood evoke in many of us feelings of uncertainty and fear. The only straight line in Edvard Munch's work The Scream (Figure 8) is the bridge providing the perspective; the artist was a Norwegian expressionist painter. Everything else in the painting is waving, swirling, flowing and eddying, which strongly reinforces the sense of universal instability and insecurity. The original title was The Scream of Nature, as we can read in the artist's notes. According to his memoir, he was watching the sunset over the Oslo fjord, tired and sick; he found the scenery emotionally frightening, and it seemed to scream. According to some analysts, the atmospheric phenomenon that touched the artist was due to the eruption of the Krakatau volcano, as the swirling red sky it produced was observable for only about a day; according to the studies of H. Gibson, the peculiar red sky was instead caused by another phenomenon, the stratospheric polar clouds that form at 20–30 km altitude in wintertime.

2.3. The curved spacetime

In Einstein's theory of gravity there is no gravitational field: the apparent interaction attributed to gravity is caused by the curvature of spacetime, so the effect of material objects is described on the basis of the general theory of relativity. But we face difficulties if we want to illustrate this. Exemplifying the curvature of two-dimensional surfaces is an easier task: we can make models of cylindrical surfaces and study the similarities and differences between flat and curved surfaces. We can see the curvature if we step out of the 2D surface into three-dimensional space. We could do the same with spacetime, with its one time and three space dimensions, if we could step out into a fifth dimension; but that is not possible for us. It is a herculean task for an artist to depict a space that is modified by the presence of matter. Mercury Passing in Front of the Sun by G. Balla (Figure 9) shows the orbital motion of the planet while expressing the modified structure of space around it. The fragmented shapes suggest the inhomogeneity of space, whereas the spiral refers to the change of the path in time, much like a time-lapse recording.

The direct effects of field lines on fine art

The examples mentioned above demonstrate similarities in the interpretation of space between art (especially painting) and physics. In today's modern art it is not unusual for the focus of an artwork to be the reinterpretation of scientific notions such as entropy, energy or velocity.
László Pirk is a contemporary Hungarian painter. His series called Fuga is based on magnetic field lines; the paintings make connections among music, science and physics, the elementary existence of light, and artistic painting. His painting "Fuga 2" is not only a good example of the integration of art and science (as in STEAM pedagogy): analysing it in physics classes is also highly beneficial for students, influencing their outlook and helping them gain a better attitude towards the subject.

Conclusion

One of the most important requirements for imparted physics knowledge is that it should be easy to restructure: the notions we intend to develop should not be rigid or inflexible, while bearing in mind that they must still be interpreted in punctual, accurate and exact ways. If we introduce scientific notions in new ways, in new contexts and in other situations, we can help our students achieve a more creative, integrated knowledge [15]. In passing, we can also provide a higher level of motivation for students whose personal interests focus on other subjects. We have sketched a few such possibilities connecting fine art and physics; our examples were dedicated to central and magnetic fields, streamlines and spacetime.
2,755.2
2021-05-01T00:00:00.000
[ "Art", "Physics" ]
Elastic, inelastic and inclusive alpha cross sections in the 6 Li + 112 Sn system Elastic, inelastic and inclusive alpha cross sections have been measured. Coupled-channels calculations including the couplings of projectile breakup channels and target excitations explain the measured elastic and inelastic data. The inclusive alpha particle production is found to be more than two-thirds of the total reaction cross section. From coupled-channels calculations, the contribution of α from the non-capture α-d breakup of the 6 Li projectile is calculated to be very small. This suggests the possibility that many transfer-induced breakup channels, having α as one of the breakup fragments, are responsible for such a large α particle production. Introduction The study of nuclear reactions involving weakly bound projectiles is very interesting because of the observation of several unusual features compared to the case of strongly bound projectiles. Suppression of complete fusion, the breakup threshold anomaly in the optical potential describing elastic scattering, large production of alpha particles in the reactions, and a larger peak-to-valley ratio of the fission fragment mass distribution at sub/near-barrier energies are some of those interesting observations [1][2][3][4]. Projectile breakup in the field of a target nucleus is known to play an important role in the manifestation of all the above features. To understand the underlying reaction mechanism, experimental data on the projectile breakup cross section are thus very important for comparison with coupled-channels calculations that include the breakup channels. In order to constrain the values of the coupling parameters and the potentials in the coupled-channels calculations, it is also important to simultaneously reproduce the experimental data for as many reaction channels as possible for the same target+projectile system. With the above motivation, it was planned to measure the elastic, inelastic, transfer and breakup cross sections simultaneously for a system involving the weakly bound projectile 6 Li and a target 112 Sn at a near-barrier energy. Choosing 112 Sn as a target has two advantages: i) the inelastic states of the target are well separated from its ground state compared to other isotopes of Sn, and ii) the Q-values of the transfer-induced breakup channels are favorable. Although experimental data on elastic scattering for the above system at several energies are available in the literature, there are no data on inelastic and other reaction channels at those energies. Experimental details The angular distributions for elastic, inelastic and inclusive alpha cross sections for the 6 Li+ 112 Sn system have been measured at a bombarding energy of 30 MeV using the BARC-TIFR Pelletron facility at Mumbai. Four Si-detector telescopes, placed 10° apart on a rotatable arm inside a 1.5 m diameter scattering chamber, were used to detect the light charged particles in the angular range of θ lab = 40°-140°. Each telescope consists of a ΔE detector of thickness ∼50 μm and an E-detector of ∼1500-2000 μm. A typical 2-dimensional (ΔE-E) spectrum acquired using a single telescope at θ lab = 100° is shown in Fig. 1(a). The one-dimensional projection spectra for 6 Li, α and deuteron particles are shown in Fig. 1(b), Fig. 1(c) and Fig.
1(d), respectively. Along with the elastic peak, the yields of the two inelastic states corresponding to the first two excited states of the target, i.e., the 2+ (1.256 MeV) and 3− (2.355 MeV) states, are dominant. For the alpha and deuteron yields, broad spectra of widths ∼6.4 MeV and ∼3.4 MeV, peaking at energies corresponding to the beam velocity, are observed, implying that a major contribution possibly comes from projectile breakup channels. Elastic and inelastic scattering Differential cross sections for the elastic scattering angular distributions normalized to the Rutherford cross sections are shown in Fig. 2(a). The inelastic cross sections corresponding to 112 Sn(2+, 1.256 MeV) and 112 Sn(3−, 2.355 MeV) are shown in Fig. 2(b) and (c) respectively. An optical model (OM) analysis using the code snoopy has been performed to fit the elastic scattering data, and a total reaction cross section of 914 mb is obtained. To include the effect of breakup coupling, continuum discretized coupled channels (CDCC) calculations using the code fresco have been performed using a cluster-folded (CF) potential. The potential parameters for the fragment-target interactions, i.e., α+ 112 Sn (d+ 112 Sn), are taken from Ref. [5] ([6]). An overall normalization factor was applied to the CF potential to reproduce the data. Inclusive alpha The measurement of inclusive alpha particle production in the present reaction is another interesting aspect. As mentioned earlier and shown in Fig. 1, a large alpha yield with velocity equal to that of the incident beam is observed compared to its complementary breakup fragment 'd'. The angular distribution of the inclusive alpha cross section is extracted and shown in Fig. 3. An arbitrary fit that describes the measured data well (shown as a solid line) is applied, and an angle-integrated inclusive alpha cross section of 647±35 mb is obtained (a numerical sketch of this angle integration is given after the figure captions below). This is more than two-thirds of the total reaction cross section (914 mb). From the CDCC calculations, the non-capture breakup cross section of 6 Li→ α+d is found to be only ∼60 mb. The rest of the inclusive alpha cross section could originate from several other reactions [3], e.g., (i) 6 Li (−1n) → 5 Li → α+p, (ii) 6 Li (+d) → 8 Be → α+α, (iii) 6 Li → α+d followed by d capture, etc. Summary The differential cross sections for elastic, inelastic and inclusive alpha production have been measured for the 6 Li+ 112 Sn system at a beam energy of 30 MeV. Coupled-channels calculations are performed to include the effects of projectile breakup and target excitations. The normalized cluster-folded potential that simultaneously explains the elastic and two inelastic states is used to calculate the projectile breakup cross sections. The calculated non-capture breakup cross section of 6 Li→ α+d is found to be very small compared to the inclusive alpha yield, suggesting possible α contributions from various transfer-induced breakup channels. Detailed calculations and measurements of some of these channels are underway to understand the various reaction mechanisms that are responsible for such a large inclusive alpha cross section. Figure 1: Typical 2-dimensional (ΔE-E) spectrum acquired using a single telescope at θ lab = 100° for the 6 Li+ 112 Sn reaction at E lab = 30 MeV is shown in the upper panel (a). One-dimensional projections of the 6 Li, alpha and deuteron particle spectra are shown in panels (b), (c) and (d) respectively.
Figure 2: Experimental differential cross sections for (a) elastic scattering and (b, c) inelastic scattering corresponding to the (2+, 1.256 MeV) and (3−, 2.355 MeV) excited states of 112 Sn respectively. Solid lines correspond to the results of coupled-channels calculations using fresco. Figure 3: Angular distribution of the inclusive alpha cross section. The solid line represents the fit to the measured data used to obtain the angle-integrated cross section.
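The angle-integrated cross section quoted above follows from σ = 2π ∫ (dσ/dΩ) sin θ dθ applied to a smooth fit of the angular distribution. The sketch below illustrates this numerically in Python; the data points and the Gaussian fit shape are illustrative placeholders, not the measured values or the fit function actually used.

# Illustrative sketch of angle-integrating a fitted angular distribution:
# sigma = 2*pi * integral( dsigma/dOmega(theta) * sin(theta) dtheta ).
# The data and the Gaussian shape below are placeholders, not the paper's.
import numpy as np
from scipy.optimize import curve_fit
from scipy.integrate import quad

theta_deg = np.array([40, 50, 60, 70, 80, 90, 100, 110, 120, 130, 140])
dsdo_mb_sr = np.array([30, 55, 80, 95, 100, 90, 70, 50, 35, 22, 15])

def shape(theta, a, mu, w):
    # Arbitrary smooth curve used only to interpolate/extrapolate the data
    return a * np.exp(-0.5 * ((theta - mu) / w) ** 2)

popt, _ = curve_fit(shape, theta_deg, dsdo_mb_sr, p0=[100.0, 80.0, 30.0])

integrand = lambda th: shape(np.degrees(th), *popt) * np.sin(th)
sigma, _ = quad(integrand, 0.0, np.pi)   # theta in radians
sigma *= 2.0 * np.pi                     # azimuthal integration
print(f"angle-integrated cross section ~ {sigma:.0f} mb")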
1,552.6
2016-01-01T00:00:00.000
[ "Physics" ]
The Impacts of Attitudes and Engagement on Electronic Word of Mouth (eWOM) of Mobile Sensor Computing Applications As one of the latest revolutions in networking technology, social networks allow users to keep connected and exchange information. Driven by rapid wireless technology development and the diffusion of mobile devices, social networks have experienced a tremendous change based on mobile sensor computing. More and more mobile sensor network applications have appeared along with the emergence of a huge number of users. Therefore, an in-depth discussion of the human–computer interaction (HCI) issues of mobile sensor computing is required. The target of this study is to extend the discussion of HCI by examining the relationships of users' compound attitudes (i.e., affective attitude, cognitive attitude), engagement and electronic word of mouth (eWOM) behaviors in the context of mobile sensor computing. A conceptual model is developed, based on which 313 valid questionnaires are collected. The research discusses the level of impact on the eWOM of mobile sensor computing by considering user–technology issues, including compound attitude and engagement, which can bring valuable discussion of the HCI of mobile sensor computing in further study. Besides, we find that user engagement plays a mediating role between the user's compound attitudes and eWOM. The research results can also help the mobile sensor computing industry to develop effective strategies and build strong consumer user–product (brand) relationships. Introduction As a two-way personal device, the mobile phone helps users create and consume huge amounts of data every day. Beyond sending/receiving SMS, mobile users can create a "virtual world" by using instant messengers (WhatsApp, WeChat, and other applications), wikis, blogs, social networking platforms and podcasts to access and share information. With the recent development of sensor computing, mobile users can get more benefits, including ubiquitous availability, fast response, and context awareness. Mobile sensor computing, which helps users share information while walking, driving or even sleeping, can provide plenty of extremely useful applications, such as personal services (m-commerce, eHealthcare, exercise/fitness and safety), location-based services (nearby restaurants, bars, gas stations and ATMs), traffic services (accidents, congestion, road work) and vehicle services (VANET) gathered through on-board sensors. The next generation of mobile sensor computing will move forward on understanding external context, intelligent reminder mechanisms, and network resource usage. In the literature, attitude has been defined to embrace the critical elements of tendency, attitude object, and evaluation [14]. Cunningham et al. thought that attitudes are constructed from relatively stable representations [15]. In psychology, attitude can be understood as a particular cognitive process [6]. Previous social psychology literature indicated that attitude should be conceptually separated into two dimensions: affect and cognition [16][17][18]. Studies described the cognitive component as the beliefs or knowledge a person holds toward things [19,20]. Therefore, a sense of factual statement in evaluation often comes with attitude cognition, which expresses the agreement or refusal a person feels toward the attitudinal subject.
In this study, attitude includes affective attitude (AA; users adopt a mobile sensor computing application when they feel happy, positive, and good about it) and cognitive attitude (CA; users adopt a mobile sensor computing application when they find it wise, beneficial, and valuable). Engagement The term "engagement" has been discussed in different fields, such as psychology, sociology, political science, and organizational behavior [21]. In the organizational behavior literature, the concept of engagement has been explored as a means to explain organizational commitment and organizational citizenship behavior [22]. In the marketing and service literature, very few academic articles used the term "engagement" prior to 2005 [23]. In contrast, the term "involvement" is more popular. In general terms, involvement refers to personal phenomena. Involvement is related to an individual's needs, values, and self-concept, and it implicitly expresses the person's beliefs and feelings about an object in a particular situation [24,25]. Involvement influences information searching, information processing, and decision making [26]. Brodie et al. distinguished engagement from "involvement": the concepts of "involvement" or "participation" may be viewed as customer engagement antecedents, instead of dimensions [23]. Mollen and Wilson also thought that "involvement" fails to reflect the notion of interactive experience [27]. Customer engagement is based on a customer's co-creative experiences [28]. Pertaining to engagement contexts, Web 2.0 applications create a unique platform for users [29,30]. Bezjian-Avery et al. found that consumer engagement may be used to assess the effectiveness of interactive media advertising [31]. Hollebeek recognized the importance of customer engagement in Web 2.0 applications, which can help to share information and value based on user bases [21]. Gambetti and Graffigna highlighted that media plays one of the central roles of consumer engagement in maintaining customer–brand relationships [32]. Customer engagement has a positive effect on online social platform participation and word-of-mouth communication [28]. Customer engagement in an online social platform can be seen as a construct including vigor, absorption and dedication towards the platform, driven by involvement and social interaction [28]. eWOM Arndt defined word-of-mouth (WOM) as informal communication among consumers about products or services [33]. Later, researchers made much effort to figure out the mechanism of WOM spreading. Early studies used psychological properties (e.g., customer satisfaction) to predict WOM behaviors [34]. Involvement and self-enhancement are also conducive to generating positive WOM [35]. The term electronic word-of-mouth (eWOM) has been defined as "any positive or negative statement made by potential, actual, or former customers about a product or company, which is made available to a multitude of people and institutions via the Internet" [36]. Recently, studies regard eWOM as spreading behavior by which consumers post their personal experiences (e.g., online reviews, arguments, recommendations) of specific products or services and generate convincing effects on the targeted receivers via the internet [37]. Chu and Kim indicated that eWOM in Social Network Sites (SNSs) conceptually includes three aspects: opinion seeking, opinion giving and opinion passing [38].
When consumers make a purchase decision, some of them are more likely to search for information and advice from others because of opinion seeking behavior [39]. In contrast, opinion leaders may exert a significant influence on others' behavior and attitudes by spreading their comments [40]. Dellarocas argued that in the online social context, opinion passing behavior can easily reach receivers since multidirectional communication on the internet is quite common [41]. Hence, Chu and Kim pointed out that opinion passing behavior is a supplementary concept of eWOM in SNSs [38]. Since the late 1990s, the rapid proliferation of the internet has enabled consumers to spread their post-purchase experience through such online communications as email, website bulletin boards, newsgroups, and blogs [42]. With the emergence of social networking commerce, there has been growing interest in searching for and exchanging eWOM [41]. Following this trend, Okazaki argued that WOM research should focus on ubiquitous media, as both information seeker and source are likely to exchange information via mobile devices [43]. eWOM strongly influences customer behaviors [37,38,44,45]. Varadarajan and Yadav pointed out four important changes that are occurring in the buying environment as a result of eWOM: facilitating access to the type and amount of information; increasing the ease of comparing and evaluating; improving the quality of information; and organizing and structuring information [46]. eWOM has become increasingly popular with the rapid growth of availability and ubiquity in mobile communication, and firms have attempted to disseminate promotional campaigns via mobile internet channels [42,47]. In this study, eWOM includes three aspects: opinion seeking, opinion giving and opinion passing. Research Gap, Research Model and Hypotheses The preceding literature review reflects a substantial amount of research on the subjects of WeChat, attitude, customer engagement, and eWOM. Scholars have shown enormous enthusiasm for studying WeChat, on research topics including the commercial potential of WeChat, CRM in WeChat, etc. However, most of these articles focused on practice instead of theory or empirical research. To date, researchers have not figured out the mechanism of HCI in WeChat. According to the marketing literature, attitude is a predictor of consumers' behavior; however, one of the major drawbacks of these studies is the failure to address how attitude influences customer engagement and eWOM behavior. Very little research has focused on the concept of customer mobile sensor computing engagement [21]. Little theory-guided research has been undertaken to understand the nature of customer engagement and eWOM in the specific context of mobile sensor computing [28]. Most studies regard eWOM as an antecedent of expectation, perception, and behavioral intention. In contrast, not many scholars emphasize eWOM as an outcome variable in their conceptual frameworks, and the communication process and communication effectiveness of eWOM are still not clear. Hence, our study endeavors to bridge these gaps by figuring out the relationships among attitude, customer engagement, and eWOM in the context of WeChat, which will be helpful for designing and evaluating mobile sensor computing applications. According to Saks, engagement is positively related to attitudes [22]. Numerous pieces of evidence demonstrate that attitudes influence both the processing of information and behavior [7,48].
Calder and Malthouse indicated that engagement is "the sum of the motivational experiences" [49]. These experiences can be the customer's attitudes toward an online social media platform [49]. Mollen and Wilson argued that online engagement is the customer's cognitive and affective commitment to a computer-mediated brand value [27]. Overall, these pieces of evidence indicate that attitudes will affect customer engagement. Therefore, the following hypotheses are formulated to explore the relationships between attitude and customer engagement in the context of mobile sensor computing. A consumer forms behavioral (e.g., eWOM) intentions directly toward a specific brand or product. A positive attitude will be reflected in a positive evaluation of the brand or product [6]. Studies pointed out that engaged customers may experience confidence in the brand [50][51][52]. Saks argued that engagement is positively related to individuals' intentions and behaviors [22]. Social judgment theory assumes that people judge and assimilate new information based on existing feelings [53]. Attitude and contextual information are correlated positively based on the assimilation effect [13]. Thus, the following hypotheses are formulated to figure out the relationships between attitude and eWOM in the context of mobile sensor computing. Brodie et al. identified that engaged customers play a key role in providing referrals and recommendations for specific products or services [23]. The customer should not only be satisfied with the product but also be willing to promote the product [54]. eWOM can be considered one of these promotion behaviors. Besides, Vivek et al. suggested that customer engagement is positively associated with an individual's WOM activity [55]. Bowden argued that emotion can drive WOM recommendation [56]. Chu and Kim indicated that consumer–social network relationships should play a key role in shaping eWOM [38]. Furthermore, if a customer is willing to add information to an online social platform, he or she will have a higher propensity to participate in the platform, as well as to spread eWOM [28]. From these perspectives, it is reasonable to argue that customer engagement will affect eWOM. Hence, the following hypotheses are formulated to explore the relationships between customer engagement and eWOM in the context of mobile sensor computing. To this point, we have argued that affective attitude and cognitive attitude guide the processing of information and influence behavior. Indeed, researchers indicated that customer engagement may be manifested cognitively, affectively, behaviorally, or socially [55]. Hence, we argue that customer engagement plays an important role in explaining the relationships between attitude and eWOM. In other words, we have implicitly described a model in which customer engagement mediates the relationships between compound attitudes and eWOM behavior. Thus, we posit the following hypothesis: H12. Customer engagement mediates the relationship between compound attitudes and eWOM behavior. Figure 1 shows the research model based on the hypotheses that we have discussed.
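To make the hypothesized structure concrete, the following is a rough sketch (not the authors' code) of how such a path model could be specified and estimated in Python with the open-source SEM package semopy, which accepts lavaan-style model descriptions; the column names and the data file are hypothetical placeholders for the measured constructs.

# Hypothetical sketch: estimating a simplified version of the path model
# (compound attitudes -> engagement dimensions -> eWOM dimensions) with
# semopy. Column names and the CSV file are placeholders, not study data.
import pandas as pd
from semopy import Model

desc = """
vigor ~ aa + ca
absorption ~ aa + ca
dedication ~ aa + ca
opinion_seeking ~ aa + ca + vigor + absorption + dedication
opinion_giving ~ aa + ca + vigor + absorption + dedication
opinion_passing ~ aa + ca + vigor + absorption + dedication
"""

df = pd.read_csv("survey.csv")  # hypothetical 313-row response file
model = Model(desc)
model.fit(df)
print(model.inspect())          # path estimates, standard errors, p-values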
Measures of Constructs Attitudes, engagement, and eWOM have been widely discussed in the literature. The choice of scales for our study constructs has therefore been based on the findings of previous publications and then adapted to the context of our study. Structured questionnaires comprising 33 items were used to measure the 10 constructs mentioned in the research model. At the beginning of the survey, respondents were first asked to recall their feelings about a most impressive or attractive Official Account on WeChat. Next, participants rated their own brand (product) related affective attitude, mobile sensor computing platform related affective attitude, brand (product) related cognitive attitude, mobile sensor computing platform related cognitive attitude, vigor, absorption, dedication, opinion seeking, opinion giving, and opinion passing for the Official Account, using a five-point Likert scale that ranged from "strongly disagree" (1) to "strongly agree" (5). An in-depth interview was conducted among a small group (12 students) of heavy users of WeChat to revise and adjust the scales of the research constructs in the context of WeChat. The students were asked three open-ended questions: 1. The user experience of WeChat; 2. The user experience of Official Accounts; 3. Their opinion of WeChat marketing. All the responses and answers were noted during the interview. The measured items are listed in Table 1. Construct Items Source Brand (product) related affective attitude 1. Regarding the use of the Official Accounts to access reviews about brand (product) evaluation, I feel very happy. Shih et al. [37] 2. Regarding the use of the Official Accounts to access reviews about brand (product) evaluation, I feel very positive. 3. I very much like to use the Official Accounts to access reviews about brand (product) evaluation. 4. The Official Accounts are very attractive to me. Brand (product) related cognitive attitude 1. Regarding the use of the Official Accounts to access reviews about brand (product) evaluation, I feel very wise. Shih et al. [37] 2.
Regarding the use of the Official Accounts to access reviews about brand (product) evaluation, I feel very beneficial. 3. Regarding the use of the Official Accounts to access reviews about brand (product) evaluation, I feel very valuable. 4. Regarding the use of the Official Accounts to access reviews about brand (product) evaluation, I feel very useful. Data Collection In order to test the research model, we collected data from a convenience sample of university students from Macau University of Science and Technology (MUST) using a structured questionnaire. University students were chosen as our research sample because college students are deemed critical SNS users [58]. The data collection procedure comprised two stages. First, we conducted a pilot study to pretest the survey instrument. During the pilot study stage, we formed the questionnaire based on the research theme and distributed it to university students. A total of 23 responses were collected in this stage. The results of the pilot test were used to revise and refine the questions. After the pilot test stage, we carried out the formal research by distributing the revised questionnaire. As in the pretest stage, the revised questionnaires were distributed to university students from MUST. The survey was administered on campus over a 4-week period, and students from MUST were solicited for participation. We obtained responses from a total of 313 students, resulting in a response rate of 86.9%. Descriptive Analysis The results of the descriptive analysis are presented in Table 2. On average, respondents were 20 years old; 56.9% of them were female and 43.1% male. As our sample was collected from university students, participants had all received a good education: 78.3% of them had college or equivalent education experience and 21.7% were postgraduate students. All of the respondents had experience in using WeChat: 42.5% of them had used WeChat for more than 2 years and 44.7% claimed they used WeChat for more than 3 hours per day. Reliability and Construct Validity We used Cronbach's alpha to measure the reliability of our research constructs: brand (product) related affective attitude, brand (product) related cognitive attitude, sensor computing platform related affective attitude, sensor computing platform related cognitive attitude, vigor, absorption, dedication, opinion seeking, opinion giving, and opinion passing. As shown in Table 3, the Cronbach's alpha values of all constructs exceeded Nunnally's recommended benchmark (α = 0.703, 0.760, 0.845, 0.715, 0.734, 0.744, 0.721, 0.764, 0.741, and 0.716, respectively). These results indicate that all of the constructs in our research have a high level of internal consistency reliability, with consistent and stable items. Before conducting the factor analysis, each construct of the study was assessed for validity (Tables 4-7): affective attitudes, cognitive attitudes, engagement and eWOM all show good construct validity. Table 8 reports correlations among all research constructs and control variables. Almost all the constructs were positively associated with each other. However, there were no statistically significant connections between some constructs (i.e., sensor computing platform related affective attitudes and absorption; sensor computing platform related cognitive attitudes and absorption; sensor computing platform related cognitive attitudes and opinion seeking; absorption and opinion seeking).
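As a reproducibility aid, the following minimal sketch (with synthetic Likert responses, not the study's data) shows the Cronbach's alpha computation used for the reliability check above:

# Minimal sketch of Cronbach's alpha on a (respondents x items) score matrix.
# The simulated responses below are synthetic, not the study's data.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
latent = rng.normal(3, 1, size=(313, 1))                   # shared construct
scores = np.clip(np.rint(latent + rng.normal(0, 0.7, (313, 4))), 1, 5)
print(f"alpha = {cronbach_alpha(scores):.3f}")             # expect > 0.7 here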
Factor Analysis and Mediation Effect Analysis The normed chi-square (i.e., χ2/df), incremental fit indexes (e.g., CFI, NFI, TLI) and absolute fit indexes (e.g., RMSEA, GFI, AGFI) were chosen to measure the fit of our research model. According to the acceptable thresholds of these fit indexes, we suggest that our model fits the data fairly well and can be used to conduct hypothesis tests (Table 9). The parameter estimates of our research model are presented in Table 10. H1a, H1b, and H1c assumed that brand (product) related affective attitudes directly and positively influence vigor, absorption, and dedication, respectively. The paths from brand (product) related affective attitudes to vigor, absorption and dedication were positive and statistically significant (standardized β = 0.209, p < 0.01; standardized β = 0.162, p < 0.05; standardized β = 0.258, p < 0.01, respectively). Thus, H1a, H1b, and H1c were supported by the data. H2a, H2b, and H2c were also supported, as the paths to vigor, absorption and dedication from sensor computing platform related affective attitudes were statistically significant (standardized β = 0.379, p < 0.001; standardized β = 0.399, p < 0.001; standardized β = 0.338, p < 0.001, respectively). Consequently, mobile sensor computing platform related affective attitudes have a direct and positive influence on vigor, absorption and dedication. In H3a, it was presumed that brand (product) related cognitive attitudes directly and positively influence vigor. This path was statistically significant (standardized β = 0.252, p < 0.001). The association between brand (product) related cognitive attitude and absorption (standardized β = 0.256, p < 0.001) was also supported by the data. By contrast, the path from brand (product) related cognitive attitude to dedication, which was hypothesized in H3c, was not statistically significant (standardized β = 0.115, p = 0.108). Hence, H3c was unsupported. The standardized path estimates provided support for H4a, H4b, and H4c. Therefore, mobile sensor computing platform related cognitive attitude has a direct and positive influence on vigor, absorption and dedication. H5a, H5b, and H5c assumed that brand (product) related affective attitudes directly and positively influence opinion seeking, opinion giving, and opinion passing, respectively. The paths from brand (product) related affective attitudes to opinion seeking and opinion giving were positive and statistically significant (standardized β = 0.511, p < 0.001; standardized β = 0.485, p < 0.001, respectively). Thus, H5a and H5b were both supported by the data. By contrast, the path from brand (product) related affective attitudes to opinion passing, which was hypothesized in H5c, was not statistically significant (standardized β = 0.247, p = 0.450). This result was contrary to expectation. In H6a, it was presumed that sensor computing platform related affective attitudes directly and positively influence opinion seeking. This path was not statistically significant (standardized β = 0.400, p = 0.123). Therefore, H6a was not supported by the data.
By contrast, the paths from mobile sensor computing platform related affective attitudes to opinion giving and opinion passing, which were hypothesized in H6b and H6c, were statistically significant (standardized β = 0.716, p < 0.001; standardized β = 0.606, p < 0.001, respectively). H8a, H8b, and H8c were also supported. Therefore, sensor computing platform related cognitive attitudes have a direct and positive influence on opinion seeking, opinion giving, and opinion passing. In H9a, it was presumed that vigor directly and positively influences opinion seeking. This path was statistically significant (standardized β = 0.260, p < 0.05). Like H9a, H9c was also supported by the data. By contrast, the path from vigor to opinion giving, which was hypothesized in H9b, was not statistically significant (standardized β = 0.174, p = 0.139). In H11a, it was presumed that dedication directly and positively influences opinion seeking. This path was statistically significant (standardized β = 0.408, p < 0.001). H11c was also supported by the data. By contrast, the path from dedication to opinion giving, which was hypothesized in H11b, was not statistically significant (standardized β = 0.081, p = 0.398). For the mediation effect analysis, we tested the total effects of the independent variables (i.e., the dimensions of compound attitudes) on the dependent variables (i.e., the dimensions of eWOM). The results of the standardized total effect estimates and significance tests are shown in Tables 11 and 12, respectively. As can be seen from Table 12, all standardized total effect estimates are statistically significant except the path from brand (product) related affective attitudes to opinion passing and the path from sensor computing platform related affective attitudes to opinion seeking. Following [28,38], the dimensions of customer engagement (i.e., vigor, absorption, and dedication) neither mediate the relationship between brand (product) related affective attitudes and opinion passing nor the relationship between sensor computing platform related affective attitudes and opinion seeking. The direct effects of compound attitudes (i.e., sensor computing platform related cognitive attitudes, brand (product) related cognitive attitudes, sensor computing platform related affective attitudes, brand (product) related affective attitudes) on customer engagement and of customer engagement on eWOM were tested at the dimension level. As shown in Table 13, all standardized direct effect estimates are statistically significant except the path from brand (product) related cognitive attitudes to dedication, the path from dedication to opinion giving, and the path from vigor to opinion passing. Table 14 reports the indirect effect estimates of compound attitudes on eWOM at the dimension level. The significance of these estimates is presented in Table 15. To our surprise, all of the indirect effect estimates are not statistically significant.
Accordingly, we argued that vigor fully mediates the relationships between: brand (product) related affective attitudes and opinion seeking; sensor computing platform related affective attitudes and opinion giving; brand (product) related cognitive attitudes and opinion seeking; brand (product) related cognitive attitudes and opinion giving; sensor computing platform related cognitive attitudes and opinion seeking; and sensor computing platform related cognitive attitudes and opinion giving. Absorption fully mediates the relationships between: brand (product) related affective attitudes and opinion seeking; brand (product) related affective attitudes and opinion giving; sensor computing platform related affective attitudes and opinion giving; sensor computing platform related affective attitudes and opinion passing; brand (product) related cognitive attitudes and opinion seeking, opinion giving, and opinion passing; and sensor computing platform related cognitive attitudes and opinion seeking, opinion giving, and opinion passing. Dedication fully mediates the relationships between: brand (product) related affective attitudes and opinion seeking; sensor computing platform related affective attitudes and opinion passing; and sensor computing platform related cognitive attitudes and opinion seeking and opinion passing. According to Table 16, the standardized regression coefficient and the standard error of the path from brand (product) related cognitive attitudes to dedication are 0.115 and 0.033, while those of the path from dedication to opinion passing are 0.596 and 0.037. Hence, dedication mediates the relationship between brand (product) related cognitive attitudes and opinion passing with a small mediated effect (a sketch of the corresponding z-statistic computation is given below).
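The z statistics for the mediated paths reported here are consistent with a Sobel-style test on the product of the two path coefficients; since the paper does not name the exact test, the following computation is an assumption-based sketch using the coefficients and standard errors quoted above.

# Sobel-style z-test for a mediated (indirect) effect a*b, where a is the
# predictor->mediator path and b the mediator->outcome path. An assumption:
# the paper reports z values but does not state the exact test used.
import math

def sobel_z(a: float, se_a: float, b: float, se_b: float) -> float:
    return (a * b) / math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)

# Reported values: brand-related cognitive attitudes -> dedication
# (0.115, SE 0.033) and dedication -> opinion passing (0.596, SE 0.037)
z = sobel_z(0.115, 0.033, 0.596, 0.037)
print(f"z = {z:.2f}")  # ~3.4, significant at p < 0.05 since |z| > 1.96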
Similarly, we demonstrated that dedication mediates the relationship between brand (product) related cognitive attitudes and opinion giving (z = 3.329, p < 0.05, small mediated effect); dedication mediates the relationship between brand (product) related cognitive attitudes and opinion seeking (z = 3.420, p < 0.05, small mediated effect); dedication mediates the relationship between sensor computing platform related cognitive attitudes and opinion giving (z = 10.573, p < 0.05, large mediated effect); dedication mediates the relationship between sensor computing platform related affective attitudes and opinion giving (z = 7.417, p < 0.05, medium mediated effect); dedication mediates the relationship between brand (product) related affective attitudes and opinion giving (z = 9.137, p < 0.05, large mediated effect); vigor mediates the relationship between sensor computing platform related affective attitudes and opinion passing (z = 10.743, p < 0.05, large mediated effect); vigor mediates the relationship between brand (product) related affective attitudes and opinion passing (z = 10.460, p < 0.05, large mediated effect); vigor mediates the relationship between sensor computing platform related cognitive attitudes and opinion passing (z = 8.494, p < 0.05, large mediated effect); and vigor mediates the relationship between brand (product) related cognitive attitudes and opinion passing (z = 10.720, p < 0.05, large mediated effect). Combining the mediation effect analyses above, we inferred that H12 (customer engagement mediates the relationship between compound attitudes and eWOM behavior) was partially supported by the data. Theoretical Contributions First, this research examined the relationships among compound attitude, engagement, and eWOM in the context of mobile sensor computing applications. Traditionally, most research treats eWOM as an antecedent of behavioral intention. We analyzed the relationships among compound attitude, engagement, and eWOM at different dimensional levels. Our research indicated that brand (product) related affective attitudes positively influence vigor, absorption, dedication, opinion giving, and opinion seeking; sensor computing platform related affective attitudes are positively associated with vigor, absorption, dedication, opinion giving, and opinion passing; brand (product) related cognitive attitudes positively influence vigor, absorption, opinion seeking, opinion giving, and opinion passing; sensor computing platform related cognitive attitudes are positively associated with vigor, absorption, dedication, opinion seeking, opinion giving, and opinion passing; vigor positively influences opinion seeking and opinion passing; absorption is positively associated with opinion seeking, opinion giving, and opinion passing; and dedication positively influences opinion seeking and opinion passing. Besides, we also found that customer engagement partially mediates the relationship between compound attitudes and eWOM behaviors in the context of WeChat.
Further, the results of the moderating effect analyses indicated that gender has an interaction effect on brand (product) related affective attitudes, social media platform related cognitive attitudes and opinion passing; educational background has an interaction effect on dedication and opinion passing; time of usage (total) has an interaction effect on social media platform related affective attitudes, social media platform related cognitive attitudes, dedication and opinion passing; and time of usage (daily) has an interaction effect on social media platform related affective attitudes, social media platform related cognitive attitudes, opinion giving, and opinion passing. Second, our study enhanced the understanding of compound attitudes, customer engagement, and eWOM behaviors by delineating the eWOM process in WeChat.
We empirically investigated customer engagement as an important antecedent of eWOM behaviors in the context of mobile sensor computing, for which empirical evidence was previously lacking. Furthermore, the empirical evidence indicated that compound attitudes, which consist of brand-related attitudes and mobile sensor computing related attitudes, are convincing predictors of customer engagement behavior. Accordingly, our conceptual model is representative of the emerging mobile sensor computing platforms. The theory of reasoned action, social cognitive theory, and the theory of planned behavior indicate that attitudes link directly to behavioral intention or behaviors [59]. Our research reinforced these theories by empirically demonstrating the relationships between compound attitude and eWOM. Furthermore, we found that brand (product) related affective attitudes are not positively associated with opinion passing. We also found that sensor computing platform related affective attitudes are not positively associated with opinion seeking in the context of mobile sensor computing. Implications for Practice Mangold and Faulds argued that sensor computing plays a hybrid role in online business, as it enables companies to produce a unified consumer-centric advertising message to connect with their customers [60,61]. Moreover, eWOM in the context of sensor computing allows consumers to communicate with each other. When giving information in WeChat, consumers tend to share their product experience with all their contacts; these communications may also play a critical role in IMC. Our study suggested that in order to shape positive eWOM, a marketer should pay attention to consumers' attitudes toward the brand (product) and try to engage them in mobile sensor computing platforms. This research not only enriches the theoretical knowledge about the determinant factors of eWOM in social media, but also helps IMC marketers to develop effective social media marketing strategies and build strong consumer–brand (product) relationships. We found that effort invested in building good customer engagement in mobile sensor computing can drive positive eWOM behavior. Since WeChat provides an efficient channel for building these relationships, marketers should try to encourage WeChat users to engage with their Official Accounts and spread positive eWOM regarding selected brands or products. Our research found that compound attitudes positively influence customer engagement in mobile sensor computing platforms. A marketer should make efforts to shape customers' attitudes. In the context of WeChat, these efforts should target building an attractive and friendly Official Account. Since customer engagement positively influences customers' eWOM behavior, affirmative and valuable eWOM behavior toward a brand (product) can be expected if a customer shows positive attitudes toward engaging with the social media. Empirical evidence from our research demonstrated that consumers will show opinion seeking, opinion giving and opinion passing in the context of social media. As mentioned before, eWOM may not link directly to the profit a marketer expects, but eWOM can affect sales and consumers' decision-making processes. Besides, Amblee and Bui indicated that eWOM can be used to convey the reputation of the product, the reputation of the brand, and the reputation of complementary goods [2,62-64]. By affecting the reputation of a brand (product), eWOM has a great influence on marketing.
Managers have been interested in customer engagement for about a decade. A large number of companies provide platforms to get customers to come to their websites and purchase. However, companies are not sure where or how to target their efforts [55]. This paper suggests that marketers should focus on both existing and potential customers' compound attitudes. Efforts toward customer brand (product) engagement on mobile sensor computing platforms may also help marketers to improve customer relationships. Limitations and Further Studies Our research sample was collected from both undergraduates and postgraduates of MUST in Macau, since college students represent the majority of SNS and mobile sensor computing platform users [38,65]. However, country-specific factors cannot be ignored. Hence, future studies should examine how eWOM communication in mobile sensor computing varies across generations and regions. Because of limited time and resources, the causal relationships among the variables are still not crystal clear. Future studies should consider other factors that can lead to eWOM communication in mobile sensor computing and identify more antecedents that may influence customer engagement. In order to reduce bias, further studies should conduct research on different types of mobile sensor computing platforms.
8,268.4
2016-03-01T00:00:00.000
[ "Computer Science", "Sociology" ]
Influencing Factors in Using an Interactive E-Learning Tool We are investigating the factors that could affect the learning outcomes of an interactive eLearning tool. Our experiment consisted of undergraduate and postgraduate participants spanning three semesters in a Database Management Systems course conducted in a blended learning environment. The eLearning tool has a built-in event logger that captures the interaction whenever a student uses the system. We analyze the learning outcomes and effectiveness through the captured interaction events together with factors such as the students' course perceptions (CP), evaluation of self-efficacy (SE) beliefs, class attendance, student gender, and academic discipline, based on social learning theory. We used an analytical path model, t-tests and one-way ANOVA to determine the influencing factors directly related to the interactive eLearning tool. Introduction This study investigates the learning outcomes of an interactive eLearning tool on the learners and the factors that may have caused the effects. It was conducted with over 400 Higher Learning Institution (HLI) students as participants, spanning three semesters in a Database Management Systems course conducted in a blended learning environment. Interaction with the eLearning system was collected, and a survey was conducted at the end of each semester for further data collection. This study of learning effectiveness is based on social learning theory and uses a number of influencing factors such as the students' course perceptions (CP) and evaluation of self-efficacy (SE) beliefs, tutorial attendance, student gender, and academic discipline. We use an analytical path model, t-tests and one-way ANOVA to analyse the data sets. The analysis reveals consistent usage and a higher level of eLearning engagement for above-average-performance students. These patterns of usage provide key indicators of learning effectiveness. Our results also show that SE, tutorial attendance, and academic discipline are mediating factors and have a direct impact on learning effectiveness. In the next section we discuss the related work together with the hypotheses and the proposed model. We then describe the methodology, followed by the results and discussion, and close with a conclusion. Related Work Various factors in learning environments and in the students themselves affect the way they go about the process of learning. There are studies conducted online, in conventional education, and in organisations that provide concrete evidence and frameworks for the effectiveness of, and the factors that contribute to, learning and learning transfer. Very little has been investigated about the factors behind the effectiveness of interactive eLearning tools in a blended learning environment. In Vermunt's [1] work, for example, he looked into the relationship between learners' ways of learning and personal, contextual and performance variables in education in general. In that study, a comprehensive and detailed analysis was conducted on whether and how age, gender, academic discipline, and performance outcomes affect students' learning patterns in a middle-sized university.
From an organisational training perspective on factors affecting learning transfer or learning effectiveness, Baldwin and Ford [2] provided a critique of the existing transfer research and suggested directions for future research. The review also noted key limitations in the way transfer had been operationalized. They closed their review by noting that the existing research is problematic, given the relatively short-term, single-source, perceptual database that had been created. Since then, the transfer literature has expanded to address a number of issues raised in that review [3]. A wider array of individual differences and motivational variables has been studied for their impact on learning effectiveness. In addition, a number of studies have examined the learning environment for its impact on the transfer of learning. Recent attempts were made to qualitatively summarize what we know about learning effectiveness from this expanding research base [4]-[8]. These reviews have typically focused on trainee and work environment characteristics and their impact on transfer; moreover, they highlight several inconsistent and conflicting findings in the literature. For instance, [7] conclude that the relationships between general dispositions and learning transfer have shown inconsistent results. Similarly, [6] concluded that there was mixed support for conscientiousness and that other personality variables had minimal or no empirical evidence supporting their relationship with learning transfer. They also described the mixed findings on the impact of trainee interventions on learning effectiveness. Hypothesis E-Learning interactive engagement represents the interaction and experiences between learners and an interactive eLearning tool called the Database Learning Tool (DBLT). The DBLT is a 24/7 online eLearning tool to supplement students' learning in an introductory course on Database Management Systems. Students can log in to the DBLT on their own initiative and work through the examples to enhance their conventional learning [9]. The DBLT records a count whenever a student responds to a question; a hypothetical sketch of this kind of response-count logging is given after the hypothesis below. In our system the count of question responses is more meaningful than the time spent using the DBLT: a student could log on to the DBLT for a long time doing unrelated things and spend only a minimal amount of that time on their learning. Therefore, in our study the count of responses is used as the variable for eLearning interaction. Hence, the more counts a student makes, the greater the experience as well as the amount of learning. We believe that interaction with the eLearning tool has a direct impact on a student's learning outcomes in terms of GPA performance, and hence the following hypothesis: H1: Students' eLearning interactive engagement is a direct function of their academic performance (GPA).
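The paper does not describe the DBLT's internals, so the following is only a hypothetical sketch of the kind of response-count logging implied above: one event per answered question, keyed by student, with the per-student count serving as the eLearning-interaction variable rather than time on system.

# Hypothetical sketch of a DBLT-style response-count logger; the schema and
# function names are invented for illustration, not the tool's actual code.
import sqlite3
import time

con = sqlite3.connect("dblt_log.db")
con.execute("""CREATE TABLE IF NOT EXISTS events
               (student_id TEXT, question_id TEXT, ts REAL)""")

def log_response(student_id: str, question_id: str) -> None:
    # One row per answered question, regardless of time spent logged in
    con.execute("INSERT INTO events VALUES (?, ?, ?)",
                (student_id, question_id, time.time()))
    con.commit()

def response_count(student_id: str) -> int:
    # The per-student count used as the eLearning-interaction variable
    (n,) = con.execute("SELECT COUNT(*) FROM events WHERE student_id = ?",
                       (student_id,)).fetchone()
    return n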
When interacting with an eLearning tool, an individual can acquire attitudes, readjust behaviour, and gain knowledge. The individual learning process through an eLearning tool is also composed of: 1) SE beliefs; and 2) satisfaction, or the learner's perception (CP) of a course, in a blended learning environment. Grounded in social cognitive theory [10], self-efficacy (SE) has been a research focus as a predictor of individual perception and computer technology use. ELearning provides an alternative channel to learn at any point in time and space, and provides more opportunities to be an active, self-regulated learner. Bandura's social learning theory [11] [12] also stresses the self-regulation of learning, which functions as an initial motive for achieving desirable learning outcomes. Accordingly, individuals will self-initiate, regulate their learning, and actively construct knowledge by acquiring, generating, and structuring information. This results in learning activities that centre on learner autonomy and interactive learning actions. ELearning offers more opportunities for improving problem-solving capabilities and achieving learning effectiveness [13]. The attitudes of the learners are also key to individuals attaining desirable learning outcomes, which is reflected through reactions or the learners' perception of the course in terms of their satisfaction with the learning. The study in [14] indicated that the level of learners' satisfaction was significant in terms of the level of transfer of learning. Based on the argument above, we hypothesize that: H2: Students' eLearning interaction is a function of their SE beliefs; H3: Students' eLearning interaction is related to CP; and H4: Students' CP is directly related to the performance outcome (GPA). Bandura [10] defines SE as personal beliefs about one's capabilities to learn or perform skills at designated levels. Individuals' SE beliefs influence their thoughts, emotional reactions, and behavior. People with high SE beliefs show more persistence, less fear and more willingness to solve problems, and therefore feel more confident in achieving the desired level of outcomes [10]. Thus we hypothesize that: H5: Students' SE belief is related to the transfer of learning. We therefore propose the research model shown in Figure 1, using an analytical path model, based on our argument above. Methodology The experiment was carried out over three semesters with more than 400 students from diverse backgrounds. The course ran normally with 1 lecture, 1 lab session and 1 tutorial session. The assessment consisted of two class tests, one assignment and a final exam at the end of the semester. The students' interactions with the DBLT were recorded throughout. We recorded that only 354 students used the DBLT. There were 150 respondents to the survey questionnaires and, of these, five were eliminated due to incompleteness. Of the remaining 145 usable responses, 59 are female and 86 male. We used an analytical path model to measure the effect of eLearning interaction and experience on the performance outcomes. Three major instruments proposed by Kraiger et al.
[15]: cognitive (directly from GPA-based performance); affective (self-efficacy belief and perception of course); and skill-based (directly from GPA-based performance) were measured using data collected through DBLT logs and feedback from student survey questionnaires. The DBLT logs form the basis of the data for eLearning interaction, and the survey data capture the learners' reaction and self-efficacy. The GPA the students achieved at the end of the semester was used to measure the knowledge- and skill-based learning domains. Students' attendance, gender and academic discipline information were collected directly.

Survey questions used a 5-point Likert scale to measure the variables of SE and CP. The SE and CP questionnaires were based on Bandura's SE theory [11] and Kirkpatrick's theory of training evaluation [16], with 11 items related to SE belief and 12 items related to students' CP. The GPA is aggregated from two inter-semester tests, one assignment, and a final exam.

Results and Discussion

Partial Least Squares (PLS) was used to analyse the survey and collected data for the hypotheses. A t-test, one-way ANOVA, and the Pearson correlation tested the factors of gender, academic discipline, and attendance, respectively, to check their influence on the GPA outcomes.

Descriptive statistics and correlations amongst the direct and endogenous factors

SPSS was used for the statistics on the normality assumption: the means, standard deviations, skewness and kurtosis. The absolute values of skewness ranged from 0.75 to 0.99, while the absolute values of kurtosis ranged from 0.26 to 1.08, none of which exceeded the absolute value of 2.0, indicating a normal distribution of the data. Correlations were also examined to test the strength of the relationships amongst the variables of interest. The results revealed correlations amongst all of the variables at the alpha level of 0.01 (Table 1).

Assessment of measurement model and hypothesis testing

Reliability and validity of the variables CP and SE were assessed based on guidelines set out by Joreskog and Sorbom [17]. The values of Cronbach's coefficient (α) were all above 0.7, indicating a reliable measurement instrument. We assessed convergent validity by examining the composite reliability (CR) and average variance extracted (AVE) from the constructs. CR values are higher than the suggested minimum of 0.7 and AVE values were all above 0.5, thus providing evidence of convergent validity. The Variance Inflation Factors (VIF) for the independent variables of CP and SE indicate no significant collinearity between the independent variables, confirming that the factors do not load onto each other in the measurement [18].

Partial Least Squares (PLS) was used to test the hypothesized relationships of eLearning interactive engagement, CP, SE and tutorial attendance. It estimates the parameters of the structural model, that is, the strength and direction of the relationships among the model variables [19]. Table 2 shows the summary of the results. The models' predictive power was also assessed by measuring R² for the variables of H1, H4 and H5.

The first hypothesis, H1, strongly suggests a positive impact of interactive eLearning on students' learning transfer and, hence, their GPA performance (β = 0.244). This result confirms that interactive eLearning leads students to improved performance.
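As a hedged illustration of the measurement-model checks described above, the following minimal Python sketch computes Cronbach's α from item responses, and CR and AVE from standardized factor loadings; the response matrix and loadings are hypothetical stand-ins, not the study's data.

```python
# Minimal sketch of the reliability/validity checks: Cronbach's alpha,
# composite reliability (CR) and average variance extracted (AVE).
# All inputs below are hypothetical.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, n_items) matrix of Likert responses."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

def composite_reliability(loadings: np.ndarray) -> float:
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    s = loadings.sum()
    errors = (1 - loadings**2).sum()
    return s**2 / (s**2 + errors)

def ave(loadings: np.ndarray) -> float:
    """AVE = mean of squared standardized loadings."""
    return float((loadings**2).mean())

rng = np.random.default_rng(1)
responses = rng.integers(1, 6, size=(145, 11))  # hypothetical 5-point Likert data
print(round(cronbach_alpha(responses), 2))       # random data: alpha near 0

# Hypothetical standardized loadings for the 11 SE items:
se_loadings = np.array([0.72, 0.75, 0.81, 0.70, 0.78,
                        0.74, 0.77, 0.73, 0.80, 0.76, 0.71])
print(composite_reliability(se_loadings) > 0.7, ave(se_loadings) > 0.5)
```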
Hypotheses H2 (β = 0.362) and H3 (β = 0.472) confirm that students' eLearning interaction is positively related to their SE beliefs and to their satisfaction as captured by course perception after using the interactive eLearning tool.

Hypothesis H4 (β = -0.119) does not support our argument that CP has a positive impact on the transfer of learning. We believe that CP and SE are strongly related in eLearning; therefore, a partial positive contribution from CP to the transfer of learning is not ruled out.

Our hypothesis H5 (β = 0.365) indicates that SE leads to academic success, which is in line with [20]. Therefore, SE has a positive impact on the transfer of learning.

The effect of gender and academic background on GPA performance

As for the effect of gender, the t-test found no significant difference in learning outcomes between male and female students, with p = 0.118, which is greater than 0.05, assuming homogeneity of variance.

Academic discipline has an effect on the performance outcomes. We used one-way between-groups ANOVA testing to confirm the results. Only students using the DBLT were tested, and 349 students were available for the test. The result indicates that the significance value (sig = 0.010) is less than 0.05, indicating that there is a significant difference somewhere among the mean scores for the different discipline backgrounds. Table 3 shows that the only significant differences are between the IT and Engineering disciplines.

We ran the Pearson correlation test on students' attendance against the GPA performance outcomes and found that they are highly correlated (Table 1). We further ran PLS to test the β path coefficient (0.241) of attendance and confirmed that it is significant (p < 0.05) in our study.

Conclusions

The purpose of this study was to investigate the factors influencing the learning effectiveness of interactive eLearning tools in a blended learning environment in higher learning institutions (HLI). We based our study on social learning theory and used an analytical path model to investigate the relationship between students' interaction and various direct and indirect factors. Our results found that eLearning interaction has a direct positive impact on students' GPA performance outcomes. We found that students' eLearning interaction also has a positive relationship with the mediating effects of SE belief and learners' CP. The investigation showed a positive mediation effect of SE on the performance output. Although CP did not support the learning outcome in our investigation, we observed that it has some partial influence. The analysis indicates that attendance plays a positive role in the transfer of learning in a blended learning environment, and gender has no effect on the learning outcomes. Academic discipline has a statistically significant impact on learning outcomes, but only between the IT and Engineering disciplines. This study serves as a good platform for continuous improvement of pedagogical practice from an educational point of view and provides recommendations for peer learners to improve learning outcomes from the student point of view.

Figure 1. Relationship between various factors and GPA performance.

Table 1. Correlation of CP, SE and GPA to eLearning interaction. ** Correlation is significant at the 0.01 level. Number of students N = 145.

Table 3. Comparisons of academic disciplines.
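The three auxiliary tests reported above (gender t-test, discipline ANOVA, attendance correlation) can be sketched with scipy as follows; the arrays are hypothetical stand-ins for the study's data, and only the test choices mirror the text.

```python
# Hedged illustration of the gender t-test, one-way ANOVA on disciplines,
# and Pearson correlation of attendance vs. GPA. All data are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
gpa_female = rng.normal(70, 8, 59)   # hypothetical GPA scores, 59 females
gpa_male = rng.normal(68, 8, 86)     # hypothetical GPA scores, 86 males

# Gender effect: independent-samples t-test assuming equal variances.
t, p_gender = stats.ttest_ind(gpa_female, gpa_male, equal_var=True)

# Academic discipline effect: one-way between-groups ANOVA.
gpa_it, gpa_eng, gpa_sci = (rng.normal(mu, 8, 100) for mu in (72, 66, 69))
f, p_disc = stats.f_oneway(gpa_it, gpa_eng, gpa_sci)

# Attendance vs. GPA: Pearson correlation.
attendance = rng.uniform(0.4, 1.0, 145)
gpa = 60 + 15 * attendance + rng.normal(0, 4, 145)
r, p_att = stats.pearsonr(attendance, gpa)

print(p_gender, p_disc, r, p_att)
```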
3,266.2
2016-02-22T00:00:00.000
[ "Computer Science", "Education" ]
Hierarchical Emulsion-Templated Monoliths (polyHIPEs) as Scaffolds for Covalent Immobilization of P. acidilactici

The immobilized cell fermentation technique (IMCF) has gained immense popularity in recent years due to its capacity to enhance metabolic efficiency, cell stability, and product separation during fermentation. Porous carriers used for cell immobilization facilitate mass transfer and isolate the cells from an adverse external environment, thus accelerating cell growth and metabolism. However, creating a cell-immobilized porous carrier that guarantees both mechanical strength and cell stability remains challenging. Herein, templated by water-in-oil (w/o) high internal phase emulsions (HIPEs), we established a tunable open-cell polymeric P(St-co-GMA) monolith as a scaffold for the efficient immobilization of Pediococcus acidilactici (P. acidilactici). The porous framework's mechanical properties were substantially improved by incorporating the styrene monomer and the cross-linker divinylbenzene (DVB) in the HIPE's external phase, while the epoxy groups on glycidyl methacrylate (GMA) supply anchoring sites for P. acidilactici, securing its immobilization on the inner wall surfaces of the voids. For the fermentation of immobilized P. acidilactici, the polyHIPEs permit efficient mass transfer, which increases with the interconnectivity of the monolith, resulting in an L-lactic acid yield 17% higher than that of suspended cells. The relative L-lactic acid production is maintained above 92.9% of its initial value after 10 cycles, exhibiting both great cycling stability and the durability of the material structure. Furthermore, the recycled-batch procedure also simplifies downstream separation operations.

Introduction

Biorefining research and exploration has consistently been one of the cutting-edge strategies for substituting exhaustible fossil resources in light of the rising environmental challenges of fossil fuel overuse [1-6]. Microbial fermentation, as an essential component of biorefinery engineering, provides sustainable bioenergy or chemicals by using natural biomass resources as ingredients to produce green chemical raw materials, promoting social and economic sustainability and aligning with current carbon-neutrality goals [7,8]. Despite a wide range of applications, the inherent issues of conventional microbial fermentation systems, such as environmental blockage, product inhibition, and cell elution, have always been detrimental to fermentation output and efficiency. Therefore, developing an appropriate delivery system is critical to protect sensitive cells from adverse environments and improve metabolic product yield.

The immobilized cell fermentation technique (IMCF) aims to constrain the cells to a water-insoluble carrier that supports cell growth and metabolism. This IMCF system can be preferable to freely suspended cells since immobilization normally promotes an improvement in metabolic performance, cell stability, and product separation during fermentation, as well as permitting repeated catalysis by the adherent cells [9,10]. In IMCF, it is crucial to establish an applicable carrier or scaffold that facilitates cell growth and the metabolic process, since the surface chemistry of the carrier material can affect the activity and stability of the immobilized cells [11,12].
Diverse carriers have recently been developed for cell immobilization to achieve excellent mass transfer and long-term cell viability. P. S. Panesar et al. established a calcium pectate gel immobilization system to entrap Lactobacillus casei for L(+)-lactic acid production from whey, and the optimized pectate-entrapped bacterial cells achieved a high lactose conversion and good reusability [13]. Although they are the most frequently utilized matrices in IMCF, natural materials such as Ca-alginate and pectin address only mass transfer enhancement and cell biocompatibility. Due to their low mechanical strength and susceptibility to corrosion, natural carriers cannot retain stability against agitation and shear. To provide sufficient mechanical properties, inorganic mesoporous materials have been developed as supports for their advantages of large specific surface area, narrow void distribution, and good resistance against biodegradation [14-16]. Zijian Zhao and colleagues adsorbed Lactobacillus rhamnosus (L. rhamnosus) onto mesoporous silica-based carriers under mild conditions, and the Isi-L. rhamnosus exhibited excellent reusability for lactic acid production due to its stabilization against disruption [17]. However, this attachment approach cannot efficiently capture cells and easily causes leakage of already-immobilized cells throughout the fermentation process due to the weak binding effect. Mechanical strength in highly stable cell-carrier systems can also be enhanced through other strategies, such as synthetic polymers and composite materials. Fabricio dos Santos Belgrano et al. designed a 3D-printed porous nylon bead and modified the microbead with polycationic polyetherimide (PEI) [18]. Such microbeads are very stable against stirring and acidic conditions, and the cell-bound microbeads may be used as inoculants even after extended storage. Nevertheless, the cells could not adhere well to 3D-printed nylon beads lacking PEI; conversely, treatment of the beads with PEI decreased productivity due to the polycation's inhibitory action. It is still challenging to immobilize cells in a polymeric porous carrier that is both mechanically robust and highly active in the culture medium environment.

Recently, high internal phase emulsion (HIPE) templated porous polymeric materials (polyHIPEs) have been built as promising scaffolds for cell immobilization owing to their well-defined and fully interconnected microporous structure [19-22]. A polyHIPE is prepared by solidifying the continuous phase of a HIPE (i.e., an emulsion with its internal phase occupying at least 74.05 vol.%), which provides significant benefits over other porous material fabrication methods due to its exceptional flexibility and controllable structure [23-28]. The liquid flow in the hierarchical void structure of a polyHIPE complies with Murray's law, and the microchannels between adjacent voids not only preserve the structural integrity but also boost the mass transfer efficiency during cellular metabolism [29,30]. Furthermore, more stable cell adhesion may be achieved by surface modification of the void walls, such as functional-group modification or the addition of functional polymeric monomers.
As a 3D scaffold, polyHIPE has demonstrated rapid cell adhesion and excellent biocompatibility for cell growth in IMCF [31-35]; however, in practically applied fermenters, the mechanical qualities of certain polymeric materials, such as polyglycolic acid (PGA), are usually insufficient for the fermentation. The mechanical strength of a polyHIPE can be controlled by both the porous structure and the monomers selected for the continuous phase, which offers great potential for applications in the culture and fermentation of immobilized microbial cells.

In this study, we describe a hierarchically porous polymeric monolith as a scaffold for the efficient immobilization of microbial cells. As a case study, engineered cells of P. acidilactici were immobilized onto a mechanically sound polyHIPE to convert glucose to lactic acid in a recycled-batch fermentation process. Initially, the w/o HIPE was prepared by homogenizing a CaCl2 aqueous solution into an organic solution composed of styrene (St), divinylbenzene (DVB), and glycidyl methacrylate (GMA). Then the HIPE-templated polyHIPE was synthesized by copolymerizing the external-phase monomers, and the obtained P(St-co-GMA) polyHIPE was used to covalently immobilize the cells for fermentation. The engineered microbial cells of P. acidilactici completely and coordinately converted the glucose into L-lactic acid of high titer and high optical purity [36]. The aim of this study was to test the feasibility of overcoming the bioprocess limitations by using a novel polyHIPE scaffold designed and produced as a support for microbial cells. The monomer St was employed to enhance the mechanical properties, and the effects of void size, interconnectivity, and functionalization with GMA on cell adhesion and fermentation were studied.

Preparation and Characterization of the Immobilized polyHIPE Scaffold

Herein, a P(St-co-GMA) polyHIPE monolith was prepared via an emulsion-templating method for microbial cell immobilization. Prior to the templating copolymerization, a w/o HIPE was prepared by homogenizing the oil and water phases with a high internal phase fraction. The polyHIPE was synthesized by curing the external phase of the HIPE template, in which St functioned as the mechanical monomer, whereas GMA was employed as the functional monomer to immobilize cells covalently. Moreover, the incorporation of the cross-linker DVB improves the polyHIPE's structural stability [37]. The polyHIPE was characterized by FT-IR. As shown in Figure S1, the strong absorption peak at 2924 cm−1 is attributed to the C-H stretching vibration of the -CH2- structure in the copolymers, whereas the peak at 1734 cm−1 is associated with the C=O group in GMA. The ring vibrations of the benzene ring appear in the absorption bands at 1601, 1493, and 1452 cm−1, and the bending vibrations of hydrogen atoms on the mono-substituted benzene ring absorb at 1029 and 759 cm−1, respectively. The characteristic peak of the epoxy group of GMA is present at 907 cm−1. The absorption peaks at 797 and 839 cm−1 are indicative of disubstituted benzene rings of DVB at the adjacent and inter-positions, approximately matching the spectra in the literature and demonstrating the successful incorporation of the functional group GMA in the monolith [38]. Since the obtained polyHIPE was to be applied to immobilize cells, its structure was characterized first, as the monolith must be set up to facilitate the movement of cells within the polyHIPE scaffold.
As seen in Figure 1, all generated polyHIPEs have an open-cell void structure, and it can be observed that the void structure of the polyHIPE varies with the emulsifier (Span80) concentration and the internal phase ratio. As summarized in Table 1, the average void size was reduced from 18.9 µm to 11.6 µm as the emulsifier concentration increased from 5 wt.% to 20 wt.%. In addition, as the internal phase fraction increased from 75 vol.% to 90 vol.%, the void size decreased from 16.1 µm to 12.5 µm. This finding reveals that the void size of the polyHIPE can be tuned by appropriately changing the parameters of the HIPE preparation. In IMCF, the pore throats should be larger than the bacteria so that the cells can enter the voids without resistance before being completely immobilized, and the voids should be large enough to host the cells' metabolism and growth activities. It can be seen from Table 1 that the pore throat sizes are at a similar level (3-4 µm), which is sufficient for the entry of P. acidilactici. Moreover, despite the variations, the void sizes of all the given polyHIPEs are highly satisfactory and provide spacious sites for the cells.

When the polyHIPEs are used for cell immobilization and subsequent fermentation, the cells and broth infiltrate the polyHIPEs, and the capillary effect acts on the scaffold [39,40]. Moreover, since the polyHIPE is constantly shaken in the broth throughout the fermentation, it would be eroded by the broth or collide with the container walls, which requires the scaffold to have significant mechanical strength to preserve its intrinsic porous structure throughout the complex fermentation process. The mechanical properties of the polyHIPEs herein were characterized by measuring the compressive strength (Figure 2). Figure 2a depicts a typical polyHIPE stress-strain curve, with a linear region at low strains, a brief stress plateau, and a densification stage with a sharp increase in stress [41]. Attributable to the polystyrene skeletal frame and the stiff cross-linker DVB, the polyHIPE scaffolds exhibit excellent mechanical properties with a high Young's modulus (E). In detail, as shown in Figure 2b, comparing the Young's moduli of PHI-0590 (36.6 MPa), PHI-1090 (32.6 MPa), and PHI-2090 (7.3 MPa), a decrease in Young's modulus is displayed with an increasing amount of Span80 used in the preparation of the emulsion. Moreover, the polyHIPEs with increasing internal phase fraction had an enhanced Young's modulus (E_PHI-1075 = 8.3 MPa, E_PHI-1080 = 26.7 MPa, and E_PHI-1090 = 32.6 MPa). It is worth mentioning that the Young's moduli of the given polyHIPEs in this study exceed those of most scaffolds widely used in cell culture today, such as PVA-based materials (E = 1.88 MPa) and F127-BUM gel (E = 0.67 MPa) [42,43], and this robust mechanical property ensures structural integrity during the subsequent immobilization process and several cycles of fermentation.

The smooth flow of substrate inside the material is very important, i.e., the interconnectivity of the porous material should be sufficient to ensure an adequate mass transfer rate. As shown in Table 1, the interconnectivity of the polyHIPEs increased along with the volume ratio of the internal phase (PHI-1075, PHI-1080, and PHI-1090). Moreover, because the emulsifier can reduce the HIPE droplet size and increase the interconnectivity of porous materials [27,44], the interconnectivity improved as the Span80 content increased (PHI-0590, PHI-1090, and PHI-2090).

Given that the medium is an aqueous solution, the wettability of the material is a crucial prerequisite for effective cell entrance and mass transfer. The water wettability of the polyHIPE was investigated by putting a drop of water onto the sample. As shown in Figure 3, taking PHI-1090 as an example, when roughly 20 µL of water is dropped onto a sample, the water permeates the monolith within 1.5 s. Great water wettability was also exhibited by the other polyHIPEs (PHI-0590, PHI-2090, PHI-1075, and PHI-1080), with infiltration times within 2 s for a drop of water (Figure S2), and the wetting rate increased with the interconnectivity of the polyHIPE. The favorable water permeability ensures smooth mass transfer during subsequent cell immobilization and fermentation.

Immobilization Process

The cell immobilization procedure is divided into two phases. Firstly, P. acidilactici cells were collected by centrifuging the seed broth at 4 °C and dispersed in a PBS buffer (20 mM, pH = 7.4), which aimed to remove the culture medium and limit further cellular growth and proliferation. Then 1 g of sterilized polyHIPE scaffold was placed in the above PBS dispersion of P. acidilactici and shaken at 150 rpm at room temperature for cell immobilization, during which the cells attached to the porous scaffold.
During this process, owing to the high reactivity and plasticity of the epoxy group on GMA, the void walls provided anchoring points for the cells, reacting directly with the -NH2 groups on the cell surface for covalent immobilization [45]. The immobilized P. acidilactici was characterized via SEM-EDS images. Before observation, the immobilized cells were treated with 2.5 wt.% glutaraldehyde as a fixative to freeze the cell morphology. The cells were subsequently dehydrated using an ethanol gradient before the cell-immobilized polyHIPE was freeze-dried. SEM-EDS images of the distribution of cells within the voids are shown in Figure 4a,d. Figure 4a shows that the cells were located on the void walls, which demonstrates that the P. acidilactici was successfully immobilized on the polyHIPE support. The successful immobilization of P. acidilactici was further confirmed by the sulfur EDS mapping, as the sulfur in sulfhydryl groups is only present in cells in this system (Figure 4d).

The quantitative analysis of the immobilized P. acidilactici was performed by determining nitrogen contents with elemental analysis. The nitrogen mass fraction in P. acidilactici was maintained at 11.4 wt.%, which can be used as a reference standard to determine the content of immobilized bacteria; this was calculated to be 0.04 g/g (the immobilization efficiency was 75.3 wt.% relative to the cell weight before immobilization), demonstrating the excellent immobilization characteristics of the polyHIPE. Since the cells are covalently immobilized on the PHI, they are firmly bound to the scaffold and not easily dislodged. In contrast to cells merely adsorbed on a carrier [46], the initial cell amount on the PHI can be kept constant. Moreover, as cells were immobilized on the carrier, the material's hydrophilicity rose, resulting in an increase in water wettability and permeability (Figure S3).

Fermentation Process of Immobilized P. acidilactici

The fermentation of lactic acid by immobilized P. acidilactici was conducted under an anaerobic environment. The distribution of cells in the voids at the end of fermentation can be observed by SEM-EDS mapping (Figure 4b,e), from which it can be seen that the number of cells in the voids multiplied substantially, proving the extraordinary biocompatibility of the polyHIPE scaffold. Then the optimum temperature of the immobilized P. acidilactici fermentation was characterized by the lactic acid production fermented under a series of temperature gradients. As shown in Figure S4, the 1H-NMR spectrum showed double peaks at δ = 1.10-1.11 ppm, which are characteristic of L-lactic acid. Taking PHI-1090 as an example, as shown in Figure 5 and Table S1, in the temperature range of 38-48 °C the L-lactic acid produced by the immobilized cells was higher than that of the suspended cells, owing to the low thermal conductivity of the polyHIPE. In addition, 28.6% of the activity of the immobilized cells was still retained at 40 h, compared with alginate-immobilized cells, which retained 55% after only 4 h [47], demonstrating the excellent temperature stability of the PHI-immobilized cells. The optical purity of the L-lactic acid produced by immobilized P. acidilactici was 99.6 ± 0.1%, which was equivalent to that of the L-lactic acid generated by suspended P. acidilactici, indicating that immobilization had no influence on L-lactic acid generation.

The structural dependency of L-lactic acid production at 42 °C by immobilized cells was evaluated by immobilizing P. acidilactici on carriers of various structures.
As shown in Figure 6 and Table S2, the lactic acid yields of the immobilized cells were all higher than those of the free cells, with a maximum yield increase of 17.6%. Moreover, comparing the L-lactic acid yields from the fermentation of P. acidilactici immobilized on PHI-1075, PHI-1080, and PHI-1090, the lactic acid yield improved as the polyHIPE interconnectivity increased, because higher interconnectivity leads to more efficient mass transfer. This result can also be seen from PHI-0580 and PHI-1090, where the highest yield of lactic acid was obtained after PHI-1090 fermentation; the morphology of the PHI-1090 after undergoing a fermentation process can be seen in Figure 7a,b. However, P. acidilactici immobilized on PHI-2090 had the highest interconnectivity but lower fermentation efficiency, which may be caused by the lower mechanical properties of PHI-2090: the shaking during fermentation may damage the porous structure, resulting in a low lactic acid yield (Figure 7c,d).

The fermentation process occurred when the monolith of P. acidilactici-immobilized PHI-1090 was introduced directly into the fermentation medium at 42 °C to convert glucose to L-lactic acid within 40 h. The cell capacity in the support was measured to be 0.11 g/g after 40 h, which was much higher than the initial content (0.04 g/g), indicating cell growth during fermentation. The glucose and L-lactic acid concentrations during fermentation were monitored. As shown in Figure 8, the glucose was consumed from 50 g/L down to 3.5 g/L and 3.4 g/L for suspended and immobilized P. acidilactici, respectively. In addition, the generated L-lactic acid concentration was 34.0 g/kg, an increase of 17% compared to that of suspended cells (29.1 g/kg), exceeding many reported lactic acid yield enhancements by immobilized cell fermentation (Table S3) and demonstrating that the P. acidilactici cells immobilized on PHI could effectively utilize glucose to produce L-lactic acid [17,48-50]. Interestingly, the L-lactic acid production of the suspended cells was initially higher than that of the immobilized cells, but from 20 h the immobilized cells produced more L-lactic acid than the suspended cells. This phenomenon can be explained as follows. Firstly, cells of P. acidilactici tend to form biofilms on the void walls, which are preferable to the typical biotransformation of suspended cells [51]. On the other hand, during immobilized cell fermentation, as the number of cells rises, the diffusion of oxygen and nutrients inside the voids limits cell growth, resulting in increased L-lactic acid yields owing to the utilization of glucose for L-lactic acid generation [48,52].

Recyclability and Reusability of Immobilized P. acidilactici

One of the greatest advantages of IMCF is that the cells can be reused following a medium change. As shown in Figure 9, the relative yield of lactic acid was consistently maintained above 92.9% over 10 cycles, which is higher than that of free cells (80.8%, Table S4). In addition, the immobilized P. acidilactici displays a higher lactic acid yield in each cycle compared to suspended cells, reflecting that the immobilized P. acidilactici herein is highly recyclable and reusable. The cell content in the polyHIPE support was 0.17 g/g, measured by an elemental analyzer after 10 batch fermentations. This reveals that the fermentation process was accompanied by significant cell proliferation in the voids and effective mass transfer during the fermentation process. Moreover, corresponding to Figure 4c, the physical resistance and the persistence of the immobilized P. acidilactici after multiple batches were confirmed by SEM-EDS. During the recycling batches, the enhanced number of cells in the polyHIPE can be seen in Figure 4f, which was primarily responsible for the synthesis of lactic acid [53]. Both the macro-morphology of the polyHIPE and the microstructure of the hierarchical voids are relatively well maintained under continuous vigorous shaking during repeated fermentation, indicating the excellent mechanical properties and thus excellent cell retention of the polyHIPE (Figure S5). Furthermore, the P. acidilactici-immobilized PHI-1090 was placed in a fermenter for scaled-up fermentation (Figure S6) and showed equal fermentation cyclability.

Preparation of polyHIPE Scaffold

St (1.8 g), GMA (0.2 g), DVB (0.9 g), and Span80 (0.39 g, 10 wt.% relative to monomers) were added to a 100 mL beaker and stirred until well mixed as the oil phase, in which the initiator AIBN (0.12 mmol) was then dissolved. Then, 10.4 mL of deionized water (80 vol.%) with CaCl2 was added to the oil phase at room temperature, and the mixture was homogenized at 400 rpm. The resulting emulsion was transferred into a 10 mm × 10 mm × 15 mm PTFE mold and cured for 10 h at 70 °C. The prepared polyHIPE was washed with ethanol and then deionized water to remove unreacted monomers and leftover Span80. The given monolith, named PHI-1080, was dried and then sterilized in an autoclave for 20 min at 121 °C. In order to investigate different polyHIPE carrier structures, other polyHIPEs prepared with different amounts of Span80 and aqueous phase fractions were prepared in the same way, as listed in Table 1.

Immobilization Process

The seed culture solution was centrifuged in a cryogenic centrifuge (4 °C) at a speed of 4000 rpm. The separated P. acidilactici (67.0 mg) was washed with PBS buffer before being dispersed in 50 mL of PBS buffer. Then 1.0 g of the carrier was added to the cell dispersion in PBS solution and shaken at 150 rpm for 24 h at room temperature. After immobilization, the obtained cell-immobilized carrier was rinsed with sterilized deionized water three times to remove the suspended cells.

Fermentation Process of Immobilized P. acidilactici

The optimum temperature of immobilized P.
acidilactici for L-lactic acid fermentation under an anaerobic environment was first determined. A total of 1 g of P. acidilactici-immobilized polyHIPE was placed in the fermentation medium in a 250 mL conical flask with 50 mL of simplified MRS medium as fresh fermentation broth containing 50 g/L glucose, and shaken at 150 rpm for 40 h at 38, 40, 42, 44, 46, and 48 °C, respectively; the L-lactic acid concentration was measured by 1H-NMR. The immobilized-cell polyHIPEs were then used for fermentation at the optimum temperature to further find the optimum fermentation parameters. PolyHIPEs with varying structures were obtained by adjusting the content of Span80 and the volume fraction of the HIPEs' internal phase (see Table 1 for details on these parameters) and were used to immobilize the cells and subsequently conduct fermentation to find the highest lactic acid yield. The P. acidilactici-immobilized polyHIPE with the optimum parameters was utilized for fermentation, and the fermentation process was monitored for L-lactic acid yield and glucose consumption by withdrawing 500 µL of the broth every 4 h. Other fermentation conditions remained the same as before. The free-cell fermentation as a control experiment was conducted as follows. Firstly, 5 mL (10 vol.%) of seed medium was withdrawn and added to 50 mL of fermentation medium containing 50 g/L glucose, and the fermentation was conducted by shaking at 150 rpm for 40 h. A total of 500 µL of the broth was withdrawn every 4 h, and the substrate and product contents were measured.

Recyclability and Reusability of Immobilized P. acidilactici

The recyclability of the P. acidilactici-loaded polyHIPE was investigated via repeat-batch fermentation for 400 h (10 batch cycles) at 42 °C. Following the completion of each batch (i.e., 40 h), the fermentation broth was removed and substituted with freshly prepared medium. The collected immobilized P. acidilactici polyHIPEs were rinsed in sterilized deionized water three times (20 min each time) between batches and then re-introduced into 50 mL of fresh fermentation medium for the next fermentation loop. The free-cell reusability test was conducted by withdrawing 10 vol.% of the fermentation broth of each batch and transferring it into fresh medium for the next fermentation cycle.

Characterization

The FT-IR spectrum of the P(St-co-GMA)-based polyHIPE scaffold was recorded using an FTIR spectrometer (Nicolet 6700). The microstructures of the polyHIPEs and cell-immobilized polyHIPEs were observed by scanning electron microscopy (SEM, S-4800, Hitachi, Japan). In order to estimate the mean size of the voids and interconnected pores, 100 voids in the SEM images were measured using the software ImageJ. For the cell observation, the cell-immobilized polyHIPE was soaked in 2.5 wt.% glutaraldehyde for 40 min at 35 °C, and then the sample was transferred and submerged for 15 min in a gradient concentration of ethanol. The interconnectivity (I) of the polyHIPE was calculated according to Equation (1), where n is the average number of interconnected pores per void, and D and d are the average diameters of the voids and interconnected pores, respectively. For the mechanical strength test, the compression moduli of the polyHIPEs were measured as follows: a cube sample of about 10 mm × 10 mm × 10 mm was made and compressed with a WDW-50E universal testing machine (Sansi, Shenzhen, China) at a constant rate of 1.0 mm/min until half the height of the sample was reached. The Young's modulus (E) was computed from the slope of the initial linear region of the stress-strain curve and averaged over three compressed samples.
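As a minimal sketch (an assumption-laden illustration, not the authors' analysis code) of extracting Young's modulus from the initial linear region of the compression stress-strain curve just described, the following Python snippet fits the slope below an assumed 5% strain cutoff and averages over three replicate cubes; the curves themselves are hypothetical.

```python
# Young's modulus from the linear region of a compression stress-strain curve.
# The 5% strain cutoff for the linear region is an assumed choice.
import numpy as np

def youngs_modulus(strain: np.ndarray, stress_mpa: np.ndarray,
                   linear_limit: float = 0.05) -> float:
    """Least-squares slope of stress vs. strain below the linear limit."""
    mask = strain <= linear_limit
    slope, _ = np.polyfit(strain[mask], stress_mpa[mask], 1)
    return float(slope)  # in MPa, since stress is in MPa and strain is unitless

# Hypothetical data for three replicate cubes; E is averaged as in the text.
curves = [(np.linspace(0, 0.5, 200), 30.0 * np.linspace(0, 0.5, 200))
          for _ in range(3)]
E = np.mean([youngs_modulus(s, sig) for s, sig in curves])
print(E)  # ~30 MPa for this synthetic example
```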
The material's wettability was characterized using a contact angle goniometer (JC2000C Contact Angle Meter, Powereach Co., Shanghai, China). A drop of water (about 20 µL) was dripped onto the sample. The contact angle was determined by averaging the results of five measurements taken at various locations on the same sample. The qualitative analysis of immobilized cells was performed with an energy dispersive spectrometer (EDS, QUANTAX 400-30), which evaluates the location of microbial cells within the carrier by analyzing the sulfur distribution of the material's micro-regions. The quantitative analysis of the immobilized cell content in the polyHIPE was determined by an elemental analyzer (Elementar Vario EL Cube). Simultaneously, quantitative analysis was conducted on a control sample of polyHIPE carrier without cells. The cell content within the support is determined by calibrating the pure cells before immobilization for elemental nitrogen content, and the immobilized capacity (g/g) was calculated according to Equation (2):

Immobilized capacity (g/g) = (ω_im − ω_polyHIPE) / ω_c    (2)

where ω_im (wt.%) and ω_c (wt.%) are the mass fractions of elemental nitrogen of the cell-immobilized polyHIPE and the pure cells, respectively, and ω_polyHIPE (wt.%) is the mass fraction of elemental nitrogen in the polyHIPE without cells, used to correct for testing errors of the elemental analysis instrument.

The Megazyme D-/L-Lactic Acid Kit (Megazyme International Ireland, Bray, Wicklow, Ireland) was used to analyse the chiral purity of the L-lactic acid. The reducing sugar (glucose) was determined using the 3,5-dinitrosalicylic acid (DNS) assay [55-57]. The standard curve was obtained as follows: take 0 mL, 0.2 mL, 0.4 mL, 0.6 mL, 0.8 mL, and 1.0 mL of glucose standard solution (1.0 mg·mL−1) in 15 mL tubes, fill to 1.0 mL with distilled water, and add 2.0 mL of DNS reagent, respectively. The solutions were heated in a boiling water bath for 2 min, cooled with running water, and made up with water to the 15 mL mark. The absorbance was measured at 540 nm using a UV-5800 spectrometer (Shanghai Metash Instruments Co., Ltd., Shanghai, China). For sample determination, the sample was diluted appropriately to bring the sugar concentration within the range of 0.2-0.6 mg·mL−1. A total of 1.0 mL of the diluted sugar solution was added to a 15 mL tube together with 2.0 mL of DNS reagent and boiled for 2 min. The mixture was cooled, made up with water to the 15 mL mark, and its absorbance was measured. The glucose content was obtained from the standard curve. The product L-lactic acid was determined by 1H-NMR (AVANCE III 500 MHz spectrometer, Bruker, Germany). A total of 1.0 mL of the post-fermentation mixture was taken and centrifuged at 10,000 rpm. A total of 500 µL of the supernatant was withdrawn and mixed with 100 µL of the internal standard solvent dimethyl sulfoxide-d6 (DMSO-d6) for the 1H-NMR test.

Conclusions

In this study, we have designed a mechanically robust hierarchical monolith as a cell scaffold for the immobilization and recycled-batch fermentation of P. acidilactici to generate L-lactic acid. First, highly ordered and tunable P(St-co-GMA) porous copolymers were synthesized to immobilize P. acidilactici using w/o high internal phase emulsions as templates, where GMA provided anchor points for the cells to be firmly tethered to the void walls. During the fermentation of immobilized P. acidilactici, the open-cell structure of the polyHIPE permitted efficient substrate transfer, and the mass transfer increased along with the enhanced interconnectivity of the monolith, resulting in higher L-lactic acid yield. In cyclic fermentation studies, the immobilized P. acidilactici maintained over 92.9% of its initial relative L-lactic acid production after 10 cycles, exhibiting both great cycling stability and the durability of the material structure. The polyHIPE scaffold in this work offers tremendous potential for multiple-cell co-immobilization and cascade fermentation.
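For concreteness, the two quantification steps described in the Characterization section can be sketched in Python as follows; this is a hedged illustration, not the authors' code, and all numeric inputs besides the reported 11.4 wt.% cell nitrogen content are hypothetical.

```python
# Immobilized cell capacity per Equation (2) and glucose from a DNS
# standard curve. Inputs other than the 11.4 wt.% reference are hypothetical.
import numpy as np

def immobilized_capacity(w_im: float, w_polyhipe: float, w_cells: float) -> float:
    """Equation (2): capacity (g cells / g carrier) from nitrogen mass fractions (wt.%)."""
    return (w_im - w_polyhipe) / w_cells

print(immobilized_capacity(w_im=0.50, w_polyhipe=0.04, w_cells=11.4))  # ~0.04 g/g

# DNS assay: fit the absorbance-vs-glucose standard curve, then invert it.
glucose_mg = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])   # standards, mg per tube
a540 = np.array([0.00, 0.15, 0.31, 0.46, 0.60, 0.76])    # hypothetical A540 readings
slope, intercept = np.polyfit(glucose_mg, a540, 1)

sample_a540 = 0.40
sample_glucose = (sample_a540 - intercept) / slope        # mg in the assayed aliquot
print(sample_glucose)
```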
9,783.2
2023-04-01T00:00:00.000
[ "Biology", "Engineering" ]
An effective method for solving the multiple travelling salesman problem based on NSGA-II

Abstract

In this paper, an effective multi-objective evolutionary algorithm is proposed to solve the multiple travelling salesman problem. In order to obtain the minimum total visited distance and the minimum range between all salesmen, novel representation, crossover and mutation operators are designed to enhance the local and global search behaviours; the NSGA-II framework is then applied to find well-converged and well-diversified non-dominated solutions. The proposed algorithm is compared with several state-of-the-art approaches, and the comparison results show that the proposed algorithm is effective and efficient in solving multiple travelling salesman problems.

Introduction

The Multiple Travelling Salesman Problem (MTSP) is an extension of the famous Travelling Salesman Problem (TSP) of visiting each city exactly once with no sub-tours (Gerhard, 1994). MTSP involves assigning m salesmen to n cities, and each city must be visited by exactly one salesman while requiring a minimum total cost. Many real-life problems can be modelled as MTSP, such as the printing press scheduling problem (Gorenstein, 1970), the distribution of emergency materials problem (Liu & Zhang, 2014), the vehicle routing problem (Angel, Caudle, Noonan, & Whinston, 1972), the UAV planning problem (Ann, Kim, & Ahn, 2015) and the hot rolling scheduling problem (Tang, Liu, Rong, & Yang, 2000). As an NP-hard problem, MTSP has always been valued by researchers. Currently, evolutionary computation is the main method for solving the MTSP. In the application of genetic algorithms, researchers have made a series of improvements to the representation of chromosomes to reduce redundant solutions and improve search efficiency. The one-chromosome technique was proposed and applied to hot rolling scheduling problems (Tang et al., 2000). The two-chromosome technique was proposed and used to solve the Vehicle Scheduling Problem (Malmborg, 1996; Park, 2001). The two-part chromosome technique was proposed, which can effectively reduce the algorithm's search space (Carter & Ragsdale, 2005). An estimation of distribution algorithm (EDA) with a gradient search has been used to solve the MOmTSP, whose objective function is set as the weighted sum of the total travelling costs of all salesmen and the highest travelling cost of any single salesman (Shim, Tan, & Tan, 2012), in which the authors consider minimizing the longest cost to balance the workload between salesmen. An effective evolutionary algorithm, reinforced by a post-optimization procedure based on path-relinking (PR), has been used to deal with a bi-objective multiple travelling salesman problem with profits (Labadie, Melechovsky, & Prins, 2014). Meanwhile, other evolutionary algorithms based on swarm intelligence have been used to solve the MTSP. To minimize the number of mobile nodes in a desired wireless sensor network, the route plan for all mobile nodes was modelled as a multiple travelling salesman problem and solved with a PSO algorithm (Zhang, Chen, Cheng, & Fang, 2011). A new accelerated particle swarm optimization was constructed to solve the MTSP (Qing, Kang, & Marine, 2015), which can effectively overcome premature convergence. Ant colony optimization (ACO) has been applied to the MTSP with ability constraints, and the results are encouraging (Pan & Wang, 2006).
A multi-objective EA which combines ACO and the multi-objective evolutionary algorithm based on decomposition (MOEA/D) has been proposed to solve the MTSP (Ke, Zhang, & Battiti, 2013). Several multi-objective ACSs have been proposed to tackle the MTSP from a bi-criteria perspective, aiming to minimize the total cost of the travelled subtours while achieving balanced subtours (Necula, Breaban, & Raschip, 2015). However, in the research on MTSP, most problems are treated as single-objective, or the two perspectives are considered separately. For the latter, two objective functions are usually used: minimizing the total distance travelled and minimizing the travel distance of the longest route. The second objective usually serves as a condition for balancing the subtours, which relates to practical concerns such as the workload among salesmen and the service time for each customer. However, reducing only the total distance will result in a high imbalance between the subtours: one salesman covers most of the journey, while the other salesmen only pass through a city near the depot. If we only consider the balance between subtours, this will often make salesmen travel unnecessary distances. Therefore, optimizing the total distance and the balance between subtours are two conflicting goals, which means we cannot consider them separately. In this paper, the difference between the longest route and the shortest route is used to balance the workload between salesmen, and the total distance of all salesmen is taken as the total cost. We then use the NSGA-II framework to optimize both targets simultaneously, and improve the crossover and mutation operators to obtain a better non-dominated frontier. Comparison with the results in the existing literature shows the feasibility of the algorithm, and the effect of our algorithm is encouraging.

Mathematical model

Based on the number of depots, the MTSP can be divided into the single-depot multiple TSP (SD-MTSP) and the multi-depot multiple TSP (MD-MTSP); in the former the salesmen start from the same depot, while in the latter the salesmen start from different depots. This paper mainly explores the bi-objective SD-MTSP, where the two objectives are conflicting. Usually, the problem can be described as follows: there is a set of n cities and m salesmen, expressed as C = {i}, i = 0, 1, 2, ..., n, and V = {k}, k = 1, 2, ..., m. Each salesman departs from the same depot, takes a tour route and returns to the original starting city. Here we use c_{ij} for the distance between cities i and j, and x_{ijk} ∈ {0, 1} to indicate whether salesman k travels from city i to city j. Each city will be visited exactly once (except the starting point). Ideally, the total travel distance is minimized while the travel distances of the salesmen are as close to each other as possible. The mathematical model is as follows:

\min F = (f_1, f_2)    (1)

f_1 = \sum_{k=1}^{m} \sum_{i=0}^{n} \sum_{j=0}^{n} c_{ij} x_{ijk}    (2)

f_2 = \max_{1 \le k \le m} \sum_{i=0}^{n} \sum_{j=0}^{n} c_{ij} x_{ijk} - \min_{1 \le k \le m} \sum_{i=0}^{n} \sum_{j=0}^{n} c_{ij} x_{ijk}    (3)

Equations (2) and (3) respectively represent the two objective functions: the total distance of all salesmen and the difference between the longest route and the shortest route. What we need to do is to minimize both f_1 and f_2. Equation (5) represents the constraint that all salesmen start from the same starting city 1. It is required that, except for the starting city, each city is passed through by one and only one salesman, and all salesmen return to their starting city. The final solution does not generate subtours.
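To make the two objectives concrete, here is a minimal Python sketch (an illustrative reading of Equations (2) and (3), not the authors' code) that evaluates f_1 and f_2 for a candidate assignment of cities to salesmen; the coordinates and routes are hypothetical, loosely mimicking the 10-city example used later in Figure 2.

```python
# f1: total distance of all subtours; f2: longest minus shortest subtour.
# Each route lists city indices, excluding the shared depot (city 0).
import math

def tour_length(route, coords, depot=0):
    """Closed-tour length: depot -> route cities in order -> depot."""
    path = [depot] + list(route) + [depot]
    return sum(math.dist(coords[a], coords[b]) for a, b in zip(path, path[1:]))

def objectives(routes, coords):
    lengths = [tour_length(r, coords) for r in routes]
    f1 = sum(lengths)                 # total travelled distance, Eq. (2)
    f2 = max(lengths) - min(lengths)  # longest minus shortest subtour, Eq. (3)
    return f1, f2

# Hypothetical coordinates for cities 0..9 (city 0 is the depot):
coords = [(0, 0), (1, 2), (2, 2), (1, 1), (-1, 1), (-2, -1),
          (-1, -2), (-2, 1), (-2, 0), (-2, -2)]
routes = [[3, 1, 2], [4, 7, 9, 6], [8, 5]]   # the three tours of Figure 2
print(objectives(routes, coords))
```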
Chromosome representations To reduce the redundancy of the solution space, we use the two-part chromosome technique to encode the problem, with some modifications made for convenience. Figure 1 shows a chromosome representing an MTSP solution (where n = 10 and m = 3), in which the n cities are represented by natural numbers between 0 and n−1, and 0 represents the central city. The chromosome consists of two parts: the first part is a permutation of n−1 natural numbers; the second part holds m−1 break points, which divide the first part into m groups, each group being one salesman's tour. To display the routes encoded by the chromosome more intuitively, assume that the relative positions of the 10 cities are as shown on the left of Figure 2; the solution corresponding to the chromosome in Figure 1 is then as shown on the right of Figure 2. From this, the tours of the three salesmen can be read off: 0 → 3 → 1 → 2 → 0; 0 → 4 → 7 → 9 → 6 → 0; 0 → 8 → 5 → 0. We write P1 for 31247968537 and P2 for 031204796085, and the transformation of the chromosome from P1 to P2 is called decoding. Crossover operator The crossover operator in a genetic algorithm is designed to simulate the crossing-over of chromosomes in nature, and a good crossover operator is critical to the algorithm's local search capability. For the natural-number encoding of the TSP, Partially Mapped Crossover, Cycle Crossover and Order Crossover have been proposed and are widely used (Goldberg & Lingle, 1985; Oliver, Smith, & Holland, 1987), but in practice their convergence is slow and the results are not ideal. A crossover operator named HGA for the real-coded travelling salesman problem has been proposed and performs well in practical applications (Tang, 1999). In a two-objective problem that minimizes two functions simultaneously, if the crossover operator has a strong local search ability on one objective, the evolution direction of the population can be controlled through the objective functions so that the Pareto front moves as close as possible to the coordinate origin. This is the main idea behind the crossover operator designed in this paper. The crossover operator used here is an improvement of HGA adapted to the MTSP; its main procedure is shown in Algorithm 1. In Algorithm 1, PA and PB represent the two parents, and children are generated in different search directions according to the input MARK value. Suppose the input PA is 123456789 and k is set to 5. When the value of MARK is "latter", the search direction is from front to back in PA, and the city following k in PA is found by the latterCity() function, i.e. 6. When the value of MARK is "former", the search direction is from back to front in PA, and the city preceding k in PA is found by the formerCity() function, i.e. 3. The same applies to PB. The role of distanceOfTwoCity() is to return the Euclidean distance between the cities given by its two parameters. Through experiments, we found that the derived children perform differently on the two objective functions depending on the inputs PA and PB. Under the chromosome representation of this paper, if part 1 of the parent chromosomes is taken as input, most of the resulting solutions have a better balance between salesmen but a worse total distance.
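Since the crossover below also operates on decoded parents, the decoding step just described (P1 → P2) is worth pinning down. The following is a minimal Python sketch under the representation of Figure 1; the assumption that the break points in part 2 are cumulative positions in part 1 is ours, inferred from the worked example.

```python
def decode(part1, breaks, depot=0):
    """Insert the depot in front of each salesman's group of cities.

    For part1 = [3, 1, 2, 4, 7, 9, 6, 8, 5] and breaks = [3, 7], the groups
    are [3,1,2], [4,7,9,6], [8,5], and the decoded chromosome is
    [0, 3, 1, 2, 0, 4, 7, 9, 6, 0, 8, 5] -- i.e. P1 -> P2 from the text.
    """
    bounds = [0] + list(breaks) + [len(part1)]
    decoded = []
    for lo, hi in zip(bounds, bounds[1:]):
        decoded.append(depot)          # each subtour starts at the depot
        decoded.extend(part1[lo:hi])
    return decoded
```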
If instead the decoded parents are taken as input and the result is finally rationalized, most of the resulting solutions perform better in total distance but are less balanced. The reason for this phenomenon is that HGA-based crossover operators have a certain bias towards the goal of minimizing the total distance, and this bias is stronger when decoded chromosomes are used as input. So, to obtain a well-diversified set of solutions, we combine the above two methods into the final crossover operation, called combined HGA, which generates descendants in two different ways. As shown in Figure 3, part 1 of the parents is used as the input to Algorithm 1, the output is taken as part 1 of child1, and part 2 of one of the parents is taken as part 2 of child1. For child2, the decoded parents are used as the input to Algorithm 1, and the output is then rationalized by removing adjacent zeros and randomly generating new break points. The whole process is shown in Figure 4. Mutation operator The main function of the mutation operator is to prevent the search process from falling into a local optimum. Two mutation operators are designed for the chromosome encoding and are applied with specific probabilities during the algorithm, so as to obtain more chromosomal variants and combinations and give the search a greater probability of jumping out of local optima. Method 1: In part 1, two positions i and j (i < j) are randomly generated, and the order of the genes between positions i and j + 1 is inverted. In part 2, m−1 new split points are randomly generated, as shown in Figure 5. Method 2: In part 1, two positions i and j (i < j) are randomly generated; these two points divide the chromosome into three segments 1, 2 and 3, and the new chromosome is assembled in the order 2, 1, 3. In part 2, m−1 new split points are randomly generated, as shown in Figure 6. Overall algorithm Within the framework of NSGA-II (Deb, Pratap, Agarwal, & Meyarivan, 2002), the process of producing new populations is shown in Figure 7. Two parts of this process involve selection operators. The first is the generation of Q_t by the genetic algorithm, using binary tournament selection: a random arrangement of the population is generated and the better of each pair of adjacent individuals is selected, where the comparison criterion is the non-domination level followed by the crowding distance. Executing this procedure twice yields an offspring population of the same size as the original population. The second is the generation of P_{t+1}: P_t and Q_t are combined and non-dominated sorting produces the front sets R = {r_1, r_2, r_3, ...}; the fronts are merged into P_{t+1} in order until |P_{t+1} ∪ r_{i+1}| would exceed |P_t|; the crowding distance is then calculated for the individuals in r_{i+1}, they are sorted by it, and individuals are added to P_{t+1} until |P_{t+1}| equals |P_t|. To show the specific steps of our algorithm, we present detailed pseudo-code. In Algorithm 2, P_0 is a random initial parent population, N is the number of iterations and MP is the probability of mutation.
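As a concrete illustration of the two mutation methods described above, here is a minimal Python sketch; the helper random_breaks() and the exact index conventions are our assumptions, kept consistent with Figures 5 and 6.

```python
import random

def random_breaks(n, m):
    """m-1 sorted break points splitting n cities into m non-empty groups."""
    return sorted(random.sample(range(1, n), m - 1))

def mutate_method1(part1, m):
    """Method 1: invert the genes between random positions i and j (inclusive),
    then regenerate the m-1 split points of part 2 (Figure 5)."""
    p = part1[:]
    i, j = sorted(random.sample(range(len(p)), 2))
    p[i:j + 1] = reversed(p[i:j + 1])
    return p, random_breaks(len(p), m)

def mutate_method2(part1, m):
    """Method 2: positions i and j split part 1 into segments 1|2|3,
    which are reassembled in the order 2, 1, 3 (Figure 6)."""
    p = part1[:]
    i, j = sorted(random.sample(range(len(p)), 2))
    p = p[i:j] + p[:i] + p[j:]
    return p, random_breaks(len(p), m)
```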
In the i-th iteration, the population P_i generates a new population Q_i through the binaryTournamentSelection(), crossover() and mutation operations; these three genetic-algorithm operations have been described in detail above. fastNonDominatedSort() and crowdingDistanceAssignment() are two important operations in NSGA-II. In the first, a combined population R_i that includes all parent and offspring members is sorted by non-domination, which ensures the convergence of the algorithm. In the second, the crowding distance is used to measure the fitness of solutions within the same non-dominated set, which maintains the diversity of the algorithm. Algorithm 2: The whole process of our algorithm. Input: P_0, N, MP. Results and discussion For the multi-objective MTSP there are currently no benchmark instances, such as those in TSPLIB, that can be used directly. Most existing work is done on TSP benchmarks in which the first city is set as the depot and the number of salesmen is predetermined. In Necula et al. (2015), the authors selected four instances based on the location and distance of the central city relative to the other cities, and simultaneously optimized the total distance and the "amplitude" of the subtours. To verify the feasibility of our algorithm, this paper conducts experiments on the same instances, as shown in Table 1. Table 1. Test instances (instance, number of cities, number of salesmen): eil51, 51, 7; berlin52, 52, 5; berlin52, 52, 7; eil76, 76, 3; eil76, 76, 7; rat99, 99, 7. The proposed algorithm has been implemented in JavaScript on an Intel Core i5-7500 CPU at 3.40 GHz with 8 GB of RAM. The population size is set to 100 and the mutation probability to 0.05. Because our algorithm is based on a random arrangement of the population and binary tournament selection is used, there is no need to set a crossover rate. In order to compare with existing experimental results, we run the algorithm 10 times on each instance, perform non-dominated sorting on the 10 resulting solution sets, and take the non-dominated front as the final solution. For the 51- and 52-city instances we set the number of iterations to 1400; for the 76- and 99-city instances we set it to 1800 and 2200, respectively. To show the effect of the improvements, we also present another set of experimental data in which the crossover operator is changed as follows: PMX is applied to part 1 of the parents to generate part 1 of the two children, part 2 of child1 retains the part 2 of one of the parents, and part 2 of child2 is randomly generated; the rest of the algorithm remains unchanged. We refer to this variant as PMX+X below. In Necula et al. (2015), three multi-objective ACO-based algorithms and a single-objective ACO algorithm with the objective of minimizing the longest tour were investigated. P-ACO is an algorithm with multiple pheromone trail matrices and a single heuristic matrix; conversely, MACS uses a single pheromone trail matrix and several heuristic matrices. The third is a multi-objective ant colony optimization based on decomposition in combination with ACS, called MoACO/D-ACS. g-MinMaxACS treats the two-objective problem in a compacted manner by resorting to a Min-Max approach. Figure 8. Non-dominated fronts for eil51-m7 and berlin52-m7 instances. Figure 9. Non-dominated fronts for berlin52-m5 and eil76-m7 instances.
Finally, results of CPLEX MinMax SD-MTSP are also given in the literature, obtained with the CPLEX optimizer by optimizing the longest subtour. We plot our own experimental results on the same axes as the non-dominated solution sets obtained by the above methods. In all of the following figures, the horizontal axis represents the total distance of the tours and the vertical axis the difference between the longest and the shortest subtour. In Figure 8, the non-dominated front obtained by combined HGA is clearly better than the one obtained by PMX+X: the former solutions tend to have a smaller total distance and a lower difference. Compared with the other four methods, the solutions from our algorithm dominate most of the solutions obtained by the other methods, especially on the instance eil51-m7. On berlin52-m7, our results show a clear advantage in minimizing the total cost, but are less competitive in minimizing the imbalance. As shown for berlin52-m5, combined HGA still performs better than PMX+X at the same number of iterations. On eil76-m7, the PMX+X variant performs relatively poorly, so we do not show it on the chart. For both instances in Figure 9, it is obvious that our algorithm obtains a better non-dominated front. In Figure 10, our algorithm also obtains a better non-dominated front; in particular, on the rat99-m7 instance our non-dominated solution set shows good diversity. Since our crossover operator is based on the shortest distance, our results do not perform well in the extreme case of minimizing the imbalance on some instances. In general, the non-dominated front derived from our algorithm is closer to the origin than the non-dominated solution sets obtained by any of the other four methods, and at the same time the obtained results are of good diversity and provide decision makers with more choices. Conclusion This paper proposed an effective method based on NSGA-II to solve the multiple travelling salesman problem. The total travel distance and the difference between the longest and the shortest subtour are the two conflicting objectives. A novel crossover operator is designed to generate new offspring, and two mutation operators are proposed so that the search process can jump out of local optima. Our algorithm is tested on several MTSP benchmark instances and compared with several state-of-the-art approaches; the comparison results show the effectiveness and efficiency of our algorithm. Disclosure statement No potential conflict of interest was reported by the authors. Funding This work was supported by the National Natural Science Foundation of China (Grant No. 61472293).
Realization of sextuple polarization states and interstate switching in antiferroelectric CuInP2S6 Realization of higher-order multistates with mutual interstate switching in ferroelectric materials is a perpetual drive for high-density storage devices and beyond-Moore technologies. Here we demonstrate experimentally that antiferroelectric van der Waals CuInP2S6 films can be controllably stabilized into double, quadruple, and sextuple polarization states, and a system harboring polarization order of six is also reversibly tunable into order of four or two. Furthermore, for a given polarization order, mutual interstate switching can be achieved via moderate electric field modulation. First-principles studies of CuInP2S6 multilayers help to reveal that the double, quadruple, and sextuple states are attributable to the existence of respective single, double, and triple ferroelectric domains with antiferroelectric interdomain coupling and Cu ion migration. These findings offer appealing platforms for developing multistate ferroelectric devices, while the underlying mechanism is transferable to other non-volatile material systems. 4. It seems that the authors used ML models to explain quadruple and sextuple polarization states in pure CIPS (the reference is not available for the reviewer). However, the authors used DBs (~10 MLs) to explain these multiple polarization states here. This change has not been properly justified. In my opinion, this is a significant weakness of the present work. 5. The discussion on the DBs and their boundary structures and dynamics is absent. It would be insightful if the authors can discuss these issues. Just curious, are the DB boundaries associated with defects? 6. What are the energy barriers between these different states? Can the detection voltages or thermal fluctuations affect the state stability? 7. The authors speculated that the quadruple and sextuple states were possibly due to the nonuniform electric field under the tip. If this is the case, the tip size plays a crucial role in the distribution of the electric field under the tip. But the impact of the tip size was not discussed. If the tip radius decreases from the current 30 nm to, for example, 5 nm, what are the consequences? Response: We apologize for the likely confusion between multiple resistive states in the field of ferroelectric synapses and the multiple polarization states stressed in the present manuscript, and appreciate this opportunity to better elucidate the distinction between the two. In essence, the multiple states highlighted in the context of ferroelectric synapses are resistive states rather than intrinsic multiple polarization states. The key distinction is that the former case is evidenced by the characteristic double-state ferroelectric hysteresis loops of those ferroelectric synapses [1~5]. Achieving resistive multistates involves adjusting the areal ratio of P_up and P_down domains in a ferroelectric film or stacking multiple layers of ferroelectric films separated by dielectrics, which can occur in many systems with two intrinsic polarization states, as exemplified in Fig. R1 [1]. In contrast, in our CIPS systems, we demonstrated six-fold reversals of polarization directions with six corresponding coercive voltages observed in a single hysteresis loop, which directly attests to the existence of sextuple polarization states. Besides, we observe double-state hysteresis loops in CIPS films that originally demonstrated sextuple states, as well as quadruple-state hysteresis loops from films that initially demonstrated sextuple states; four is the maximum number of intrinsic polarization states experimentally reported so far in the literature, for CIPS (Ref. 20), 3R-MoS2 (Ref. 23), and non-vdW perovskites (Refs. 13~15). Thus, the intrinsic ferroelectric multistates observed in our CIPS samples are fundamentally different from the resistive multistates observed in ferroelectric synapses. Our reported CIPS, featuring sextuple polarization states, also stands as the system with the maximum number of intrinsic polarization states reported so far. More interestingly, the number of switchable polarization states can be reversibly transformed from sextuple to quadruple to double, which has never been reported in any other ferroelectric system so far. Given this improved clarification, we hope the Reviewer would agree that the present work indeed contains substantial new findings to warrant its publication in Nature Communications. To avoid potential future confusion, we have also added the following sentences on page 8 of the revised manuscript: "Here we note that the multiple polarization states as established here are fundamentally different from the multiple resistive states reported in previous studies of ferroelectric synapses 6,8,42-44, as distinctly contrasted by the observations of mutually convertible six/four/two polarization states in a single hysteresis loop in the former case, and only two polarization states in the latter case". Comment 1: The authors mentioned that it is possible to tune polarization orders (Fig. 3 and Fig. S9). However, the hysteresis loops appear to differ between the original (Fig. 3a) and recovered (Fig. 3e) sextuple states. The quadruple cases (Fig. 3b vs. Fig. 3d; Fig. S9a vs. Fig. S9c) also exhibit this behavior. I thus doubt whether the polarization states belonging to the initial and subsequent polarization orders are exactly the same ones.
Response: In principle, ferroelectric polarizations can be switched reversibly into precise states, resulting in perfectly symmetric and reproducible hysteresis loops for ideal material systems. In practice, however, real samples include various unintended defects, "such as local variations in stoichiometry, vacancies, stacking faults, layer thickness variations, surface defects, and contaminants", as itemized in the questions raised by Reviewer 2 (questions 1~3). Because of those defects, polarization switching details such as the coercive voltage and polarization value can vary among multiple hysteresis loops, while the same macroscopic state can be formed from different microscopic combinations of electric dipoles. It is therefore challenging to experimentally observe identically repeatable ferroelectric hysteresis loops as proposed in theory, even for some traditional ferroelectric materials [6~8]. We also observed that hysteresis loops with double polarization states demonstrated similar repeatability to the previously reported double-state loops of CIPS at room temperature [9,10]. Additionally, from the practical application point of view, such as non-volatile ferroelectric memories, the functionality of a ferroelectric material is maintained as long as the multiple polarization states can be distinguished within an acceptable tolerance. In response to the Reviewer's comment, we have added the following sentences on page 11 of the revised manuscript: "We note that the hysteresis loops observed before and after the reversible transformation are not exactly the same, which can be due to various unavoidable defects in the CVT-grown CIPS films during the deposition and subsequent transfer processes, such as local variations in stoichiometry, vacancies, stacking faults, etc. For example, variations in stoichiometry can lead to local phase separation, which induces local strain and helps to stabilize the quadruple polarization states as proposed in Ref 20. Cu vacancies and excess Cu ions have been proposed to have the effect of lowering the energy barrier of Cu to move across the vdW gaps 21. Stacking faults can lead to local kink, edge-type, or knot-type dislocations, which can cause the formation of nanodomains in CIPS films 47. Nevertheless, even though the polarization switching details such as the coercive voltage and polarization value can be varied because of the presence of the aforementioned defects, the polarization orders can be distinguished and transformed within an acceptable tolerance". Comment 2: The off-field amplitude hysteresis loops in Fig. 2c around the 4-6 V bias window show that the number of minima varies across cycles. The individual stability of each sextuple polarization state and its retention after withdrawing the electric field should be carefully discussed, including the time scale. Response: As elucidated in the response to the Reviewer's comment 1, the stability and repeatability of remanent polarization states can be influenced by various types of defects. More importantly, the nonuniform electric field distribution and the testing time scale can also affect the resultant formation of the remanent polarization states. It is well known that defects can efficiently pin domain walls [11~13] and cause variations in the coercive voltage (i.e. the minima in the amplitude loop), compromising the repeatability of the measured hysteresis loops. On top of those structural and chemical defects, the dynamic movement of Cu ions unique to the CIPS system adds additional randomness even under a uniform electric field; when the Cu ions are excited by a tip-induced nonuniform electric field, the repeatability of our CIPS hysteresis loops can be influenced further. We agree with the Reviewer that the number of amplitude minima varies in our samples because of the randomness from defects and the dynamic movement of Cu ions. We have additional datasets exemplifying a discrete and less varied number of amplitude minima throughout the entire bias window (Fig. R2). Importantly, although certain variability of coercive voltages and polarization values among multiple loops is inevitable in the currently reported CIPS material system, the emergence of sextuple polarization states remains consistent and replicable across various sample batches. Furthermore, the Reviewer's inquiry regarding the time-scale effect is indeed insightful, as this is one of the major causes of variability in the hysteresis loops. To investigate this effect, we conducted additional experiments in which we varied the pulse durations for the hysteresis loop measurements, as illustrated in Fig. R3. We found that the sextuple remanent polarization states remained consistent across pulse widths (both excitation and reading pulses) of 15 ms (used in our manuscript) and 50 ms, indicating that variations in this time-scale range do not alter the major characteristics of the sextuple polarization states. In response to the Reviewer's comment, Figures R2 and R3 have been added to the Supplementary Information as new Figs. S16 and S17. We have also added the following descriptions on pages 8 and 9: "This phenomenon has been observed consistently over 23 CIPS samples from various crystal batches (an additional dataset is present in Fig. S16)". "Additionally, the stability of the sextuple remanent states has been investigated with varied reading pulse duration (Fig. S17), suggesting that the variations in the time scale of 15~50 ms do not alter the major characteristics of the observed sextuple polarization states". [11] J. Appl. Phys. 129, 014102 (2021). Comment 3: Regarding the mechanism of the sextuple polarization state, the authors attribute it to different polarization configurations depending on the applied electric field. If so, why are there sextuple states rather than some other number? The authors indicated in Fig. S12 that some configurations have relative energy levels. To effectively demonstrate this point, all polarization configurations should be taken into account and divided into six groups. In this regard, more theoretical calculations are required. Response: We apologize for the confusion raised by Fig. S12, caused by an insufficient presentation of the related theory. We would like to clarify that the sextuple states originate from the stacking of three domains, not from the polarization states of a multilayer (e.g. the 6-ML system shown in Fig. S12). According to our theory in a separate and, in fact, preceding study (listed below), stacking three layers into a trilayer, with each layer having the intralayer antiferroelectric (AFE) coupling, can lead to sextuple states with net polarization (see Fig. R4).
The underlying physics is that the AFE ground state of the monolayer harbors a unique half-layer-by-half-layer flipping mechanism. In the present paper, the predicted mechanism is applied to our systems, but the three layers are replaced by three domain blocks, with each domain block consisting of a thin film of CuInP2S6. The primary purpose of Fig. S12 is to illustrate the plausibility of the existence of the AFE coupling within a domain block consisting of six CuInP2S6 layers. The AFE ground state is also found for other films, with thicknesses even larger than twenty layers. More details of the trilayer first-principles study can be found in Ref. 48 of the revised manuscript, which has now been updated as "Yu, G., Pan, A., Zhang, Z. & Chen, M. Polarization multistates in antiferroelectric van der Waals materials. arXiv:2312.13856 (2023)." Both the HP and LP states share the same polarization arrangements; their difference is caused by the absence or presence of displacement in the Cu sites. To my understanding, the mechanism involving the change in polarization configurations mentioned here is distinct from that. I wonder why they observed comparable hysteresis loops? Response: Indeed, as highlighted by the Reviewer, the mechanism underpinning the change in polarization configurations proposed in our study differs from that reported in Ref. 20. In the context of Ref. 20, despite the observation of four distinct polarization states, each of the hysteresis loops displayed in that earlier study merely illustrates the polarization transition between two of these states (Fig. R5a). The simultaneous manifestation of all four states within a single hysteresis loop was uncharted in Ref. 20, but a subsequent study conducted by the same group did show a four-state, three-level hysteresis loop in CIPS films via polarization switching among two ferroelectric polarization states and two antiferroelectric states (Fig. R5b) [14]. In contrast, our quadruple-state hysteresis loops observed in the CIPS film with sextuple pristine polarization states (Fig. R5c) demonstrate distinctly different features, including two subloops in a single hysteresis loop that directly correspond to four polarization states, and a polarization direction that aligns antiparallel to the electric field direction when subjected to a sufficiently strong electric field. Such distinctions underscore a clear divergence in mechanisms between the earlier publications [14 and Ref. 20] and our current findings, as also pointed out by the Reviewer. Additionally, it is noteworthy that the team of Ref. 20 also observed the phenomenon of the polarization direction opposing that of the applied electric field in their later works [15,16]. The intricate mechanism of this novel phenomenon is currently not well understood and warrants further in-depth exploration in the future. Response: We are pleased that the Reviewer finds the experimental results interesting and significant. We have addressed the Reviewer's concerns point-by-point in the following responses. Comment 1: In CVD-grown CIPS, various defects can occur during the deposition and subsequent transfer processes, for example, local variations in stoichiometry, vacancies, stacking faults, layer thickness variations, surface defects, and contaminants. These defects can significantly impact the ferroelectric domains and their switching behavior. It would be insightful if the authors can assess the impacts of these defects on the domain behavior. Response: We must admit that we couldn't agree more with the Reviewer that the CVT-grown CIPS crystal is inevitably prone to various types of defects during the growth and exfoliation processes. Achieving precise control over the stacked domains is challenging due to the presence of these defects, as evidenced by the broad distribution of coercive voltages in Fig. 2e and in many previously reported articles [7, 18~20]. Despite the fluctuations of the hysteresis loops caused by those defects, to our pleasant surprise, the AFE-coupled CIPS trilayers proposed in our DFT work (full paper available at arXiv:2312.13856) can be qualitatively extended to our proposed three-domain-block system, which well explains our experimental observations of sextuple polarization states in much thicker CIPS films. Local defects such as variations in stoichiometry, vacancies, and stacking faults in the CIPS system have been reported in the literature. Variations in stoichiometry can lead to local phase separation, which induces local strain and helps to stabilize the quadruple polarization states as proposed in Ref. 20. Cu vacancies and excess Cu ions have been proposed to lower the energy barrier of Cu to move across the vdW gap [15]. Stacking faults can lead to local kink, edge-type, or knot-type dislocations, which can cause the formation of nanodomains in the CIPS films [21]. Beyond those existing studies, the effects of layer thickness variation and surface defects have been assessed directly in our experiments. As an example, Fig. R7 illustrates the correlation between film thickness and imprint, based on CIPS films showing double polarization states. We found that the thinner the film, the smaller the imprint (Fig. R7a), which can be attributed to the reduced AFE domain volume in the subsurface under the excitation of the tip-induced nonuniform electric field. In addition, the imprint value changes from negative to positive when the film thickness is down to 9.1 nm. This observation can be due to the built-in field caused by the charge transfer from the n-doped silicon substrate (surface defects), which becomes prominent in ultrathin CIPS films. A positive imprint and the opposite built-in field have been reported in CIPS using p-doped silicon substrates [22], consistent with our present findings. More interestingly, the trend of the imprint with varied thickness is found to be similar to that of the exchange bias of a classic ferromagnetic/antiferromagnetic bilayer [23] (Fig. R7b). These detailed findings are to be included in a more systematic publication that is under preparation. Although our experimental capabilities fall short of achieving exact control over the various defects, discrete sextuple polarization states have been unambiguously and repeatably established in our experiments. While recognizing these current limitations, we thank the Reviewer for raising the potential for further optimization in future works. Overall, CIPS is an intriguing material worthy of further exploration. Ultimately, we aim to prepare a CIPS trilayer as proposed by the DFT calculations to validate the sextuple polarization states with controllable defects. Such experiments, once achieved, will provide a much clearer understanding of the effect of defects on domain behaviors. To discuss the impact of defects, we have added sentences on page 11 of the revised manuscript (the same passage quoted in the response to Reviewer 1 above). [18] Nat. Commun. 2, 591 (2011).
Comment 2: It is known that defects can pin the domain boundaries and control the domain boundary movements. I am wondering how the characteristic times of these pinning and depinning processes compare with the DC pulse width and the rise time of each pulse. It would be insightful if the authors can discuss these issues. Response: As highlighted by the Reviewer, the phenomena of pinning and depinning of domain walls, primarily induced by defects such as point defects [6, 24] and disorder [25], have been widely discussed in the literature. These defects can pin the domain boundaries and augment the energy barrier associated with polarization switching. For our ferroelectric hysteresis measurements, we set the DC pulse width at 15 ms, incorporating a rise time of 0.5 ms for each pulse. Prompted by the Reviewer's comment, we performed additional experiments using a 50 ms DC pulse width, which demonstrated discrete sextuple polarization states as well (Fig. R3). On the other hand, the direct visualization and quantification of pinning and depinning times induced by defects in CIPS are beyond our current experimental capability, and notably, such characteristic times are so far also absent in other published experimental examinations of CIPS. According to a quantum-molecular-dynamics result that suggests a Cu movement time in the ~100 ps range across a vdW gap in CIPS with excess Cu [15], we believe that our pulse time considerably exceeds the characteristic times of pinning and depinning from defects in CIPS. Given the time scale for simulated Cu ion movement [15] and our experimental observations, the influences stemming from the defect-induced pinning and depinning processes are unlikely to skew our measurements. To further discuss the issue of the characteristic time of defect pinning and the applied electric pulses, we have added the following sentences on page 12 of the revised manuscript: "It is also known that defects can pin the domain boundaries and augment the energy barrier associated with polarization switching. For CIPS systems, the characteristic times associated with domain wall pinning and depinning from defects have not been experimentally observed yet. According to a quantum-molecular-dynamics result that suggests a Cu movement time of ~100 ps across a vdW gap in CIPS with excess Cu 21, we conjecture that our applied electric pulse time (in the millisecond range as described in the Method section) considerably exceeds the characteristic times of the pinning and de-pinning from defects in CIPS". Comment 3: There are large variations in the coercive voltages for different polarization states (Figure 2e). The authors should discuss the physical origins. Response: As the Reviewer highlighted in comment 1, different types of defects can exist in the CVT-grown CIPS crystals. These defects can indeed contribute to variations in the local coercive voltages measured at different locations and across different samples using Piezoresponse Force Microscopy (PFM) when they interact with domain walls. Many examples of defect-induced variation of coercive voltages have been reported in the literature [26, 27]. In the case of CIPS, in addition to the effect of defects, the variation of the coercive voltage can also be attributed to the active Cu movements at room temperature [9]. The data presented in Figure 2e were compiled from our comprehensive dataset, without discriminating between crystal batches, film thicknesses, measurement locations, or ranges of bias windows. It is noteworthy that the standard deviation of coercive voltages based on hysteresis loops measured on the same sample (std1) (Fig. R8b) exhibits much narrower variations compared to that presented in Figure 2e (std2) (Fig. R8a), as shown in Table R1. We are actively engaged in ongoing efforts to precisely control these defects and plan to comprehensively report on their influence on coercive voltages in a future report. To discuss the variation of the coercive voltages illustrated in Fig. 2e, we have added the following sentences on page 8 of the revised manuscript: "We note that there are large variations in the coercive voltages for different polarization states, which can be attributed to the different types of local defects and active Cu movements at room temperature. In addition, the data presented in Fig. 2e were compiled from our comprehensive dataset, without discriminating between crystal batches, film thicknesses, measurement locations, or ranges of bias windows. It is noteworthy that the variation of Vc is much narrower based on the hysteresis loops measured on one sample (Table S3) compared to Fig. 2e". Table R1 has also been added to the Supplementary Information as the new Table S3. Comment 4: It seems that the authors used ML models to explain quadruple and sextuple polarization states in pure CIPS (the reference is not available for the Reviewer). However, the authors used DBs (~10 MLs) to explain these multiple polarization states here. This change has not been properly justified. In my opinion, this is a significant weakness of the present work. Response: Again, we apologize for the incompleteness of the reference to the relevant theory work in our original submission. In that separate theory work, based on first-principles calculations, stacking two MLs of the CIPS family with the intralayer AFE coupling into a bilayer can give rise to quadruple states with net polarization, while stacking three MLs can give rise to sextuple polarization states for a trilayer system. The theory paper can now be accessed via arXiv:2312.13856 (Ref. 48 in the revised manuscript). Encouragingly, and intriguingly, this mechanism can be invoked to explain the sextuple states observed in our experiments for thin films, under the assumption of three AFE-coupled ferroelectric domains. The results for a film with 10 MLs are shown to schematically illustrate the polarization in a domain for the different states. Comment 5: The discussion on the DBs and their boundary structures and dynamics is absent. It would be insightful if the authors can discuss these issues. Just curious, are the DB boundaries associated with defects? Response: We agree with the Reviewer that the domain boundary structures and dynamics are important and warrant attention. We posit that defects (e.g. excess Cu ions, Cu vacancies, random AFE domain spots, etc.) with no net polarization contribution in CIPS can initiate boundaries between the domain blocks, which act as "dead layers" between them.
Our DFT work predicts that the quadruple and sextuple polarization states can be formed in pure bilayer and trilayer CIPS films even without these "dead layers" (arXiv:2312.13856). The experimental fabrication and examination of 1~3 ML CIPS with ferroelectricity have not been reported in the field yet, but we expect that the present work will stimulate more detailed future investigations of domain structure modulation in the ultimate few-layer CIPS systems. To provide more insight into the DB boundaries and dynamics, we have added the following sentences on page 15 of the revised manuscript: "We posit that the defects with no net polarization contribution in CIPS, such as excess Cu ions, Cu vacancies, and random AFE domain spots, can initiate the boundaries between the DBs, which work as "dead interfacial layers". Comment 6: What are the energy barriers between these different states? Can the detection voltages or thermal fluctuations affect the state stability? Response: To experimentally observe the energy barriers, constructing an Arrhenius plot that spans a range of temperatures exceeding a decade is typically necessary. Regrettably, such an accepted approach falls beyond our current experimental capabilities. Considering the challenges posed by the large film thickness of CIPS and the defect conditions, directly calculating the energy barriers of tens-of-nanometers-thick CIPS films via DFT simulations is hindered by the huge computational power required. However, our DFT calculations on trilayer CIPS can provide insight into the energy barriers among the sextuple polarization states, which fall within the range of 283~393 meV (Fig. R9, full paper available at arXiv:2312.13856). Based on the well-known compensation effect, we also speculate that the energy barriers for DB switching are larger than the maximum trilayer barrier (393 meV) but smaller than their sum (676 meV), with a lowered attempt frequency. Figure R9. The kinetic pathways and energy barriers for CIPS trilayers as the systems transform between the six polarization states. In addition, to address the Reviewer's concerns regarding detection voltages, we have conducted additional local ferroelectric hysteresis measurements using varied detection voltages at the same location (Fig. R10). When V_ac = 0.2 V, the small detection voltage leads to a weak response signal and relatively large variability, especially in the range of 2~6 V. V_ac = 0.3 V gives a relatively repeatable signal with reduced variations among multiple loops and an improved amplitude response, and was mostly used in our PFM measurements. We found that the sextuple polarization states remain stable when V_ac < 1 V, but in the range 0.5 V < V_ac < 1 V the repeatability of multiple hysteresis loops gradually deteriorates. The sextuple polarization states are severely distorted when V_ac ≥ 1.5 V. An excessive detection voltage can lead to poor repeatability and disrupt the intermediate states, which may cause the states to collapse into more stable states with deeper potential wells. Furthermore, we would like to thank the Reviewer for raising the importance of the thermal effect. We performed in situ spectroscopic PFM measurements at varied temperatures using the Heater-Cooler attachment of our SPM equipment (Fig. R11). The measurements conducted at room temperature (25 °C) demonstrate the typical hysteresis loops showing sextuple polarization states. When the sample temperature was lowered to 5 °C, the classic double-state hysteresis loop was observed with improved repeatability and nearly constant amplitude, and the sextuple polarization states vanished. We expect that at lower temperatures (say 5 °C) the energy barrier heights defining the sextuple polarization states are too high, and the external electric field can only facilitate Cu ions in overcoming the two most shallow potential wells, thereby resulting in double polarization states only. When the sample temperature was returned to 25 °C, the sextuple polarization states emerged again and were even retained at 40 and 45 °C, until reaching the Curie temperature (~50 °C), where the ferroelectricity disappeared. At elevated temperatures, the energy barrier heights defining the sextuple polarization states are lowered, possibly due to more active Cu ion movement. Furthermore, the largely varied amplitude levels at higher temperatures also indicate complex subsurface domain formation and interaction in the remanent state caused by the rich Cu ion movements, which also reduce the repeatability of the loops. Importantly, all of the above additional experimental results do not change our main experimental observations described in the manuscript. In response to the Reviewer's comments, we have added the following sentences on page 9 and page 15 of the revised manuscript, respectively: "Moreover, an appropriate detection voltage is necessary to observe reliable sextuple polarization states. An excessive detection voltage can lead to poor repeatability and disrupt the intermediate states, which may cause the states to collapse into more stable states with deeper potential wells (Fig. S19)." Response: We thank the Reviewer for this insightful proposal. In response, we first conducted simulations to model the nonuniform electric field distribution under a 5 nm tip using COMSOL Multiphysics (Fig. R12). Compared to the scenario with a 25 nm tip, at the same field strength (e.g. 10^7 V/m, close to the coercive field of CIPS reported in the literature), the distance over which the field extends along both the in-plane (x-axis) and out-of-plane (z-axis) directions shrinks to half. In other words, the active volume of CIPS under the 5 nm conductive tip is reduced to about 1/6 of that under a 25 nm tip. In addition to the largely shrunk excited volume, the sensing range of the tip is also significantly reduced with the tip radius. To further investigate the above-mentioned field-gradient effect, we performed PFM measurements on both a classic ferroelectric ceramic (a PMN-PT single crystal) and a CIPS film using a ~5 nm radius tip (AD-2.8-SS, Adama, Ireland), which has a conductive diamond coating with stiffness and resonant frequency similar to those of the ~25 nm tip (PPP-EFM, Nanosensors, Switzerland) (Fig. R13). The ~25 nm tip can detect reliable polarization switching in both materials. Unfortunately, it turns out that polarization switching was not achieved even in the classic ferroelectric ceramic using the ~5 nm tip in our current system. We will continue to investigate the field-gradient effect on the intrinsic multiple polarization orders in CIPS systems, and we speculate that, for a trilayer system, the 5 nm tip should be operative in defining multiple polarization states.
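For readers without access to the finite-element results, the scaling argument above can be roughed out analytically: if the tip apex is approximated as a charged conducting sphere of radius R held at bias V, the field below the apex decays as E(z) = V·R/(R+z)^2. The Python sketch below uses this textbook approximation only to illustrate how the depth of the region where E exceeds a switching threshold shrinks with the tip radius; the 5 V bias is an assumed value, and the sketch is no substitute for the COMSOL simulation.

```python
import math

def tip_field(V, R, z):
    """Charged-sphere estimate of the field (V/m) at depth z below the apex:
    E(z) = V * R / (R + z)**2 for a conducting sphere of radius R at bias V."""
    return V * R / (R + z) ** 2

def active_depth(V, R, E_th):
    """Depth over which the estimated field stays above the threshold E_th."""
    return max(math.sqrt(V * R / E_th) - R, 0.0)

# Compare 25 nm and 5 nm tips at an assumed 5 V bias against a 1e7 V/m
# threshold (the coercive-field scale cited above).
for R in (25e-9, 5e-9):
    d = active_depth(5.0, R, 1e7)
    print(f"R = {R * 1e9:.0f} nm: E at surface = {tip_field(5.0, R, 0):.1e} V/m, "
          f"active depth ~ {d * 1e9:.0f} nm")
```

Under these assumptions the active depth roughly halves (about 87 nm versus 45 nm), consistent with the factor-of-two shrinkage of the field extension quoted above.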
To discuss the impact of tip size, we have added a new section, "Supplementary Note 3", to the Supplementary Information. Finally, we would like to sincerely thank the two reviewers again for their careful and expert reviews, as well as for their constructive comments and suggestions. We hope this further revised manuscript is now ready for publication in Nature Communications. Reviewer #1 (Remarks to the Author): The authors have addressed all the concerns I and another reviewer raised in the initial report and have appropriately modified the manuscript. Considering the improvements to the paper after the revision, I would now like to recommend its publication in Nature Communications. Reviewer #2 (Remarks to the Author): I would like to thank the authors for taking efforts to address my comments and concerns. However, I am still not totally convinced by their theoretical modelling. 1. The physical properties of the DBs are still not discussed in detail. In the model, the theoretical minimum number of atomic layers needed to form a DB is 3. However, in the experiment, the sample with 2 states (hence 1 DB in total) has ~12 atomic layers. This suggests that, in theory, this single DB in the 2-state system should be able to be split into 3 or 4 DBs. The reason that this is not observed experimentally is rather fundamental to the idea of DBs and should be further discussed. 2. The model does not describe how the switching between polarization orders is carried out. In the model, the DBs are separated by "dead interfacial layers" and each DB behaves like a unit. However, the model has not explained how the "controllable mutual transformation among polarization orders" can be achieved. For example, when the system switches from 6 states to 4 states by increasing the bias window, how do the original 3 DBs in the 6-state system interact with one another to form the 2 larger DBs in the 4-state system, and how will the "dead layers" be incorporated into the new DBs? Furthermore, if the 4-state system is switched back to a 6-state system, are the DBs in the new 6-state system the same as the DBs in the original 6-state system? If yes, why; if no, what are the consequences? This is important as it is associated with the robustness of the system to such polarization order switching. Addressing these concerns would greatly enhance my confidence in recommending the acceptance of the study. Responses to Reviewers' Second Reports (MS # NCOMMS-23-52264B by Tao Li et al.) Again, we thank the two reviewers for their careful and expert review of the above manuscript. The detailed responses are listed below, and the manuscript has been revised accordingly. Response: We thank the Reviewer for confirming our efforts to address the comments and concerns. We have addressed the Reviewer's new comments point-by-point in the following responses. Comment 1: The physical properties of the DBs are still not discussed in detail. In the model, the theoretical minimum number of atomic layers needed to form a DB is 3. However, in the experiment, the sample with 2 states (hence 1 DB in total) has ~12 atomic layers. This suggests that, in theory, this single DB in the 2-state system should be able to be split into 3 or 4 DBs. The reason that this is not observed experimentally is rather fundamental to the idea of DBs and should be further discussed.
Response: We thank the Reviewer for finding our experimental results interesting and significant in the first round of review. In terms of the model, we apologize for the unclear writing, which likely raised confusion between the theoretical model based on 3-ML CIPS and the definition of the DBs. In the experiments, each DB acts in unison like an ML in the modeling paper, irrespective of the specific number of MLs that it contains. According to our current experimental observations of the thinnest CIPS films that exhibit the sextuple, quadruple, and double polarization states, we speculate that each DB can contain 9~10 MLs, as explained in detail in Supplementary Note 1, and effectively acts as one ML in the theoretical model. Again, we hope a direct confirmation of the predictions using a 3-ML CIPS system can be achieved in the future. To avoid potential future confusion, we have added the following sentences on page 15 of the revised manuscript: "The DB is assumed to be an energetically metastable elementary unit that effectively acts like an ML in the AFE/FE model and can be collectively modulated by a moderate E-field (as exemplified in Fig. 4c)". Comment 2: The model does not describe how the switching between polarization orders is carried out. In the model, the DBs are separated by "dead interfacial layers" and each DB behaves like a unit. However, the model has not explained how the "controllable mutual transformation among polarization orders" can be achieved. For example, when the system switches from 6 states to 4 states by increasing the bias window, how do the original 3 DBs in the 6-state system interact with one another to form the 2 larger DBs in the 4-state system, and how will the "dead layers" be incorporated into the new DBs? Furthermore, if the 4-state system is switched back to a 6-state system, are the DBs in the new 6-state system the same as the DBs in the original 6-state system? If yes, why; if no, what are the consequences? This is important as it is associated with the robustness of the system to such polarization order switching. Response: The observed mutual transformation among polarization orders is predominantly influenced by the bias window, i.e. the maximum strength of the electric field, which serves as an initializing field for the polarization orders. As the maximum field strength increases from that required for the sextuple states to that for the quadruple/double states, we anticipate that a more uniform domain structure is produced and more defects are driven to move around. Upon sweeping the electric bias repeatedly, the tip-induced nonuniform electric field generates the corresponding domain structures that form the quadruple/double polarization states. Conversely, electric field sweeping with a reduced maximum field strength disturbs the uniform domain structure and the defects, leading to the re-emergence of sextuple polarization states. Owing to the defects and/or AFE MLs in the CIPS films, the newly formed sextuple polarization states should not be identical to the original ones, as indicated by variations in the coercive biases and polarization values. Nevertheless, the sextuple polarization states remain distinctly discernible.
In response, we have added the following sentences on page 18 of the revised manuscript: "Furthermore, the mutual transformation among different polarization orders is predominantly influenced by the bias window, representing the maximum strength of the electric field serving as an initializing field for the polarization orders. As the maximum field strength increases from that required for the sextuple to the quadruple/double states, we anticipate that a more uniform domain structure can be produced and more defects can be driven to move around. Upon sweeping the electric bias repeatedly, the tip-induced nonuniform electric field can produce the corresponding domain structures that form the quadruple/double polarization states. Conversely, electric field sweeping with a reduction in the maximum field strength disturbs the uniform domain structure and the defects, leading to the emergence of sextuple polarization states again". Finally, we would like to sincerely thank the two reviewers again for their careful and expert reviews, as well as for their constructive comments and suggestions. We hope this further revised manuscript is now ready for publication in Nature Communications.
Figure R1. Exemplified ferroelectric synapse based on an Ag/PZT/Nd-SrTiO3 ferroelectric tunnel junction with 256 conductance states. Nevertheless, the ferroelectric hysteresis loops, including phase (a) and amplitude (b), observed by PFM using different detection biases demonstrate double polarization states only, P_up and P_down [1]. Those loop characteristics are qualitatively similar to our observed double-state hysteresis loop in CIPS by PFM, but fundamentally different from those of the quadruple and sextuple polarization hysteresis loops in the present study.
Figure R2. Additional dataset showing clear six minima in multiple cycles of the amplitude loop.
Figure R3. Ferroelectric hysteresis loops demonstrating sextuple polarization states obtained under pulse widths of 15 and 50 ms. The top row shows the piezoresponse, the middle row the phase signal, and the bottom row the amplitude signal.
Figure R4. Multistates in trilayer CuInP2S6. There are six polarization states for a CuInP2S6 trilayer, for which a half-layer-by-half-layer flipping mechanism is operative during the transformation between the states.
Figure R5. (a) The hysteresis loop presented in Ref. 20 of the manuscript. (b) Hysteresis loop reported in a later work [14]. (c) Hysteresis loop showing quadruple polarization states in our work. The y-axis piezoelectric constant d used in (a) and (b) is proportional to the piezoresponse used in (c) in our work [17], so the trends of the hysteresis loops can be compared among these different works.
Figure R6. Comparison of energies and atomic forces predicted by DFT and the DP model for all configurations in the test dataset.
Figure R7. Imprint (V_ex-FE) variation in CIPS films of different thicknesses. (a) Hysteresis loops demonstrating double polarization states from CIPS films of different thicknesses. The red dotted lines and the extended pink area represent the average imprint and the standard deviation, respectively. (b) V_ex-FE value versus the CIPS film thickness. The inset shows the trend of V_ex-FE (based on the normalization of the 90% saturation value) versus the normalized thickness in IrMn(t)/Co(1 nm)/Pt(2 nm) (black curve) and CIPS films (red curve).
Figure R8. Distribution of coercive voltages from hysteresis loops showing sextuple polarization states. (a) The original Figure 2e from ensemble-averaged data; (b) distribution from one CIPS film sample.
Figures R10 and R11 have been added to the Supplementary Information as new Figs. S19 and S20.
Figure R10. Effect of varied detection voltage V_ac on the hysteresis loop of sextuple polarization states. For each panel, the top, middle, and bottom rows show the piezoresponse, phase, and amplitude signals, respectively.
Figure R11. Temperature-dependent ferroelectric hysteresis loops. For each panel, the top, middle, and bottom rows show the piezoresponse, phase, and amplitude signals, respectively.
Figures R12 and R13 have been added as Figs. S21 and S22 accordingly.
Figure R12. Electric field distribution in CIPS under a conductive tip of 5 nm radius. (a,c,e) Electric field distribution along the depth direction (Z). (b,d,f) Electric field distribution along the in-plane direction (X).
Figure R13. PFM ferroelectric loops of PMN-PT and CIPS measured using conductive tips with ~25 nm and ~5 nm radius of curvature, respectively.
Table R1. Comparison of the average and the standard deviation of coercive voltages measured from one sample and from multiple samples showing sextuple polarization states.
9,317
2024-03-26T00:00:00.000
[ "Physics", "Materials Science" ]
Composite material based on the polyethylene terephthalate polymer and modified fly ash filler

The possibility of filling recycled polyethylene terephthalate (rPET) with fly ash was studied in order to make a polymer composite material (PCM). It is shown that high adhesion between the polymeric matrix and the mineral filler is the key parameter for producing a high-performance PCM. For this purpose, the acid-base interaction as well as the thermodynamic work of adhesion between the components of the PCM were calculated. A technique for modifying the fly ash filler with a 5% sulfuric acid solution to increase the acid-base interaction has been elaborated. The resulting properties are listed and compared with those of composites containing untreated fly ash particles.

Introduction

Disposal of waste materials from different types of industry has become a pressing problem. To solve this problem, two generalized routes come to mind: firstly, to reuse the disposed materials as received in some suitable applications, and secondly, to recycle the waste in order to obtain a new material that may again find application in the parent or in another industry [1-4]. For instance, waste polyethylene terephthalate (PET) plastic is neither environmentally biodegradable nor compostable, which creates disposal problems. Recycling has emerged as the most practical method to deal with this problem, especially with products such as rPET beverage bottles [5]. It is well known that melted rPET material can be molded, extruded, or otherwise formed to shape a variety of articles, e.g. building materials and products. On the other hand, the reinforcing effect of mineral fillers for polymers has been recognized for the last few decades. Improving the mechanical, electrical, thermal, optical, and processing properties of a polymer by the addition of filler materials has become a very popular research interest for making a composite material [6], which can be defined as a combination of two or more materials that results in better properties than those of the individual components used alone [7]. The two constituents are a reinforcement and a matrix. The main advantages of composite materials are their high strength and stiffness, improved fatigue life, and corrosion resistance, combined with low density when compared with bulk materials, allowing for a weight reduction in the finished part [8,9]. Developing low-cost composite materials with improved properties has been one of the primary challenges in a number of industrial applications. To achieve this goal, researchers have implemented cost-effective processing methods and developed novel material systems involving low-cost fillers. One such material system is the so-called polymer concrete, which is often prepared by loading polyester resin with high levels of fillers such as fine sand, limestone, or micro-marble particles (referred to as marble dust) [10,11]. Utilization of fly ash as a filler in polymer composites has received increased attention recently, particularly for price-driven/high-volume applications. Incorporation of fly ash offers several advantages: it is the best way of disposing of fly ash, and as it is cheap and plentifully available, it decreases the overall cost of composites [6,11-14]. Fly ash from thermal power plants is one of the major polluting byproducts. Its applicability is being studied extensively in various products, but with limited avenues for its reuse.
As a result, it has accumulated over the years in the areas surrounding thermal power plants and has created an enormous environmental problem [1]. Fly ash is used as a source of spherical fillers. As collected at the power plant, fly ash is a crude mixture of large and small particles of various shapes and structures. Fly ash has been reported to be a useful filler in resin systems, including phenolics, PVC, polyethylene, epoxy, and polyester [15]. A composite material and method have been described wherein melted, chemically unmodified waste PET material and fly ash particles are mixed in a vessel to disperse the fly ash particles in the melted PET material. The resulting mixture is then cooled to solidify the melted PET material, forming a polymer composite material (PCM) having a matrix comprising PET and dispersoids comprising fly ash particles distributed in the matrix [5]. However, since fly ash is generated as a waste material, it needs to be beneficiated before its use as a filler in plastic materials. The main drawbacks of fly ash in comparison with commercial mineral fillers are its larger particle size and smooth, spherical, inert surface [16]. Poor adhesion does not allow the transfer of stress from the matrix to the fillers [17]. Some studies have pointed to the excellent compatibility between fly ash and polymers. Other studies have also shown the advantageous use of treated fly ash in a wide variety of polymer matrices [11]. It was found that the waste may have a negative impact on some strength characteristics of the resultant composites, most likely because of poor adhesion at the polymer/filler interface [16]. The main obstacle to utilizing fly ash in PCMs has been its surface polarity, due to the presence of silanol, aluminol, and other types of -OH groups attached to the metal/non-metal atoms of the constituents of fly ash. Many commercial polymers do not have substantial polarity that can match that of the fly ash particles. This difficulty can be overcome by modifying the surface with suitable chemical reagents known as coupling agents [18]. For example, one can choose from a variety of organosilanes or titanates [19]. Addition of fly ash as a filler into polyphenylene oxide with a coupling agent increased mechanical properties such as tensile strength, impact strength, elongation at break, flexural strength, and flexural modulus compared with the untreated-filler-filled PPO composite [12]. Good adhesion between the filler (geopolymer concrete waste) and the polymer was observed in the treated composites, which can be attributed to the presence of oleic acid anchored onto the filler surface [20]. Fly ash with an activated nano-surface (alkali treatment with a 2 mol/L NaOH solution) proved to develop uniform interfaces, with a significant effect on compression resistance and impact [21]. Treatment of fly ash with sulfuric acid can obviously change the surface area, microstructure, and phase composition [22]. Besides, it has been reported that the surface silanol groups responsible for generating Brønsted acidity are enhanced [23]. Usually, fly ash has a negative surface charge due to its predominantly oxidic composition. A significant problem is that the ionic fly ash surface has a high wetting behavior, while rubber (and plastics) is hydrophobic, with a very low surface charge. Therefore, building up interfaces based on electrostatic attraction is highly unlikely [21].
It is known that the formation of interphase bonds is affected by acid-base interactions [24]. According to [25], the strongest interphase interaction is achieved when one of the materials has acidic properties and the other basic ones. Accordingly, if both phases have exclusively basic or exclusively acidic groups, or if both are neutral, then acid-base interactions are absent. Therefore, the primary task for achieving high adhesion is to determine the acidic and basic characteristics of the surfaces of the filler and the matrix. To analyze the adhesion of the polymer to the filler, it is advisable to use the thermodynamic work of adhesion. The values of the surface free energy (SFE) of the components are necessary to calculate the thermodynamic work of adhesion Wa between the phases of the PCM. According to van Oss, Chaudhury, and Good [26,27], the SFE consists of two components: the Lifshitz-van der Waals component (γ^LW) and the acid-base component (γ^AB). The latter component is given by γ^AB = 2√(γ^+ γ^-), where γ^+ and γ^- are the acidic and basic components, respectively. As a result, the following relationship was formulated for the thermodynamic work of adhesion between phases 1 and 2:

Wa = 2(√(γ1^LW γ2^LW) + √(γ1^+ γ2^-) + √(γ1^- γ2^+))    (1)

Thus the acid-base properties of the fly ash surface can be affected through chemical modification using an acid treatment method. Sulfuric acid solutions of different concentrations were used for the treatment, which may create strong chemical bonds between the filler and the matrix. The purpose of this study was to develop a composite material based on the polyethylene terephthalate polymer and pulverized fly ash, modified and unmodified with sulfuric acid.

Materials

The Class F fly ash filler used, with a bulk density of 1156 kg m⁻³, was obtained from a thermal power plant (Donetsk region). The particle size distribution of the fly ash was determined with an ANALYSETTE 22 Compact laser diffraction particle size analyzer (Fig. 1). The particle size was mainly in the ranges of 5-10, 10-20, 20-30, and 30-40 µm. The specific surface area is about 320 m² kg⁻¹. The composition of chemical elements and oxides was analyzed using the wavelength-dispersive X-ray fluorescence method (ARL OPTIM'X 200 spectrometer). The main constituents are silica, alumina, and ferric oxides, at about 57, 25, and 9%, respectively, while traces of other oxides (chiefly 3.1% K2O, 1.8% CaO, 1.5% MgO, and 1.05% TiO2) are also noticed (Table 1). The PET bottles were prepared for processing by first rinsing in warm water to remove any residue. Next, the bottle caps, labels, and adhesives were physically removed. Once washed and air-dried, the PET bottles were shredded to nominal square particle sizes of 5 to 15 mm using a knife crusher.

Processing

Modification of the filler surface was carried out with solutions of sulfuric acid (H2SO4) of different concentrations (5, 10, and 15%). Samples of the fly ash were placed into the acid solution for about 1 hour and then dried to constant weight at a temperature of 105-110 °C. The fly ash filler and PET flakes were dried at a temperature of 105-110 °C to reduce their moisture content before preparation of the PCM samples. Samples of PCM were made by injection molding at a temperature of 250-280 °C. Melted waste PET material and fly ash particles were mixed in a vessel to disperse the fly ash particles in the melted PET material.
After the molding process, the samples were cooled to solidify the melted PET material to form a composite material having a matrix comprising PET and dispersoids comprising fly ash particles distributed in the matrix [5]. The PCM samples were prepared on the basis of formulations with different contents (55-75 wt.%) of unmodified and modified filler.

Methods

Compressive strength was tested according to ASTM D695 using the servo-hydraulic system ADVANTEST 9. Compressive properties were checked at a speed of 50 mm/min. A 5 kN load cell was used to sense the load. Five specimens (50 mm cubes) of each formulation were tested and their average value was calculated. The density and water absorption of the PCM samples were tested according to ASTM D792 and D570, respectively. The contact angles of the sample surfaces were measured with a KRUSS K100 tensiometer and a set of test liquids with known SFE values [28], which included distilled water, glycerin, aniline, phenol, formamide, ethylene glycol, and dimethyl sulfoxide (Table 2). The Washburn method [29] for fly ash and the Wilhelmy plate method [30] for the PET samples were used to calculate the contact angles of the sample surfaces.

SFE components and thermodynamic work of adhesion

To determine the Lifshitz-van der Waals (γS^LW), acidic (γS^+), and basic (γS^-) SFE components of the samples, a method developed by Sokorova [31] was used. It is a graphical method based on multidimensional approximation. According to this method, equation (1) transforms to the plane equation z = Ax + By + C. Next, using a multidimensional approximation, a plane was constructed in the corresponding coordinates.

Results and discussion

SFE components for PET, unmodified fly ash, and modified fly ash (treated with 5 and 10% sulfuric acid solutions) were calculated (Table 3). The resulting graphs and equations are shown in Figures 2-5. It was established that sulfuric acid treatment affects the SFE components of the filler surface. When the fly ash was treated with a 5% H2SO4 solution, the SFE components γS^+ and γS^- increased by 43.1% and 7.7%, respectively, in comparison with unmodified fly ash. When the fly ash was treated with a 10% H2SO4 solution, the components γS^+ and γS^- increased by 89.3% and 14.5%, respectively. On the basis of these data, the thermodynamic work of adhesion between PET and the fly ash filler was calculated in accordance with equation (1) (Table 4). It was established that the thermodynamic work of adhesion between the PET polymer and the fly ash filler modified with the 5% and 10% acid solutions increased by 1.29% and 2.14%, respectively. To confirm the increased adhesion between the polymeric matrix and the modified fly ash filler, the compressive strength of PCM samples with different filler concentrations was tested (Fig. 6). Among the samples with unmodified fly ash filler, the highest compressive strength was obtained for the PCM with 65% filler concentration. On the other hand, when the fly ash filler was modified with the 5% H2SO4 solution, the compressive strength of the PCM sample was 11.2% higher. This phenomenon may confirm the increased acid-base adhesion interaction between the PCM components.

Fig. 6. Compressive strength of PCM samples.

However, the samples containing 65% fly ash filler treated with the 10% acid solution demonstrated lower results despite the higher thermodynamic work of adhesion.
This is presumably because of the increased amount of weak sulfate formations (SO3) on the filler surface after treatment, which impedes the adhesive contact between the components. Table 5 presents the main properties of the PCM with the optimal formulation.

Conclusions

Sulfuric acid treatment affects the SFE components of the filler surface. When the fly ash was treated with a 5% H2SO4 solution, the SFE components γS^+ and γS^- increased by 43.1% and 7.7%, respectively, in comparison with unmodified fly ash. When the fly ash was treated with a 10% H2SO4 solution, the components γS^+ and γS^- increased by 89.3% and 14.5%, respectively. The thermodynamic work of adhesion between the PET polymer and the fly ash filler modified with the 5% and 10% acid solutions increased by 1.29% and 2.14%, respectively. Treatment of the fly ash filler to improve its adhesion to the PET polymeric matrix improves the overall properties of the PCM, especially the compressive strength and density. Among the samples with unmodified fly ash filler, the highest compressive strength was obtained for the PCM with 65% filler concentration.
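As a rough numerical illustration of the calculations behind Tables 3 and 4, the sketch below first recovers the three SFE components of a solid from contact angles with test liquids by a least-squares solution of the van Oss-Chaudhury-Good system (one standard numerical route; the graphical multidimensional-approximation method of Sokorova [31] may differ in detail), and then evaluates equation (1). The numeric values shown are literature conventions or placeholders, not the measured data of this study.

```python
import numpy as np

def fit_sfe_components(liquids, theta_deg):
    """Least-squares estimate of the solid SFE components
    (gamma_LW, gamma_plus, gamma_minus) from contact angles.
    vOCG: gamma_L (1 + cos theta) / 2
          = sqrt(gLW_S gLW_L) + sqrt(g+_S g-_L) + sqrt(g-_S g+_L),
    which is linear in the square roots of the solid components
    (components are assumed non-negative)."""
    A, b = [], []
    for (g_lw, g_plus, g_minus), th in zip(liquids, np.radians(theta_deg)):
        g_total = g_lw + 2.0 * np.sqrt(g_plus * g_minus)
        A.append([np.sqrt(g_lw), np.sqrt(g_minus), np.sqrt(g_plus)])
        b.append(g_total * (1.0 + np.cos(th)) / 2.0)
    x, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return x ** 2  # back from square roots to the components

def work_of_adhesion(s1, s2):
    """Equation (1): thermodynamic work of adhesion between two phases,
    each given as (gamma_LW, gamma_plus, gamma_minus)."""
    (lw1, p1, m1), (lw2, p2, m2) = s1, s2
    return 2.0 * (np.sqrt(lw1 * lw2) + np.sqrt(p1 * m2) + np.sqrt(m1 * p2))

# Literature vOCG components of water (mJ/m^2); the remaining test liquids
# and the measured contact angles are placeholders for the real Table 2 data.
water = (21.8, 25.5, 25.5)
```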
3,284.4
2018-01-01T00:00:00.000
[ "Materials Science" ]
InstructDial: Improving Zero and Few-shot Generalization in Dialogue through Instruction Tuning

Instruction tuning is an emergent paradigm in NLP wherein natural language instructions are leveraged with language models to induce zero-shot performance on unseen tasks. Dialogue is an especially interesting area in which to explore instruction tuning because dialogue systems perform multiple kinds of tasks related to language (e.g., natural language understanding and generation, domain-specific interaction), yet instruction tuning has not been systematically explored for dialogue-related tasks. We introduce InstructDial, an instruction tuning framework for dialogue, which consists of a repository of 48 diverse dialogue tasks in a unified text-to-text format created from 59 openly available dialogue datasets. We explore the cross-task generalization ability of models tuned on InstructDial across diverse dialogue tasks. Our analysis reveals that InstructDial enables good zero-shot performance on unseen datasets and tasks such as dialogue evaluation and intent detection, and even better performance in a few-shot setting. To ensure that models adhere to instructions, we introduce novel meta-tasks. We establish benchmark zero-shot and few-shot performance of models trained using the proposed framework on multiple dialogue tasks.

Introduction

Pretrained large language models (LLMs) (Devlin et al., 2019; Radford et al., 2019; Brown et al., 2020) are not only few-shot learners, but can also perform numerous language tasks without the need for fine-tuning. However, LLMs are expensive to train and test. Instruction tuning has emerged as a tool for directly inducing zero-shot generalization on unseen tasks in language models by using natural language instructions (Mishra et al., 2021; Sanh et al., 2022; Wei et al., 2022; Ouyang et al., 2022). Natural language instructions can contain components such as task definitions, examples, and prompts, which allows them to be customized for multitask learning. Instruction tuning enables developers, practitioners, and even non-expert users to leverage language models for novel tasks by specifying them through natural language, without the need for large training datasets. Furthermore, instruction tuning can work for models that are significantly smaller than LLMs (Mishra et al., 2021; Sanh et al., 2022), making them more practical and affordable.
Most recent work (Mishra et al., 2021; Sanh et al., 2022; Wei et al., 2022) on instruction tuning has focused on general NLP tasks such as paraphrase detection and reading comprehension, but not specifically on dialogue. While some work, such as Wang et al. (2022a), includes a few dialogue tasks, those tasks are collected through crowdsourcing and do not provide good coverage of dialogue tasks and domains. No prior work has examined how training a model on a wide range of dialogue tasks with a variety of instructions may affect a system's ability to perform on both core dialogue tasks such as intent detection and response generation, and domain-specific tasks such as emotion classification. In this work, we introduce INSTRUCTDIAL, a framework for instruction tuning on dialogue tasks. We provide a large curated collection of 59 dialogue datasets and 48 tasks, benchmark models, and a suite of metrics for testing the zero-shot and few-shot capabilities of the models. INSTRUCTDIAL consists of multiple dialogue tasks converted into a text-to-text format (Figure 1). These dialogue tasks cover generation, classification, and evaluation for both task-oriented and open-ended settings and are drawn from different domains (Figure 2).

Instruction tuned models may ignore instructions and attain good performance with irrelevant prompts (Webson and Pavlick, 2021), without actually following the user's instructions. We address this issue in two ways: (1) we train the models with a variety of outputs given the same input context by creating multiple task formulations, and (2) we propose two instruction-specific meta-tasks (e.g., select an instruction that matches an input-output pair) to encourage models to adhere to the instructions.

The main contributions of this work are:
• We introduce INSTRUCTDIAL, a framework to systematically investigate instruction tuning for dialogue on a large collection of dialogue datasets (59 datasets) and tasks (48 tasks). Our framework is open-sourced and allows easy incorporation and configuration of new datasets and tasks.
• We show that instruction tuned models achieve enhanced zero-shot and few-shot performance on a variety of different dialogue tasks.
• We provide various analyses and establish baseline and upper bound performance for multiple tasks. We also provide integration of various task-specific dialogue metrics.

Our experiments reveal further room for improvement on issues such as sensitivity to instruction wording and task interference. We hope that INSTRUCTDIAL will facilitate further progress on instruction tuning for dialogue tasks.
Related Work

Pre-training and Multi-Task Learning in Dialogue: Large-scale transformer models (Devlin et al., 2019; Radford et al., 2019; Brown et al., 2020) pre-trained on massive text corpora have brought substantial performance improvements in natural language processing. Similar trends have occurred in the dialogue domain, where models such as DialoGPT (Zhang et al., 2020), Blenderbot (Roller et al., 2021), and PLATO (Bao et al., 2021), trained on sources such as Reddit or Weibo, or on human-annotated datasets, show great capabilities in carrying open-domain conversations. Large-scale pretraining has also shown success in task-oriented dialogue (TOD). Several works (Budzianowski and Vulić, 2019; Hosseini-Asl et al., 2020; Ham et al., 2020; Lin et al., 2020; Yang et al., 2021) utilized pretrained language models such as GPT-2 (Radford et al., 2019) to perform TOD tasks such as language generation or act prediction. Similarly, BERT-type pretrained models have been used for language understanding in TOD tasks (Wu et al., 2020a; Mi et al., 2021b). Several of these works have shown improved performance by performing multi-task learning over multiple tasks (Hosseini-Asl et al., 2020; Liu et al., 2022; Su et al., 2022a). Multi-task pretraining also helps models learn good few-shot capabilities (Wu et al., 2020a; Peng et al., 2021). Our work covers both open-domain and TOD tasks and goes beyond multi-tasking as it incorporates additional structure of the tasks, such as task definitions and constraints.

Instruction Tuning: Constructing natural language prompts to perform NLP tasks is an active area of research (Schick and Schütze, 2021; Liu et al., 2021a). However, prompts are generally short and do not generalize well to reformulations and new tasks. Instruction tuning is a paradigm where models are trained on a variety of tasks with natural language instructions. Going beyond multi-task training, these approaches show better generalization to unseen tasks when prompted with a few examples (Bragg et al., 2021; Min et al., 2022a,b) or language definitions and constraints (Weller et al., 2020; Zhong et al., 2021b; Xu et al., 2022). PromptSource (Sanh et al., 2022), FLAN (Wei et al., 2022), and NATURAL INSTRUCTIONS (Mishra et al., 2021; Wang et al., 2022b) collected instructions and datasets for a variety of general NLP tasks. The GPT3-Instruct model (Ouyang et al., 2022) was tuned using human feedback on a dataset of rankings of model outputs, but it is expensive to train and test. Instead, our work is tailored to dialogue tasks and incorporates numerous dialogue datasets, tasks, and benchmarks. We show that models trained on collections such as PromptSource are complementary to instruction tuning on dialogue. For dialogue tasks, Madotto et al. (2021) explored prompt-based few-shot learning for dialogue, but without any fine-tuning. Mi et al. (2021a) designed task-specific instructions for TOD tasks that improved few-shot performance on several tasks. Our work covers a far greater variety of dialogue domains and datasets in comparison.

Methodology

In this section, we first discuss the instruction tuning setup. Next, we discuss the taxonomy of dialogue tasks and the task meta-information schema, and discuss how dialogue datasets and tasks are mapped into our schema. Finally, we discuss model training and fine-tuning details.
Instruction Tuning Background

A supervised setup for a dialogue task t consists of training instances (x_i, y_i) ∈ d_train^t, where x_i and y_i are an input-output pair. A model M is trained on d_train^t and tested on d_test^t. In a cross-task setup, the model M is tested on the test instances d_test^t of an unseen task t. In instruction tuning, the model M is provided additional signal or meta information about the task. The meta information can consist of prompts, task definitions, constraints, and examples, and guides the model M towards the expected output space of the unseen task t.

Task Collection

We adopt the definition of a task from Sanh et al. (2022), which defined a task as "a general NLP ability that is tested by a group of specific datasets". In INSTRUCTDIAL, each task is created from one or more existing open-access dialogue datasets. Figure 2 shows the taxonomy of dialogue tasks in INSTRUCTDIAL, and Table 9 shows the list of datasets used in each task. In our taxonomy, Classification tasks consist of tasks such as intent classification with a set of predefined output classes. Generation tasks consist of tasks such as open-domain, task-oriented, controlled, and grounded response generation, and summarization. Evaluation tasks consist of response selection in addition to relevance and rating prediction tasks. Edit tasks involve editing a corrupted dialogue response into a coherent response. Corrupted responses are created through shuffling, repeating, adding, or removing phrases/sentences in the gold response. Pretraining tasks involve tasks such as infilling or finding the index of an incoherent or missing utterance. They include multiple tasks covered in prior pretraining work (Mehri et al., 2019; Zhao et al., 2020b; Whang et al., 2021; Xu et al., 2021b). Safety tasks consist of toxicity detection and non-toxic and recovery response generation. Miscellaneous tasks are a set of tasks that belong to specialized domains such as giving advice or persuading a user.

Task Schema and Formatting

All tasks in INSTRUCTDIAL are expressed in a natural language sequence-to-sequence format. Every task instance is formatted with the following schema; Figure 3 shows examples of instances from 3 tasks. For each task, we manually compose 3-10 task definitions and prompts. For every instance, a task definition and a prompt are selected randomly during testing. We do not include in-context examples in the task schema since dialogue contexts are often long and concatenating long examples would exceed the maximum allowable input length for most models. Input instances are formatted using special tokens. The token [CONTEXT] signals the start of dialogue content. Dialogue turns are separated by [ENDOFTURN]. [ENDOFDIALOGUE] marks the end of the dialogue and [QUESTION] marks the start of the prompt text. We also incorporate task-specific special tokens (such as [EMOTION] for the emotion classification task). We hypothesize that using a consistent structure and formatting across tasks should help the model better adopt the structure and novel input fields for unseen tasks.
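A minimal sketch of this formatting scheme is shown below, including the classification-options field described next. The exact field order, the [OPTIONS] field token, and the function names are illustrative assumptions, since the released code may assemble sequences differently.

```python
def format_instance(definition, context_turns, prompt,
                    options=None, indexed=False, extra_fields=None):
    """Assemble an INSTRUCTDIAL-style input sequence with special tokens."""
    parts = [definition]
    # Task-specific custom inputs, e.g. {"[EMOTION]": "happy"}
    for token, value in (extra_fields or {}).items():
        parts.append(f"{token} {value}")
    parts.append("[CONTEXT] " + " [ENDOFTURN] ".join(context_turns)
                 + " [ENDOFDIALOGUE]")
    parts.append("[QUESTION] " + prompt)
    if options:
        if indexed:  # indexed list, useful when the options are long
            listed = " ".join(f"{i + 1}: {o}" for i, o in enumerate(options))
        else:        # plain name list separated by commas
            listed = ", ".join(options)
        parts.append("[OPTIONS] " + listed)  # hypothetical field token
    return " ".join(parts)

print(format_instance(
    "Classify the emotion expressed in the last utterance.",
    ["Hi, how was your day?", "I finally got the job!"],
    "The emotion is",
    options=["happy", "sad", "angry"]))
```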
Classification Options: In classification tasks, the model is trained to predict an output that belongs to one of several classes. To make the model aware of the output classes available for an unseen task, we append a list of classes from which the model should choose. We adopt the following two formats for representing the classes: (1) Name list: list the class names separated by a class separator token such as a comma, and (2) Indexed list: list the classes indexed by either letters or numbers (such as 1: class A, 2: class B, ...), where the model outputs the index corresponding to the predicted class. This representation is useful when the classification options are long, such as in the case of response ranking, where the model has to output the best response among the provided candidates.

Custom inputs: Some tasks consist of input fields that are unique to the task. For example, emotion grounded generation consists of emotion labels that the model uses for response generation. We append such inputs to the beginning of the instance sequence along with the field label. For example, we prepend "[EMOTION] happy" to the dialogue context in the emotion generation task.

In Table 8 in the Appendix we present the list of tasks with sample inputs for each task.

Meta Tasks

A model can learn to perform well on tasks during training by inferring the domain and characteristics of the dataset instead of paying attention to the instructions, and then fail to generalize to new instructions at test time. We introduce two meta-tasks that help the model learn the association between the instruction, the data, and the task. In the Instruction selection task, the model is asked to select the instruction which corresponds to a given input-output pair. In the Instruction binary task, the model is asked to predict "yes" or "no" depending on whether the provided instruction leads to a given output from an input. We show an example of the instruction selection task in Figure 3.

None-of-the-above Options

For classification tasks, most tasks assume that the ground truth is always present in the candidate set, which is not the case for all unseen tasks. To solve this issue, we propose adding a NOTA ("none of the above") option to the classification tasks during training, as both a correct answer and a distractor, following Feng et al. (2020b), for 10% of the training instances. To add NOTA as a correct answer, we add "none of the above" as a classification label option, remove the gold label from the options, and set the output label as NOTA. To add NOTA as a distractor, we add NOTA to the classification labels list but keep the gold label as the output label.
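A minimal sketch of the NOTA augmentation, assuming a simple dict-based instance representation with illustrative field names:

```python
import random

NOTA = "none of the above"

def add_nota(instance, p_correct=0.5):
    """Insert NOTA into one classification instance, either as the correct
    answer or as a distractor. The paper applies this to ~10% of training
    instances; the 50/50 split between the two modes is an assumption."""
    options = list(instance["options"])
    if random.random() < p_correct:
        # NOTA as the correct answer: drop the gold label from the options
        options.remove(instance["label"])
        options.append(NOTA)
        label = NOTA
    else:
        # NOTA as a distractor: keep the gold label as the output
        options.append(NOTA)
        label = instance["label"]
    return {**instance, "options": options, "label": label}
```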
Model Details

Our models use an encoder-decoder architecture and are trained using a maximum likelihood training objective. We finetune the following two base models on the tasks from INSTRUCTDIAL:
1. T0-3B (Sanh et al., 2022), a model initialized from the 3B-parameter version of T5 (Lester et al., 2021). T0-3B is trained on a multitask mixture of general non-dialogue tasks such as question answering, sentiment detection, and paraphrase identification.
2. BART0 (Lin et al., 2022), a model with 406 million parameters (8x smaller than T0-3B) based on Bart-large (Lewis et al., 2020), trained on the same task mixture as T0-3B.

We name the BART0 model tuned on INSTRUCTDIAL as DIAL-BART0 and the T0-3B model tuned on INSTRUCTDIAL as DIAL-T0. DIAL-BART0 is our main model for experiments, since its base, BART0, has shown zero-shot performance comparable to T0 (Lin et al., 2022) despite being 8 times smaller, whereas the 3B-parameter model DIAL-T0 is large and impractical to use on popular affordable GPUs. We perform finetuning on these two models since both are instruction-tuned on general NLP tasks and thus provide a good base for building a dialogue instruction tuned model.

Training Details

For training data creation, we first generate instances from all datasets belonging to each task. We then sample a fixed maximum of N = 5000 instances per task. Each instance in a task is assigned a random task definition and prompt. We truncate the input sequences to 1024 tokens and the target output sequences to 256 tokens. We train DIAL-BART0 on 2 Nvidia 2080Ti GPUs using a batch size of 2 per GPU with gradient checkpointing. We train DIAL-T0 on 2 Nvidia A6000 GPUs using a batch size of 1 per GPU with gradient checkpointing. Additional implementation details are present in Appendix A.

Experiments and Results

We evaluate our models in multiple zero-shot and few-shot settings. We establish benchmark results for zero-shot unseen tasks evaluation (Section 5.1) and the response evaluation task (Section 5.2) and perform error analysis. Next, we perform zero-shot and few-shot experiments on three important dialogue tasks: intent detection, slot value generation, and dialogue state tracking (Section 5.3).

Zero-shot Unseen Tasks Evaluation

In this experiment, we test our models' zero-shot ability on tasks not seen during training.

Unseen Tasks for Zero-shot Setting

We perform evaluation on the test sets of the following 6 tasks not seen during training: eval selection, answer selection, relation classification, Dialfact classification, begins-with generation, and knowledge grounded generation; we also remove related wiki-based tasks from the training task set. The set of tasks used for training is presented in Table 10. We evaluate on the full test sets for Dialfact, relation, and answer classification, and sample 1000 instances for the rest of the tasks.

Setup and Baselines

We perform inference and evaluation on the 6 unseen tasks described in Section 5.1.1. We compare the following models and baselines:
• BART0 and T0-3B - the models that form the base for our models, trained on a mixture of non-dialogue general NLP tasks (described in Section 4.1).
• GPT-3 (Brown et al., 2020) - the Davinci version of GPT-3, tested using our instruction set.
• DIAL-BART0 and DIAL-T0 - our models described in Section 4.1.
• DB-Few - a few-shot version of DIAL-BART0, where
100 random training set instances of the test tasks are mixed with the instances of the train tasks.
• DB-Full - a version of DIAL-BART0 where 5000 instances per test task are mixed with the instances of the train tasks. This baseline serves as the upper bound for our models' performance.

We also experiment with the following ablations of DIAL-BART0:
• DB-no-base - uses Bart-large instead of BART0 as the base model.
• DB-no-instr - trained with no instructions or prompts. Task constraints and class options are still specified. We specify the task name instead of instructions to help the model identify the task.
• DB-no-nota - trained without the none-of-the-above option from Section 3.5.
• DB-no-meta - trained without the meta tasks from Section 3.4.

Results and Discussion

We present the results for the zero-shot experiments in Table 1 and report the accuracy metric for the Eval selection, Answer selection, Dialfact classification, and Relation classification tasks. For the Begins with task, we report BLEU2, ROUGE-L, and accuracy, defined as the proportion of responses that begin with the provided initial phrase. For Knowledge grounded generation we report the BLEU2 and ROUGE-L metrics along with F1 as defined in (Dinan et al., 2019c). For the generation tasks we also report the automatic metric GRADE (Huang et al., 2020), which has shown good correlation with human ratings of response coherence. For the GPT-3 baseline we report the metrics on 200 randomly sampled instances per task. We average scores obtained across the instructions and prompts. We notice the following general trends in our results.

Instruction tuning on INSTRUCTDIAL improves performance on unseen dialogue tasks: The DIAL-BART0 and DIAL-T0 models instruction tuned on INSTRUCTDIAL achieve better performance on all tasks compared to their base models BART0 and T0-3B. Notably, for the Eval selection, Relation classification, and Begins with generation tasks, our models perform about 3 times better than the base models. Our model also performs significantly better than GPT-3 on all tasks except Dialfact classification. In the case of the Answer selection task, the performance difference is smaller, since the baseline models are also trained on similar extractive and multi-choice question answering tasks. Relation and Dialfact classification are hard tasks for all models since there are no similar train tasks.

Larger models are not necessarily better across tasks: Experiments across varying model sizes show that while T0-3B and DIAL-T0 perform better on the Eval selection and Answer selection tasks and perform equivalently on the Begins with generation task, BART0 and DIAL-BART0 perform better on the rest of the unseen tasks. While DIAL-T0 is better at classification tasks, it has poor performance on generation compared to DIAL-BART0. We also observed that DIAL-T0 sometimes produces empty or repetitive outputs for generation tasks.

Few-shot training significantly improves performance: The DB-Few model, which incorporates 100 instances per test task in its training data, shows significant improvements in performance compared to its zero-shot counterpart DIAL-BART0. We see about 12-16% improvements on the Eval selection, Answer selection, and Dialfact classification tasks, and 30-50% improvement on the Begins with and Relation classification tasks.

Full-shot training can improve performance across multiple tasks: The DB-Full model achieves high performance across all test tasks. The full-shot performance of DIAL-BART0 on the Dialfact and Relation classification tasks is near state-of-the-art without using the full training datasets.

Meta tasks and NOTA are important for better generalization: We see a large performance drop on unseen classification tasks when the meta tasks (see Section 3.4) are removed. This shows that meta tasks help the model develop better representations and understanding of natural language instructions. DB-no-nota shows a slight performance drop on the classification tasks, indicating the NOTA objective is helpful, but not crucial, for performance.

Pretraining on general NLP tasks helps dialogue instruction tuning: The DB-no-base model shows a large performance drop on the Eval selection and Answer selection tasks, and a small drop on the other test tasks. We conclude that instruction tuning for general NLP tasks helps dialogue instruction tuning.

Using instructions leads to better generalization: DB-no-instr shows worse performance than DIAL-BART0 on all tasks, especially on the Eval selection, Answer selection, and Relation classification tasks. This indicates that training with instructions is crucial for zero-shot performance on unseen tasks.
Training on more seen tasks improves generalization on unseen tasks: In Figure 4 we show the impact of varying the number of seen tasks on the performance on unseen tasks. We adopt the train-test task split from Section 5.1. We observe that the performance improves sharply up to 20-25 tasks and then keeps steadily increasing with each new task. This indicates that increasing the number of tasks can lead to better zero-shot generalization and that scaling to more tasks may lead to better instruction-tuned models.

Analysis

Sensitivity to instruction wording: To analyze the sensitivity of our models to instruction wording, we break down the evaluation metrics per unique instruction used during inference for the DIAL-BART0 model.

Table 2: Spearman correlation of model predictions with human ratings. Bold and underlined scores represent the evaluation sets on which our model performs the best and second best, respectively. We also present the macro average scores. TU, PU, PZ, DZ, CG, DGU, DGR, EG, FT, and FD are abbreviations for TopicalChat-USR, PersonaChat-USR (Mehri and Eskenazi, 2020b), PersonaChat-Zhao (Zhao et al., 2020a), DailyDialog-Zhao (Zhao et al., 2020a), ConvAI2-GRADE (Huang et al., 2020), DailyDialog-Gupta (Gupta et al., 2019), DailyDialog-GRADE (Huang et al., 2020), Empathetic-GRADE (Huang et al., 2020), FED-Turn, and FED-Dial (Mehri and Eskenazi, 2020a). DIAL-T0 is ranked first or second best in the majority of the evaluation sets.

Apart from the unseen task set adopted for our experiments in Section 5.1.1, we tried other seen-unseen task configurations and found that both our models and the baseline models cannot perform certain tasks such as infilling a missing utterance, recovery response generation, and ends-with response generation in a zero-shot manner. However, the models could quickly learn these tasks when trained on a few task instances.

In Table 7 of Appendix B we provide a sample conversation, various instructions for that conversation, and the outputs generated by DIAL-BART0 based on the specified instructions.
Zero-shot Automatic Response Evaluation

Development of automatic dialogue metrics that show high correlations with human judgements is a challenging and crucial task for dialogue systems. Automated metrics such as BLEU (Papineni et al., 2002) and METEOR (Banerjee and Lavie, 2005) correlate poorly with human judgement (Gupta et al., 2019). In this experiment, we test our model's zero-shot automatic evaluation capabilities through the Eval Relevance task. We use the evaluation ratings released in the DSTC-10 automatic evaluation challenge (Chen et al., 2021b), which consists of 65,938 context-response pairs along with corresponding human ratings aggregated across various evaluation sets. We train a version of DIAL-T0 on tasks excluding any eval tasks (shown in Table 10). Given a dialogue context and a candidate response, we instruct the model to predict "yes" if the response is relevant to the context, and "no" otherwise. We normalize the probability of "yes" as p(yes)/(p(yes) + p(no)). We calculate the Spearman correlation of the model's prediction with the human ratings for relevance provided in the DSTC-10 test sets, and present the results in Table 2. We compare our model with the reference-free models studied in Yeh et al. (2021). DIAL-T0 is ranked first or second in the majority of the evaluation datasets. Our model learns coherence from the variety of tasks it is trained on and demonstrates high zero-shot dialogue evaluation capabilities.

Zero-shot and Few-shot Dialogue Tasks

We test the zero-shot and few-shot abilities of our models on three important dialogue tasks: intent prediction, slot filling, and dialogue state tracking.

Intent Prediction

Intent prediction is the task of predicting an intent class for a given utterance. We conduct few-shot experiments on the Banking77 benchmark dataset (Casanueva et al., 2020), which contains 77 unique intent classes. Models are trained on 10 instances per test intent class. We compare our model DIAL-BART0 with the ConvERT models (Casanueva et al., 2020), which are BERT-based dual-encoder discriminative models, and PPTOD (Su et al., 2022b), a model pre-trained on multiple task-oriented dialogue datasets. For this experiment, DIAL-BART0 is pretrained on the training task mixture from Section 5.1.1, which includes a few intent detection datasets but excludes the Banking77 dataset.

Table 3: Intent prediction accuracy on the BANKING77 corpus (Casanueva et al., 2020). Models in the first section of the table are trained in a few-shot setting with 10 instances per intent. Models in the second section are tested in a zero-shot setting.

Model                                      Accuracy
ConvERT (Casanueva et al., 2020)           83.32
ConvERT + USE (Casanueva et al., 2020)     85.19
Example-Driven (Mehri and Eric, 2021)      85.95
PPTOD base (Su et al., 2022b)              82.81
PPTOD large (Su et al., 2022b)             84.12
DIAL-BART0 (Ours)                          84.30
BART0 (zero-shot)                          14.72
DIAL-BART0 (Ours, zero-shot)               58.02

The results in Table 3 show that our model attains competitive performance in the few-shot setting without necessitating complex task-specific architectures or training methodology. It is notable that DIAL-BART0 performs better than PPTOD, which uses about two times more parameters and is trained similarly to our model using a Seq2Seq format. We also note that while the BART0 model struggles in the zero-shot setting, DIAL-BART0 shows greatly improved performance.
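Before moving to slot filling, here is a minimal sketch of the yes/no relevance scoring from Section 5.2, assuming a HuggingFace-style encoder-decoder model. Reading the first-token logits this way, and the single-token treatment of "yes"/"no", are our simplifying assumptions rather than the authors' exact implementation.

```python
import torch

@torch.no_grad()
def relevance_score(model, tokenizer, formatted_input):
    """Return p(yes) / (p(yes) + p(no)) for the first generated token."""
    enc = tokenizer(formatted_input, return_tensors="pt", truncation=True)
    # Start the decoder and read the distribution over the first token
    start = torch.tensor([[model.config.decoder_start_token_id]])
    logits = model(**enc, decoder_input_ids=start).logits[0, -1]
    probs = logits.softmax(-1)
    # Assumes "yes"/"no" are single tokens under the model's vocabulary
    yes_id = tokenizer("yes", add_special_tokens=False).input_ids[0]
    no_id = tokenizer("no", add_special_tokens=False).input_ids[0]
    return (probs[yes_id] / (probs[yes_id] + probs[no_id])).item()
```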
Slot Filling

Slot filling is the problem of detecting slot values in a given utterance. We carry out zero-shot experiments on the Restaurant8k corpus (Coope et al., 2020a) and few-shot experiments on the DSTC8 dataset (Rastogi et al., 2020a), demonstrating significant performance gains over prior work. In the zero-shot experiments, the training set includes several slot filling datasets but excludes the Restaurant8k dataset used for testing. Table 4 shows that our approach attains a 36.9 point improvement in zero-shot slot filling. This result especially highlights the efficacy of instruction tuning at leveraging large-scale pretrained language models to generalize to unseen tasks. We present few-shot slot filling results on the DSTC8 benchmark in Table 5.

Dialogue State Tracking

We train on 1% and 5% splits of MultiWOZ for 40 epochs with a learning rate of 5e-5. In Table 6 we present few-shot dialogue state tracking results on the MultiWOZ test set. We find that our model obtains 29.2 and 38.1 joint goal accuracy on the 1% and 5% training data splits, respectively. Our results demonstrate that our model performs well on few-shot dialogue state tracking and achieves competitive results against PPTOD, which is twice the size of our model.

Conclusion

We propose INSTRUCTDIAL, an instruction tuning framework for dialogue, which contains multiple dialogue tasks created from openly available dialogue datasets. We also propose two meta-tasks to encourage the model to pay attention to instructions. Our results show that models trained on INSTRUCTDIAL achieve good zero-shot performance on unseen tasks (e.g., dialogue evaluation) and good few-shot performance on dialogue tasks (e.g., intent prediction, slot filling). We perform ablation studies showing the impact of using an instruction tuned base model, model size/type, increasing the number of tasks, and incorporating our proposed meta tasks. Our experiments reveal that instruction tuning does not benefit all unseen test tasks and that improvements can be made in instruction wording invariance and task interference. We hope that INSTRUCTDIAL will facilitate further progress on instruction-tuning systems for dialogue tasks.
Limitations

Our work is the first to explore instruction tuning for dialogue and establishes baseline performance for a variety of dialogue tasks. However, there is room for improvement in the following aspects: 1) Unlike a few prior works, the instructions and prompts used in this work are not crowdsourced and are limited in number. Furthermore, our instructions and tasks are only specified in the English language. Future work may look into either crowdsourcing or automatic methods for augmenting the set of instructions in terms of both language diversity and quantity. 2) In our experiments, instruction tuning does not show significant improvements in the zero-shot setting on a few tasks, such as relation classification and infilling missing utterances. Future work can investigate why certain tasks are more challenging than others for zero-shot generalization. Furthermore, the zero-shot performance of our models on many tasks is still far from the few-shot and full-shot performance on those tasks. We hope that INSTRUCTDIAL can lead to further investigations and improvements in this area. 3) We observed a few instances of task interference in our experiments. For example, the set of tasks used for zero-shot automatic response evaluation, as listed in Table 10, is different from and smaller than the set of tasks used in our main experiments in Section 5.1.1. We found that incorporating a few additional tasks led to a reduction in the performance on zero-shot automatic response evaluation. Furthermore, training on multiple tasks can lead to task forgetting. To address these issues, future work can take inspiration from work related to negative task interference (Wang et al., 2020a; Larson and Leach, 2022), transferability (Vu et al., 2020; Wu et al., 2020b; Xing et al., 2022), and lifelong learning (Wang et al., 2020b). 4) Our models are sensitive to the wording of the instructions, especially in zero-shot settings, as discussed in Section 5.

Ethics and Broader Impact

Broader impact and applications: Our framework leverages instruction tuning on multiple dialogue tasks, allowing multiple functionalities to be quickly implemented and evaluated in dialogue systems. For example, task-oriented dialogue tasks, such as slot detection, and domain-specific tasks, such as emotion detection, can be added and evaluated against state-of-the-art dialogue systems. This enables users to diagnose their models on different tasks and expand the abilities of multi-faceted dialogue systems, which can lead to richer user interactions across a wide range of applications. Our framework allows training models below the billion-parameter range, making them more accessible to the research community.

Potential biases: Current conversational systems suffer from several limitations and lack empathy, morality, discretion, and factual correctness. Biases may exist across the datasets used in this work, and those biases can propagate during inference into the unseen tasks. Few-shot and zero-shot methods are easier to train, and their use can lead to a further increase of both the benefits and the risks of models. To mitigate some of those risks, we have included tasks and datasets in our framework that encourage safety, such as ToxiChat for the toxic response classification task and SaFeRDialogues for the recovery response generation task, and that improve empathy, such as EmpatheticDialogues.
Appendix

A Additional Implementation Details

Data Sampling: For training data creation, we first generate instances from all datasets belonging to each task. Since the number of instances per task can be highly imbalanced, we sample a fixed maximum of N instances per task. In our main models and experiments, we set N = 5000. Each instance in a task is assigned a random task definition and prompt. We truncate the input sequences to 1024 tokens and the target output sequences to 256 tokens.

Implementation Details: Our models are trained for 3 epochs with a learning rate of 5e-5 using an Adam optimizer (Kingma and Ba, 2015) with linear learning rate decay. For our main experiments in Table 1, we perform checkpoint selection using a validation set created from the train tasks. For the rest of the experiments, we do model selection using the validation sets. We use the HuggingFace Transformers library for training and inference and the DeepSpeed library for improving training efficiency. We train DIAL-BART0 on 2 Nvidia 2080Ti GPUs using a batch size of 2 per GPU and an effective batch size of 72 with gradient checkpointing. We train DIAL-T0 on 2 Nvidia A6000 GPUs using a batch size of 1 per GPU and an effective batch size of 72 with gradient checkpointing. For all classification tasks, we perform greedy decoding, and for all generation tasks, we perform top-p sampling with p = 0.7 and the temperature set to 0.7. The repetition penalty is set to 1.2. In Table 1, for DIAL-BART0 and DIAL-T0, we report the results over three different training runs, where each run is based on a new sample of training data.

Zero-shot Automatic Evaluation Implementation Details: For zero-shot automatic evaluation, we calculate the Spearman correlation of the model's prediction with the human ratings for relevance provided in the DSTC-10 test sets. There is no consistent "relevance" or "coherence" rating field present across the evaluation datasets. We therefore calculate the correlation with the ratings if a rating exists in any of the following fields: "overall", "turing", "relevance", and "appropriateness".

Table 7: A sample conversation followed by instructions for multiple tasks for that conversation, and the outputs generated based on the specified instructions. Instruction tuning allows performing multiple tasks on an input by specifying task-specific instructions and prompts.

B Sample Conversation and Instructions

In Table 7 we provide a sample conversation followed by instructions for multiple tasks for that conversation, and the outputs generated by DIAL-BART0 based on the specified instructions. Through this example we illustrate that instruction tuning allows performing multiple tasks on an input by specifying task-specific instructions.

C Datasets Used in Tasks

In Table 9 we present the list of tasks with the datasets used in each task.

D Configuration of Experiments

In Table 10 we provide the configurations of the experiments, that is, the tasks used for training in each experiment.

Table 10: List of experiments and their base models. The tasks listed in the right column are all the tasks a base model was trained with for the corresponding experiment.
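As a small illustration of the decoding configuration in Appendix A, assuming a HuggingFace seq2seq checkpoint (the checkpoint name and the example input below are placeholders):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("dial-bart0")        # placeholder name
model = AutoModelForSeq2SeqLM.from_pretrained("dial-bart0")

text = ("[CONTEXT] Hi! [ENDOFTURN] Hello, how can I help? [ENDOFDIALOGUE] "
        "[QUESTION] Generate the next response.")
inputs = tok(text, return_tensors="pt", truncation=True, max_length=1024)

# Generation tasks: top-p sampling, as described in Appendix A
out = model.generate(**inputs, do_sample=True, top_p=0.7, temperature=0.7,
                     repetition_penalty=1.2, max_new_tokens=256)
# Classification tasks instead use greedy decoding:
# out = model.generate(**inputs, do_sample=False, max_new_tokens=256)
print(tok.decode(out[0], skip_special_tokens=True))
```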
Figure 1: We investigate instruction tuning on dialogue tasks. Instruction tuning involves training a model on a mixture of tasks defined through natural language instructions. Instruction tuned models exhibit zero-shot or few-shot generalization to new tasks.

Figure 2: INSTRUCTDIAL task taxonomy. Green represents classification tasks and orange represents generation tasks.

Figure 3: Instruction-based input-output samples for three tasks. Each task is formatted as a natural language sequence. Each input contains an instruction, an instance, optional task-dependent inputs (e.g., class options in relation classification), and task-specific prompts. The instructions and the input instances are formatted using special tokens such as [CONTEXT] and [QUESTION]. The Instruction Selection task is a meta-task described in Section 3.4.

Figure 4: The model's performance on unseen tasks improves with the number of seen tasks during training. We report average accuracy across Eval Selection, Answer Selection, Relation Classification, and Dialfact Classification, and average RougeL scores for Knowledge Grounded Generation and Begins With Generation.

Table 1: Zero-shot evaluation on unseen tasks. B-2 stands for BLEU2, R-L for RougeL, and GR for the GRADE metric. Here ES stands for Eval Selection, AS for Answer Selection, RC for Relation Classification, DC for Dialfact Classification, BW for Begins With, and KG for Knowledge Grounded generation. DB-Few and DB-Full are variants of DIAL-BART0. Our models DIAL-BART0 and DIAL-T0 outperform the baseline models and their ablated versions.

Table 4: Zero-shot slot filling results on the Restaurant8k corpus.
Table 5: Few-shot slot filling experiments on the DSTC8 datasets, which span four domains (buses, events, homes, rental cars) and involve training on 25% of the training dataset. The set of tasks used for training the model is presented in Table 10.

Table 6: Joint goal accuracy for dialogue state tracking in the few-shot setting on 1% and 5% of the MultiWOZ data. We see significant improvement compared to the baseline in the few-shot setting.
7,994
2022-05-25T00:00:00.000
[ "Computer Science", "Linguistics" ]
Application of Viscous Dampers in Seismic Rehabilitation of Steel Moment Resisting Frames

In structural seismic rehabilitation, the structural capacity spectrum curve can be enhanced by increasing the stiffness and strength of the structure, while the application of energy dissipation systems such as dampers can decrease the structural demand spectrum curve. Dampers are basically used to mitigate the structural response and decrease the damage to the main structural elements under severe earthquakes through energy dissipation. In the present paper, the aim is to evaluate the seismic behavior of two conventional steel moment resisting frames having 4 and 8 stories, incorporating seismic strength imperfection, equipped with viscous dampers. OpenSees and nonlinear time-history analysis incorporating seven seismic records have been used to determine the frame response. The results revealed that the seismic response of the rehabilitated frames is considerably improved: the maximum roof displacement, the maximum story drift, and the maximum floor accelerations and shears decline in the damper-equipped frames, and the seismic performance of most rehabilitated frame elements under an earthquake with a return period of 475 years is upgraded to the life-safety performance level.

Introduction

Structures designed based on current codes are assumed to absorb seismic energy through yielding or failure of the materials. For instance, the seismic energy in steel moment resisting frames is absorbed by the development of plastic hinges in beams and columns. This can cause damage to the structural elements and lead to overall structural damage. Concentrating the seismic energy dissipation in specific devices such as dampers can therefore reduce the damage to the main structural elements and facilitate the operation of the structure after the occurrence of an earthquake. Viscous dampers are energy dissipating systems that are widely used in structural design and rehabilitation. In the present research, the aim is to investigate the application of viscous dampers in the seismic rehabilitation of steel moment resisting frames. For this study, OpenSees, one of the best software packages for nonlinear structural analysis under seismic loads, has been used, and seven earthquake records are used to investigate the structural behavior.

Viscous Damper

A liquid viscous damper is generally composed of a piston and a cylinder, in which the viscous liquid in the cylinder is compressed by the piston. Applying load to the system gradually forces the viscous liquid through small openings in the piston, which can dissipate a large amount of energy. The relationship between the viscous damper force and the velocity is as follows:

F_d = C V^α

where F_d is the damper force, C is the damping coefficient, V is the relative velocity between the two ends of the damper, and α is the velocity exponent (0.3-1). If α = 1, the relationship between the velocity and the damping force is linear; otherwise, it is considered nonlinear.
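A minimal numeric sketch of this force-velocity law is shown below; the damping coefficient and exponent are illustrative placeholders, not the values used for the studied frames.

```python
import numpy as np

def damper_force(v, c, alpha):
    """Viscous damper force F_d = C * sign(V) * |V|**alpha; the sign term
    keeps the force opposing the motion when alpha != 1."""
    v = np.asarray(v, dtype=float)
    return c * np.sign(v) * np.abs(v) ** alpha

v = np.linspace(-0.5, 0.5, 5)                 # relative end velocities, m/s
print(damper_force(v, c=1000.0, alpha=1.0))   # linear damper (alpha = 1)
print(damper_force(v, c=1000.0, alpha=0.3))   # strongly nonlinear damper
```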
Also, the structures are residential buildings located in Tehran on soil type II according to the Iranian code of practice for seismic resistant design of buildings (Standard No. 2800). The structural system is a conventional moment resisting frame. In the present research, two-dimensional models were considered; the plan and section details of the four- and eight-story frames are shown in Fig 3 and Fig 4, respectively. Nonlinear Dynamic Analysis Seven records (the Kern County, San Fernando, Landers, Northridge, Tabas, Avaj and Bam far-fault records) were used. All acceleration records correspond to soil type II. The records were scaled in accordance with the acceleration response spectrum and the method presented in the Iranian code of practice for seismic resistant design of buildings (Standard No. 2800, 3rd edition). Because the selected structures have different periods, a scale factor must be defined for each; the maximum spectral acceleration of the selected records is therefore 0.48g for the four-story frame and 0.53g for the eight-story frame. Seismic Evaluation This section presents the key outputs of the seismic evaluation of the studied frames, such as roof displacement, story acceleration and story drifts. Using the seven earthquake records and following the rehabilitation criteria manuals, the average maximum responses were used to estimate the response of the frames. Maximum Roof Displacement Fig 5 (a: 4-story frame; b: 8-story frame) shows the maximum roof displacement of the frames with and without dampers under the selected acceleration records. The average maximum roof displacement over the seven records decreased by 36% and 39% in the damper-equipped four- and eight-story frames, respectively, relative to the frames without dampers. Maximum Story Acceleration Fig 6 (a: 4-story frame; b: 8-story frame) shows the maximum story accelerations of the frames with and without dampers. The maximum acceleration decreased by 24% and 25% in the four- and eight-story frames, respectively. Consequently, the damage caused by nonstructural elements being thrown during an earthquake, as well as the input energy, is reduced. Story Drift Fig 7 (a: 4-story frame; b: 8-story frame) compares the average story drifts of the rehabilitated and base frames under the seven selected records. The maximum story drift decreased by 56% and 64% in the rehabilitated four- and eight-story frames, respectively. The drift distribution over the stories of the damper-equipped structures was also nearly uniform, indicating that the damage at the different stories was reduced and the damage distribution over the structure was even. Conclusions The present paper studied the influence of viscous dampers on the seismic response of steel moment resisting frames. The results are summarized as follows. • Adding viscous dampers to the studied frames reduced the maximum roof displacement and the permanent displacement: the average maximum roof displacement over the seven records decreased by 36% and 39% in the damper-equipped four- and eight-story frames, respectively. • The dampers reduced and evened out the story drifts: the maximum story drift decreased by 56% and 64% in the rehabilitated four- and eight-story frames, respectively.
• The maximum roof acceleration and story accelerations were reduced by the dampers: the maximum acceleration in the damper-equipped four- and eight-story frames decreased by 24% and 25%, respectively. • The story shear and base shear of the rehabilitated frames were reduced owing to the increased damping of the structure.
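The damper force law quoted in the Viscous Damper section above, F_d = C V^alpha, is easy to explore numerically. The following Python sketch is purely illustrative: the damping coefficient and the velocity values are made-up numbers, not parameters taken from the studied frames.

```python
import numpy as np

def viscous_damper_force(C, V, alpha):
    """Fluid viscous damper force, F_d = C * sign(V) * |V|**alpha.

    C     -- damping coefficient (e.g., kN*(s/m)**alpha)
    V     -- relative velocity between the damper ends (m/s)
    alpha -- velocity exponent; 1.0 gives a linear damper,
             0.3-1.0 covers the nonlinear range cited above.
    """
    V = np.asarray(V, dtype=float)
    # sign/abs keeps the fractional exponent well defined for V < 0
    return C * np.sign(V) * np.abs(V) ** alpha

# Compare a linear and a nonlinear damper on a small velocity grid
velocities = np.linspace(-0.5, 0.5, 5)
print(viscous_damper_force(C=1000.0, V=velocities, alpha=1.0))
print(viscous_damper_force(C=1000.0, V=velocities, alpha=0.3))
```

Note how the nonlinear damper (small alpha) delivers relatively more force at low velocities and saturates at high velocities, which is why low exponents are popular in seismic applications.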
1,596
2016-01-01T00:00:00.000
[ "Engineering", "Geology" ]
Molecular Dynamics Simulation for the Demulsification of O/W Emulsion under Pulsed Electric Field A bidirectional pulsed electric field (BPEF) is considered a simple and novel technique for demulsifying O/W emulsions. In this paper, molecular dynamics simulation was used to investigate the deformation and aggregation behavior of oil droplets in O/W emulsion under BPEF. Then, the effect of a surfactant (sodium dodecyl sulfate, SDS) on the demulsification of the O/W emulsion was investigated. The simulation results showed that the oil droplets deformed and moved along the direction of the electric field, and that SDS molecules can shorten the aggregation time of oil droplets in O/W emulsion. The electrostatic potential distribution on the surface of the oil droplet, the elongation length of the oil droplets, and the mean square displacement (MSD) of the SDS and asphaltene molecules under the electric field were calculated to explain the aggregation of oil droplets under the simulated pulsed electric field. The simulation also showed that the opposite charges on the adjacent areas of the two oil droplets have no obvious effect on their aggregation; rather, the van der Waals interactions between the oil droplets were the main factor in aggregation. Introduction With the increase of oil production activities, oil pollution, particularly oily wastewater, has become a major environmental concern. Enormous quantities of oily wastewater are generated by different industrial processes all around the world, including petroleum refining, industrial discharges, petroleum exploration, food production operations, etc. [1][2][3][4][5]. The oils in wastewater include fats, lubricants, cutting oils, heavy hydrocarbons and light hydrocarbons [6]. These oils can be further divided into free oils and emulsified oils. The free oils in wastewater are easier to separate by physical techniques such as gravity separation and skimming [7,8]. However, the emulsified oil droplets are more difficult to handle because of their high stability in water [9,10]. A widely used separation technique for emulsified oils involves the addition of chemicals, such as ferric or aluminum salts, to induce colloidal destabilization [1,5]. However, this approach is expensive, and the chemicals dissolve in the water or form settling sludge after treatment, which is undesirable from the perspective of green chemistry. An alternative approach is the use of an electric field, especially for the dehydration of crude oil [11][12][13][14][15][16][17]. Electric field demulsification has practical advantages such as requiring no added chemicals, simple equipment and a short process flow. It can achieve physical separation of oil and water mixtures and recover oily substances to a certain extent, without pollution from added chemicals [18,19]. The demulsification mechanism of W/O emulsions by electric fields has been widely researched: demulsification is attributed to the droplets' polarization and elongation under the electric field, which induce dipole-dipole interactions that lead to aggregation [20][21][22]. However, the use of an electric field to separate oil and water in O/W emulsions has rarely been studied. It is generally believed that electric field demulsification does not work for O/W emulsions, because water is conductive and the electrical energy dissipates easily in aqueous solution [5]. Ichikawa et al.
[23] investigated the demulsification of a dense O/W emulsion in a low-voltage DC electric field and found that numerous gas bubbles formed and surged in the emulsion during the demulsification process. Furthermore, Hosseini et al. [24] applied a non-uniform electric field to demulsify a benzene-in-water emulsion; bubbles were also generated in the emulsion when the electric field was introduced. These phenomena are attributed to an excessively large electric current in the emulsion, which leads to the electrolysis of water. To resolve this problem, Bails et al. [25,26] applied a pulsed electric field (PEF) to W/O emulsion and found that the electric current generated by a pulsed electric field is small even at high voltage. After this, Ren et al. [27] applied a bidirectional pulsed electric field (BPEF) to separate an O/W emulsion prepared by mixing 0# diesel oil and SDS solution. They found that BPEF induced the aggregation of oil droplets and had a distinct demulsification effect on O/W emulsion with surfactant. The demulsification effect under different BPEF voltages, frequencies and duty cycles was investigated by evaluating the oil content and turbidity of the clear liquid after demulsification. Moreover, they put forward the hypothesis that charges on the oil drop surface redistribute under BPEF to promote the mutual attraction and coalescence of oil drops. However, the mechanism of oil droplet movement and aggregation in O/W emulsion under BPEF has not been well studied at the molecular level, still less the effect of surfactants on demulsification. Molecular dynamics (MD) simulation is considered a useful tool for microscopic analysis of the dynamic behavior of nanodroplets based on the basic laws of classical mechanics [28]. Chen et al. [29] used MD simulation to study the influence of a direct-current electric field on the viscosity of waxy crude oil and the microscopic properties of paraffin; they found that the electric field strength affects the distribution of oil molecules. He et al. [30] simulated the aggregation process and behavior of charged droplets under different pulsed electric field waveforms by MD simulation and discovered that the deformation of droplets is greatly affected by the waveform. Moreover, additives in an emulsion have an important influence on its emulsifying stability [31][32][33]. For example, an experimental study found that BPEF had a distinct demulsification effect on O/W emulsion with SDS surfactant [27]. However, to the best of our knowledge, there has been no report at the microscopic level on the demulsification of O/W systems with SDS surfactants under the action of a BPEF. In addition, the crude oil compositions used in previous simulations of crude oil behavior in electric fields were relatively simple [34]. Therefore, it is necessary to study the movement and coalescence behavior of oil droplets in O/W emulsion under BPEF by MD simulation. We believe that this will provide a theoretical basis for the application of BPEF in O/W emulsion demulsification. In this paper, we investigated the movement and aggregation behavior of crude oil droplets in O/W emulsion with differing contents of SDS under BPEF. First, the structural changes of the oil droplets in each system and their collision times were analyzed to determine the behavioral differences between oil droplets with and without SDS.
Second, the centroid distance between the oil droplets, the average elongation length of the oil droplets and the MSD of the SDS and asphaltene molecules were calculated to explain why SDS can reduce the demulsification time. Finally, we investigated the aggregation behavior of oil droplets after the shut-off of BPEF and discussed the aggregation mechanism of oil droplets under BPEF. Emulsified Crude Oil Droplet It has been suggested that SDS increases the hydrophilicity of oil droplets by increasing their hydrophilic surface area [34]. To study the surface condition of the oil droplets with different SDS contents in each system, the solvent-accessible surface area (SASA) was calculated and is shown in Figure 1. With an increasing number of SDS molecules, both the hydrophilic and the hydrophobic surface areas of the crude oil droplets increased, and the ratio of hydrophilic to hydrophobic area also increased significantly. Therefore, adding SDS molecules increases the hydrophilic surface area of the oil droplets disproportionately: the greater the number of SDS molecules, the greater the hydrophilic surface area. Dynamic Behavior of Oil Droplets under BPEF To study the behavior of oil droplets under the electric field, BPEF with E = 0.50 V/nm was applied in the z-direction of all systems. Figure 2 displays the conformational changes of the oil droplets with differing SDS content during the electric field output stage. As can be seen from Figure 2, all oil droplets gradually deformed under the electric field, elongated in the z-direction and migrated in the direction opposite to the electric field. Moreover, SDS and asphaltene molecules concentrated at the end of the oil droplet, while excess SDS molecules were distributed over the entire surface of the deformed oil droplets (Systems IV and V). To see the distribution of SDS and asphaltene clearly, the oil droplets of System II were partly zoomed in on. The SDS and asphaltene molecules aggregated at the head of the oil droplet in its direction of motion, with the negative sulfonic acid groups of SDS and the carboxyl groups of the asphaltene molecules facing opposite to the electric field. We therefore concluded that the polar SDS and asphaltene molecules guide the movement of the oil droplets under the electric field. In Figure 2 we also noted that the states of the two oil droplets in the five systems differed at 400 ps: in Systems II and V the two oil droplets had collided by 400 ps, whereas in Systems I, III and IV they had not. To investigate the impact of SDS concentration on the electric-field-driven coalescence of oil droplets, the collision times are summarized in Figure 3. A collision was deemed to occur when the minimum distance between the two oil droplets was less than 0.35 nm.
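The 0.35 nm collision criterion just stated reduces to a minimum-distance test over the two droplets' atom coordinates. The minimal NumPy sketch below is illustrative only: it assumes unwrapped coordinates already extracted from the trajectory (for example with a reader such as MDAnalysis) and, for brevity, ignores periodic images; all names are placeholders, not the authors' analysis code.

```python
import numpy as np

def min_interdroplet_distance(coords_a, coords_b):
    """Minimum atom-atom distance (nm) between two droplets,
    given (N,3) and (M,3) coordinate arrays in nm."""
    diff = coords_a[:, None, :] - coords_b[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1)).min()

def collision_time(frames_a, frames_b, times_ps, cutoff=0.35):
    """First time (ps) at which the minimum droplet-droplet distance
    drops below the cutoff; None if no collision occurs.
    frames_a/frames_b: sequences of per-frame (N,3) coordinate arrays."""
    for t, a, b in zip(times_ps, frames_a, frames_b):
        if min_interdroplet_distance(a, b) < cutoff:
            return t
    return None
```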
It was found that the addition of SDS molecules can reduce the collision time of the oil droplets, especially at the 6.2% SDS concentration. Surface Charge Distribution The electrostatic potential surface of the oil droplet can reflect its charge redistribution under the electric field. Considering that the two oil droplets in each system are identical, only one droplet's electrostatic potential was calculated. The electrostatic potential diagrams were obtained for the different systems at the initial and specific times during the simulation (Figure 4). It can be seen that, under the influence of the hydrophilic and negatively charged asphaltene molecules on the surface, some areas of the oil droplets appear electronegative (blue areas) before the electric field is applied, and the electronegative area increases with increasing SDS content. However, the electrostatic potential at the surface of the deformed oil droplet changed noticeably under the electric field: one end of the ellipsoidal oil droplet becomes electronegative, facing opposite to the electric field, and the other end becomes electropositive, as in Systems I and II. This revealed that the redistribution of the charge of the oil droplets under the electric field resulted in the droplets' polarization.
This phenomenon was consistent with the experimental observation that the charge of the oil droplets under the electric field is positive in the direction of the electric field and negative in the opposite direction. Meanwhile, we found that for Systems III, IV and V the polarization of the oil droplets was not obvious. To explain this, the number density of SDS in the oil droplets under the electric field was analyzed at the same time. In Figure 5, we defined the middle of the oil drop as 0 and the moving direction as positive. It can be seen that with an increase in SDS, the surfactant tended to distribute over the surface of the whole deformed oil droplet, which further explains why the electronegative area of the deformed oil droplet increased with increasing SDS. The dynamic behavior of the SDS and asphaltene molecules and the electrostatic potential distribution on the surface of the oil droplets showed that the mobile negative charges on the oil droplets moved in the direction opposite to the applied electric field. However, what causes two oil droplets moving in the same direction to collide, and how this relates to SDS content, remained unclear. Figure 6 presents the centroid distance between the two oil droplets and their average elongation length le along the z direction, from the application of the electric field to the collision of the oil droplets in each system. We found that even when the two oil droplets were deformed under the electric field, the centroid distance between them remained approximately 10 nm in all systems. Because the two oil droplets have the same composition in each system, they moved in the direction opposite to the electric field at almost the same speed and therefore kept almost their initial centroid distance. However, the average elongation lengths le of the two oil droplets in the five systems differed significantly. In all systems the oil droplets started at about 6 nm in diameter and their length increased with time; the average elongation length le exceeded 10 nm near the collision time point. This means that when the oil droplets are stretched enough, the two droplets connect head to end; that is, a collision occurs. Meanwhile, we noted in Figure 6b that the order of the growth rate of the average elongation length le2 from largest to smallest is System II, System V, System III ≈ System IV and System I, which is similar to the trend in the collision times of the systems studied in this work. Therefore, for an O/W emulsion system with a uniform distribution of oil droplets, the demulsification collision time in the electric field is significantly affected by SDS, and adding an appropriate SDS surfactant to O/W systems can effectively reduce the power consumption. As discussed above, SDS and asphaltene molecules guide the entire oil droplet to move in the direction opposite to the electric field. It was predicted that the average elongation length of the oil droplets in the electric field is related to the diffusivity of the SDS and asphaltene molecules.
Thus, we calculated the MSD of the SDS and asphaltene molecules for the five systems in Figure 7. It was found that the order of the diffusion of SDS and asphaltene molecules from largest to smallest was System II, System V, System IV, System III and System I; this was consistent with the order of the average elongation lengths of the oil droplets under the electric field. We suggest that in System I the asphaltene molecules interacted more strongly with the surrounding oil molecules because of their structure, which decreased their mobility under the electric field, whereas the negatively charged SDS molecules are smaller and highly mobile in the electric field, thus increasing the overall mobility. This does not mean, however, that the greater the SDS content of the oil droplets, the greater the mobility of the negatively charged molecules. Therefore, the SDS content of the oil droplets has great significance for the demulsification effect. Meanwhile, we calculated the root-mean-square fluctuation (RMSF) of the oil droplets during the electric field output stage (Supplementary Materials: Figure S1). Comparing the RMSF of the three systems, we found that the fluctuations of System II and System IV were stronger than those of System I. The addition of SDS could therefore have accelerated the movement of the oil droplets, consistent with the MSD results.
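For reference, the MSD used above has a simple direct form. The sketch below computes a single-origin MSD from an array of unwrapped coordinates; production tools such as gmx msd additionally average over multiple time origins, so this illustrates the quantity rather than reproducing the authors' analysis, and the 10 ps frame spacing merely echoes the trajectory-saving interval stated later in the Simulation Details.

```python
import numpy as np

def msd(positions, dt_ps=10.0):
    """Single-origin mean square displacement.

    positions -- (n_frames, n_atoms, 3) array of unwrapped coordinates (nm)
    dt_ps     -- time between saved frames (ps)
    Returns lag times (ps) and MSD (nm^2) averaged over atoms."""
    disp = positions - positions[0]                # displacement from frame 0
    msd_t = (disp ** 2).sum(axis=-1).mean(axis=-1) # |dr|^2, atom-averaged
    lags = np.arange(len(positions)) * dt_ps
    return lags, msd_t
```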
Aggregation Behavior of Oil Droplets The purpose of electric field demulsification is to aggregate dispersed oil droplets to achieve oil/water separation. Conformations of the oil droplets at the beginning of the collision were selected as the initial structure to simulate the behavior of the oil droplets after the shut-off of the electric field (see Figure 8). It can be seen that oil droplets in contact with each other continue to aggregate even in the absence of the electric field. Taking System II as an example, we found that some asphaltene and SDS molecules, which had guided the movement of the oil droplets, formed a contact surface between the two oil droplets and then migrated to the surface of the merged droplet under the influence of their hydrophilic groups. At the same time, the hydrophobic components inside the interfacing oil droplets aggregated into a whole. Meanwhile, we calculated the radius of gyration (Rg) during the aggregation of the oil droplets in the five systems (Figure S2). We found that the radius of gyration of the oil droplets gradually decreased; therefore, droplets that collided would gradually aggregate into a whole. Mechanism of Aggregation of Oil Droplets It had been proposed [35] that the surface charges of oil droplets rearrange under BPEF: the charges in the adjacent areas of the two oil droplets are opposite under the action of the electric field, so the adjacent areas of the oil droplets always attract each other along the BPEF direction. Here, we verified and explained the aggregation mechanism of the oil droplets through theoretical methods. We calculated the interaction energy of the two oil droplets in all systems, with E = 0.50 V/nm, during the whole process from dispersion to aggregation; the results are shown in Figure 9. The potential energy of the interaction between the two oil droplets is divided into two parts: the cyan areas represent the change in the potential energy between the oil droplets from dispersion to collision (i.e., the electric field output duration), and the blue areas represent the change in the potential energy from collision to aggregation (i.e., the electric field shut-off duration). At the same time, we calculated the root-mean-square deviation (RMSD) of the crude oil droplets in the five systems (Figure S3); the aggregated oil droplets were essentially stable after 4.0 ns. We found that the potential energy of the electrostatic interaction between the oil droplets was almost 0 kJ/mol during the entire electric field application process. The potential energy of the van der Waals interactions between the two oil droplets from dispersion to collision was also almost 0 kJ/mol during the electric field output stage. However, the potential energy of the van der Waals interactions between the oil droplets decreased noticeably after the collision, during the aggregation process (electric field shut-off stage). This means that in the electric field demulsification process, the oppositely charged adjacent areas of the two oil droplets have no obvious effect on their attraction and aggregation; the van der Waals forces between the oil droplets are the main force in the demulsification process.
Simulation Details All MD simulations were performed with the GROMACS 2019.6 software package using the GROMOS 53a6 force field [36]. The force field parameters of the oil droplet components were generated by the Automated Topology Builder (ATB) [37,38]. The simple point charge (SPC) model was selected for the water molecules, and the parameters of the sodium ions (Na+) that neutralize the negative charge have been discussed in the literature [39]. Each system was energy-minimized using the steepest descent method before the simulation. The NVT ensemble at 300 K was run with a velocity-rescaling thermostat, and the NPT ensemble at 0.1 MPa and 300 K with Berendsen pressure coupling. The velocity-rescaling thermostat used a time constant of 0.1 ps and the Berendsen barostat a time constant of 1.0 ps; the isothermal compressibility was set to 4.5 × 10−5 bar−1. Periodic boundary conditions were applied in all three dimensions. Van der Waals interactions used the Lennard-Jones 12-6 potential with a cutoff of 1.4 nm, and Coulombic interactions used the particle-mesh Ewald (PME) summation method. The initial velocities were assigned according to the Maxwell-Boltzmann distribution. The time step was 2 fs and the trajectory was saved every 10 ps. VMD 1.9.3 was used for trajectory visualization.
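The run settings listed above map almost one-to-one onto standard GROMACS .mdp keywords. The Python sketch below writes such a file as a hedged reconstruction, not the authors' actual input: the run length, output frequency and coupling-group choice are placeholders, and the application of the bidirectional pulsed field itself is tool-specific and omitted here.

```python
# Write a GROMACS .mdp file mirroring the stated run settings.
# Option names are standard GROMACS keywords; values marked as
# placeholders are assumptions, not taken from the paper.
mdp = {
    "integrator":        "md",
    "dt":                "0.002",    # 2 fs time step
    "nsteps":            "500000",   # placeholder run length
    "nstxout-compressed": "5000",    # save every 10 ps at dt = 2 fs
    "tcoupl":            "v-rescale",  # velocity-rescaling thermostat
    "tc-grps":           "System",     # placeholder coupling group
    "tau-t":             "0.1",        # 0.1 ps coupling constant
    "ref-t":             "300",
    "pcoupl":            "berendsen",  # Berendsen barostat (NPT stage)
    "tau-p":             "1.0",
    "ref-p":             "1.0",        # 1 bar, i.e. ~0.1 MPa
    "compressibility":   "4.5e-5",
    "rvdw":              "1.4",        # 1.4 nm LJ cutoff
    "coulombtype":       "PME",
    "pbc":               "xyz",
    "gen-vel":           "yes",        # Maxwell-Boltzmann initial velocities
    "gen-temp":          "300",
}
with open("npt.mdp", "w") as fh:
    for key, value in mdp.items():
        fh.write(f"{key:22s}= {value}\n")
```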
Molecular Models of Crude Oil Crude oil is highly complex, especially its asphaltene and resin fractions; the asphaltenes play a key role in the stabilization of water-in-crude-oil emulsions and significantly affect the rheological properties of crude oil [40]. Two types of asphaltene (four molecules of each type) and six types of resin [41] (five molecules of each type) were therefore selected based on previous studies, as shown in Figure 10. In addition to the asphaltene and resin molecules, four types of alkane (32 hexane, 29 heptane, 34 octane and 40 nonane molecules), two types of cyclane (22 cyclohexane and 35 cycloheptane molecules) and two types of aromatic (13 benzene and 35 toluene molecules) were selected as light oil components, following the work of Song and Miranda [41][42][43]. Moreover, the concentration of resins and asphaltenes in the crude oil was about 38%, which matches the content of heavy oil components in crude oil [41].
Emulsified Oil Droplet First, the components of crude oil, including alkanes, cyclanes, aromatics, asphaltenes and resins, were randomly inserted into a cubic box (x = 10 nm, y = 10 nm, z = 10 nm). To eliminate overlaps, energy minimization was then performed. After that, a 30 ns NPT ensemble simulation was performed to obtain a reasonable density; the equilibrium configuration after the NPT run is shown in Figure 11a. Second, the crude oil was solvated in an 8 nm × 8 nm × 8 nm simulation box with 19,230 water molecules. Energy minimization and a 20 ns NVT ensemble MD simulation were carried out to obtain the emulsified oil droplet (Figure 11b). Third, emulsified oil droplets with different amounts of SDS adsorbed on their surface were constructed. SDS micelles were built using Packmol. The spherical oil droplets were then placed in the center of a new box (10 nm × 10 nm × 15 nm) and SDS micelles were placed close to the oil droplets (Figure 11c). Then, Na+ counter ions and solvent were added. After energy minimization and a 20 ns NVT simulation, the emulsified oil droplet systems were obtained (Figure 11d). We assumed that the oil droplets in the emulsion satisfied the following conditions: first, the centroids of the two oil droplets lay approximately along the z-axis; second, the centroids of the two drops were about 10.0 nm apart. Finally, two identical emulsified oil drop models with counter ions were placed in a 10 × 10 × 50 nm3 box with a separation distance of about 10 nm, as shown in Figure 11e. Afterward, water molecules were added to solvate the system, and energy minimization and a 10 ns NVT simulation were applied to equilibrate the emulsion system. Subsequently, BPEF was imposed on all systems to study the coalescence of the two droplets.
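The SDS-packing step described above is the kind of task Packmol handles with a short input script. The sketch below writes one such script from Python; the file names, molecule count and geometry are placeholders chosen only to echo the box sizes quoted above (Packmol works in angstroms, so 10 nm = 100 Å), and this simplified variant packs free SDS around a fixed droplet rather than building separate micelles as the authors did.

```python
# Write a hypothetical Packmol input: one fixed oil droplet with SDS
# packed around it. Structure files and counts are placeholders.
packmol_input = """tolerance 2.0
filetype pdb
output droplet_sds.pdb

structure oil_droplet.pdb
  number 1
  fixed 50. 50. 75. 0. 0. 0.
end structure

structure sds.pdb
  number 60
  inside box 0. 0. 0. 100. 100. 150.
  outside sphere 50. 50. 75. 35.
end structure
"""
with open("pack.inp", "w") as fh:
    fh.write(packmol_input)
# Run with:  packmol < pack.inp
```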
The composition of each emulsified oil droplet system is shown in Table 1. Conclusions In this paper, molecular dynamics simulations were performed to study the behavior of oil droplets in O/W emulsion, and oil droplets emulsified by different amounts of SDS were compared. Three major conclusions were drawn. First, the hydrophilicity of the oil droplets increases with increasing SDS content. When the electric field is applied, the oil droplets move in the direction opposite to the field and the molecules in the droplet redistribute: SDS and asphaltene molecules, with their negatively charged functional groups, transfer to the head of the droplet along the direction of movement. The electrostatic potential surface of the oil droplet confirmed that BPEF redistributes the molecules in the droplet and hence its surface potential as well, consistent with the hypothesis proposed in the earlier experimental work. Meanwhile, the collision times of the oil droplets differed among the simulated systems because of their different SDS mass fractions, and the collision time was shortest for the oil droplets with 6.2% SDS. The average elongation length le of the two oil droplets along the z direction showed that SDS molecules can change the elongation of the oil droplets in the electric field, and the MSD of the SDS and asphaltene molecules under the electric field showed that the mobility was strongest in System II; accordingly, the elongation length of the oil droplets in System II was the largest and this system required the least time. Second, oil droplets that have collided can self-aggregate after the electric field is shut off: the SDS and asphaltene molecules at the contact surface between the two oil droplets migrate to the surface of the merged droplet under the influence of their hydrophilic groups. Lastly, the oppositely charged adjacent areas of the two oil droplets have no obvious effect on their attraction and aggregation; the van der Waals forces between the oil droplets are the main force in the demulsification process. Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/molecules27082559/s1, Figure S1: Root-mean-square fluctuation (RMSF) of oil droplets in System I, System II and System IV; Figure S2: Radius of gyration (Rg) of oil droplets in the five systems; Figure S3: Root-mean-square deviation (RMSD) of oil droplets in the five systems.
8,316
2022-04-01T00:00:00.000
[ "Physics" ]
DNA Methylation Sustains "Inflamed" Memory of Peripheral Immune Cells Aggravating Kidney Inflammatory Response in Chronic Kidney Disease The incidence of chronic kidney disease (CKD) has rapidly increased in the past decades. A subset of CKD is characterized by a progressive loss of kidney function even with intensive supportive treatment. Irrespective of its etiology, CKD progression is generally accompanied by the development of chronic kidney inflammation, pathologically featured by low-grade but chronic activation of recruited immune cells. Cumulative evidence supports that an aberrant DNA methylation pattern in diverse peripheral immune cells, including T cells and monocytes, is closely associated with CKD development in many chronic disease settings. Changes in the DNA methylation profile can persist for a long time and affect future gene expression in circulating immune cells even after they migrate from the circulation into the involved kidney. It is of clinical interest to reveal the underlying mechanism by which altered DNA methylation regulates the intensity and duration of the inflammatory response in the recruited effector cells. We and others recently demonstrated that altered DNA methylation occurs in peripheral immune cells and profoundly contributes to CKD development in systemic chronic diseases, such as diabetes and hypertension. This review summarizes the current findings on the influence of aberrant DNA methylation on circulating immune cells and how it potentially determines the outcome of CKD. INTRODUCTION Over the past decades, the incidence of chronic kidney disease (CKD) has rapidly increased worldwide (GBD Chronic Kidney Disease Collaboration, 2020), likely due to major changes in human lifestyle and the environment. A subset of CKD is characterized by a gradual loss of kidney function over time even with intensive supportive treatment and thereby irreversibly progresses to end-stage renal disease (ESRD). Epidemiological studies have revealed that all stages of CKD are correlated with greater risks of cardiovascular morbidity, premature death, reduced quality of life and a tremendous economic burden (Cockwell and Fisher, 2020). In 2017, the number of deaths caused by CKD reached 1.2 million, making CKD the 12th leading cause of death globally (GBD 2017 DALYs and HALE Collaborators, 2018). Undoubtedly, CKD is one of the biggest threats to global health as well as one of the top challenges to limited medical resources in most countries. Because multiple factors contribute to disease progression, current therapeutic strategies to manage CKD mostly rely on the control of detectable abnormalities, such as proteinuria, hyperglycemia and hypertension. However, a proportion of CKD still progresses to ESRD even when these abnormalities are fully controlled. For example, evidence from multiple large-scale clinical trials remains insufficient to conclude definitively that intensive glycemic control reduces the long-term relative risk of diabetic kidney disease (DKD), which is generally accompanied by chronic hyperglycemia (Hemmingsen et al., 2011). A more in-depth understanding of the underlying molecular mechanisms implicated in the pathogenesis of CKD remains necessary for the development of novel therapeutic strategies.
Chronic kidney inflammation in the process of CKD is characterized by diffuse interstitial infiltration of various immunocytes, including T lymphocytes, B lymphocytes, neutrophils and monocytes. In general, the function of leukocytes trafficking to the kidney is to eliminate pathogens, remove necrotic cells and tissue debris from the original insult and finally facilitate kidney tissue repair. The infiltrated leukocytes produce abundant cytokines and growth factors to establish an inflammatory milieu; meanwhile, they also secrete anti-inflammatory and pro-regenerative cytokines to promote inflammation resolution as well as tissue repair (Peiseler and Kubes, 2019). Usually, transient activation of kidney-recruited immune cells is beneficial for tissue repair and functional recovery because they help remove the pathogenic factors of kidney injury. However, the accumulation of recruited leukocytes in the renal interstitial compartment promotes chronic inflammation and ultimately leads to renal fibrosis (Gieseck et al., 2018). Emerging evidence has identified altered trafficking of pathogenic immune cells as a crucial driver of tubulointerstitial inflammation and tissue destruction in the progression of CKD (Schnaper, 2017; Tang and Yiu, 2020). Therefore, recruited leukocytes may facilitate or undermine the kidney repair process under different conditions. An intriguing issue is which underlying mechanism determines the role of recruited immune cells in the kidney. CKD IS AN INFLAMMATORY DISEASE Chronic inflammation is generally characterized by persistent production of pro-inflammatory cytokines from both circulating and resident effector cells (Anderton et al., 2020). Emerging evidence has demonstrated that systemic chronic inflammation (SCI) is a major pathological event implicated in the development of most chronic diseases and pathological conditions, such as chronic heart disease, diabetes mellitus and CKD (Furman et al., 2017, 2019; Bennett et al., 2018). Under SCI, the low-grade but persistent activation of effector immune cells continually compromises normal tissue at the cellular level by direct contact or by paracrine release of pro-inflammatory cytokines (Kotas and Medzhitov, 2015). Of note, a gradual loss of renal function per se can initiate SCI during disease progression, which commonly coexists with other inflammatory conditions, including diabetes mellitus, hypertension and obesity. For example, DKD is the leading cause of CKD and has also been considered an inflammatory disease (Tuttle, 2005). In DKD, hyperglycemia-induced oxidative stress pathologically activates circulating immune cells, which infiltrate the involved kidney and aggravate tissue inflammation through abundant production of pro-inflammatory cytokines and chemokines (Donate-Correa et al., 2020). The accumulation of macrophages in the kidney has been correlated with a decline in renal function in DKD patients (Klessens et al., 2017). Furthermore, these infiltrated cells account for a massive release of cytokines, growth factors, reactive oxygen species (ROS) and metalloproteinases, which initiate and amplify the irreversible process of renal fibrogenesis (Matoba et al., 2019). Another common cause of CKD is hypertension, which is likewise characterized by progressive SCI (Harrison et al., 2011; Chen et al., 2019b).
In the progression of hypertension-associated kidney involvement, a predominant accumulation of different immune cells, including antigen-presenting cells and T cells, can be detected at the early stage of the kidney inflammatory response (Loperena et al., 2018; Norlander et al., 2018). In the pathogenesis, hypertension-associated stimuli initially activate dendritic cells (DCs) in the kidney, largely by promoting the exuberant formation of isoketals. The activated DCs produce abundant cytokines, including interleukin (IL)-6, IL-1β and IL-23, to recruit T cells from secondary lymphoid organs to the kidney (Kirabo et al., 2014). Meanwhile, hypertension per se can promote T cell infiltration into the kidney by increasing glomerular perfusion pressure (Evans et al., 2017). In a vicious cycle, the infiltrated T cells enhance the production of angiotensin (ANG) II and further aggravate hypertension-associated kidney involvement (De Miguel et al., 2010). Collectively, regardless of its pathogenesis, SCI plays a detrimental role in the progression of CKD by promoting renal infiltration of circulating immune cells and aggravating chronic kidney inflammation. It is of clinical significance to further understand the regulatory mechanism of immune cell recruitment in the context of CKD progression. ABERRANT DNA METHYLATION PARTICIPATES IN CKD DEVELOPMENT DNA methylation is a common type of epigenetic modification that reversibly affects gene expression without changes in the nucleotide sequence (Berger et al., 2009; Chen and Riggs, 2011). This process of adding a methyl group to cytosine is catalyzed by DNA methyltransferases (DNMTs), including DNMT1, DNMT3A and DNMT3B. Generally, DNMT3A and DNMT3B are the major de novo DNA methyltransferases, whereas DNMT1 acts as a maintenance enzyme, restoring hemi-methylated DNA to full methylation after replication (Jones and Liang, 2009; Jones, 2012). In the course of cell division, passive DNA demethylation occurs in the absence of DNMT1 activation. On the other hand, active DNA demethylation can be induced by the mammalian ten-eleven translocation (TET) family, which catalyzes the stepwise oxidation of 5-methylcytosine in DNA to 5-hydroxymethylcytosine (5hmC; Ambrosi et al., 2017). In somatic cells, functional DNA methylation mostly occurs in clusters of CpG dinucleotides (termed CpG islands), and approximately 60-70% of human gene promoters contain a CpG island (Saxonov et al., 2006; Illingworth et al., 2010). DNA methylation is generally believed to induce transcriptional downregulation, either by impairing the interaction between transcription factors and their targets or by recruiting transcriptional repressors with specific affinity for methylated DNA. At present, the known transcriptional repressors can be classified into three families: the methyl-CpG binding domain (MBD) proteins (Hendrich and Bird, 1998; Defossez and Stancheva, 2011), the UHRF proteins (Hashimoto et al., 2008) and the zinc finger proteins (Hudson and Buck-Koehntop, 2018). In brief, DNA methylation, by altering DNA accessibility at gene promoters, induces transcriptional suppression, while demethylation is associated with transcriptional activation. In recent decades, a surge in epigenome-wide association studies (EWAS) has highlighted that DNA methylation can be markedly influenced by environmental exposures, such as CKD and SCI (Ligthart et al., 2016; Heintze, 2018).
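The CpG island concept invoked above can be made concrete with the classic Gardiner-Garden and Frommer criteria (window length of at least 200 bp, GC fraction above 0.5 and an observed/expected CpG ratio above 0.6). The short Python sketch below computes the two statistics for a sequence window; the example sequence is synthetic and for illustration only.

```python
def cpg_island_stats(seq):
    """GC fraction and observed/expected CpG ratio for a DNA window.
    Gardiner-Garden criteria call a CpG island at length >= 200 bp,
    GC > 0.5 and observed/expected CpG > 0.6."""
    seq = seq.upper()
    n = len(seq)
    g, c = seq.count("G"), seq.count("C")
    cpg = seq.count("CG")                       # observed CpG dinucleotides
    gc_frac = (g + c) / n
    # expected CpG count if C and G were placed independently: c*g/n
    obs_exp = cpg * n / (c * g) if c and g else 0.0
    return gc_frac, obs_exp

# Toy promoter-like window (synthetic sequence, 231 bp)
gc, ratio = cpg_island_stats("CGCGGCGCTACGCGATCGCGGGCCGCGTACGCG" * 7)
print(f"GC fraction = {gc:.2f}, observed/expected CpG = {ratio:.2f}")
```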
The Chronic Renal Insufficiency Cohort (CRIC) study identified enhanced DNA methylation in the genes IQ motif and Sec7 domain 1 (IQSEC1), nephronophthisis 4 (NPHP4) and transcription factor 3 (TCF3) in participants with stable renal function compared with those with rapid loss of eGFR (Wing et al., 2014). Meanwhile, differential DNA methylation profiles between the two groups can also be detected in genes associated with oxidative stress and inflammation. Using whole-blood DNA, a recent EWAS on a large CKD cohort demonstrated that abnormal DNA methylation of 19 CpG sites is significantly associated with CKD development. Importantly, five of these differentially methylated sites are also associated with fibrosis in renal biopsies of CKD patients (Chu et al., 2017), and the concordant DNA methylation changes can be further identified in the kidney cortex. In animal studies, targeting DNA methylation, either globally or gene-specifically, can effectively attenuate renal inflammation and fibrosis in progressive CKD (Tampe et al., 2014, 2015; Yin et al., 2017). For example, low-dose hydralazine induces promoter demethylation of the gene RAS protein activator like 1 (RASAL1) and subsequently attenuates renal fibrosis in the context of the AKI-to-CKD transition (Tampe et al., 2017). Although hydralazine is an anti-hypertensive medication, its optimal demethylating activity appears to be independent of its blood-pressure-lowering effect. Consistently, altered DNA methylation patterns in the renal outer medulla have been shown to induce differential expression of genes regulating metabolism and inflammation in a hypertensive animal model (Liu et al., 2018), further supporting that DNA methylation is involved in chronic kidney inflammation and the subsequent loss of kidney function. A number of studies have also highlighted the importance of DNA methylation in the pathogenesis of polycystic kidney disease (PKD; Li, 2020). For instance, downregulation of PKD1 in kidney tissue by hypermethylation may contribute to cyst formation and progression (Woo et al., 2014). Given its relevance to environmental influences, DNA methylation has been intensively explored in DKD. Cumulative evidence suggests that progressive loss of renal function is closely correlated with abnormal DNA methylation in DKD subjects (Swan et al., 2015; Qiu et al., 2018; Gluck et al., 2019; Gu, 2019; Kim and Park, 2019; Park et al., 2019). A recent genome-wide analysis of DNA methylation in 500 DKD subjects revealed that DNA methylation-mediated gene expression likely determines the disease phenotypes, including glycemic control, albuminuria and kidney function decline. Importantly, further functional annotation analysis indicates that distinct DNA methylation patterns are involved in the pathogenesis of DKD-associated inflammation (Sheng et al., 2020). Collectively, DNA methylation participates in the development of CKD and of chronic kidney inflammation in particular.
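As a toy illustration of the EWAS logic invoked above, the following Python sketch runs a per-CpG two-group test on synthetic methylation beta values with Benjamini-Hochberg FDR control. It is a schematic of the statistical idea only, not any of the cited pipelines; real EWAS additionally adjust for covariates such as age, sex and cell composition.

```python
import numpy as np
from scipy.stats import ttest_ind
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
n_cpg, n_case, n_ctrl = 1000, 40, 40        # synthetic cohort sizes
beta_case = rng.beta(2, 5, size=(n_case, n_cpg))
beta_ctrl = rng.beta(2, 5, size=(n_ctrl, n_cpg))
beta_case[:, :10] += 0.15                    # spike 10 differentially methylated sites
beta_case = np.clip(beta_case, 0.0, 1.0)     # keep beta values in [0, 1]

# Per-CpG two-sample t-test on methylation beta values
pvals = ttest_ind(beta_case, beta_ctrl, axis=0).pvalue
# Benjamini-Hochberg false discovery rate control, as is typical in EWAS
reject, qvals, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print(f"{reject.sum()} CpG sites significant at FDR 0.05")
```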
DNA METHYLATION IN PERIPHERAL IMMUNE CELLS Chronic kidney inflammation occurs in the process of CKD development regardless of its pathogenesis. Pathologically, it is featured by the cumulative infiltration of diverse immune cells from the circulation into the tubulointerstitial compartment, and the recruited immune cells are major participants in the progression of chronic kidney inflammation. Upon infiltration, these cells produce abundant chemokines to establish a pro-inflammatory milieu; meanwhile, they also secrete anti-inflammatory cytokines and pro-regenerative growth factors to promote inflammation resolution as well as tissue fibrosis (Gieseck et al., 2018; Tang and Yiu, 2020). It is of clinical interest to understand the underlying mechanism that regulates the intensity and duration of the inflammatory response in these circulating immune cells. Current epigenetic research acknowledges that altered DNA methylation induces permissive or repressive expression of target genes, resulting in pathogenic activation of effector immune cells and a consequent loss of inflammatory homeostasis (Stylianou, 2019). Compelling evidence has revealed that circulating immune cells undergo dynamic epigenetic changes in their response to either acute insults or chronic pathogenic factors (Keating et al., 2016). The epigenetic "memory" of previous stimuli can persist for a long time and affect the future gene expression profile even after the cells migrate from the circulation into the involved kidney. Recent emerging findings support that an aberrant DNA methylation pattern in diverse peripheral immune cells is closely associated with CKD development in multiple disease settings (summarized in Table 1). Firstly, we recently reported that chronic hyperglycemia induces over-expression of DNMT1 and subsequent aberrant DNA methylation of multiple regulator genes of the mechanistic target of rapamycin (mTOR) in peripheral blood mononuclear cells (PBMCs). These effector cells in turn become activated and migrate into the involved kidney, abundantly secreting inflammatory cytokines and causing persistent inflammatory kidney injury and progressive fibrosis (Chen et al., 2019a). By adoptive transfer, we confirmed that circulating PBMCs with "inflammatory memory" can aggravate DKD progression in recipient animals. Of clinical importance, we demonstrated that inhibiting DNA methylation by targeting DNMT1 promotes the regulatory phenotype of circulating immune cells and improves both the diabetic inflammatory state and the long-term outcome of DKD. Aberrant DNA methylation is also observed in PBMCs from lupus nephritis (LN) patients. Hypomethylated CpG sites can be detected in the promoter regions of interferon (IFN)- and toll-like receptor (TLR)-related genes, which are highly associated with the pathogenic inflammatory condition of LN progression (Mok et al., 2016; Zhu et al., 2016). These findings strongly support that differential methylation of genes regulating the inflammatory activity of PBMCs has a causal role in the pathogenesis of LN. In addition, we have observed that the mRNA expression of DNMT3B is notably increased in PBMCs isolated from immunoglobulin A nephropathy (IgAN) patients (Xia et al., 2020). Based on these findings, we propose that SCI occurs and progresses in CKD derived from multiple primary and secondary diseases, such as hyperglycemia, hypertension, autoimmune disorders and chronic infection. These chronic stimuli substantially alter the DNA methylation profile of circulating immune cells, leading to enhanced activity of pro-inflammatory genes and a cell-type switch toward inflammatory effectors. The altered DNA methylation might act as an "epigenetic memory" and persist in circulating immune cells for a long time.
It thereby pathologically and persistently activates the inflammatory response of immune cells, which continue to participate in chronic tissue injury after their recruitment into the kidney. This may partly explain why a subset of CKD is characterized by ongoing kidney inflammation and progresses irreversibly to ESRD even when treatment targets, such as glycemic recovery and blood pressure control, have been achieved (Figure 1). Of note, leukocytes comprise a variety of circulating immune cells, and DNA methylation affects gene transcription in a cell-type-specific manner. Although emerging evidence has revealed abnormal DNA methylation in both B cells (Absher et al., 2013; Fali et al., 2014; Scharer et al., 2019; Breitbach et al., 2020; Wardowska, 2020) and neutrophils (Lande et al., 2011; Coit et al., 2015b, 2020) in SLE, data from studies with kidney involvement are so far lacking. Therefore, we next discuss the potential role of DNA methylation in CKD development with a focus on the T cell and monocyte lineages. DNA METHYLATION IN T CELL LINEAGES Upon antigen stimulation, naïve T cells differentiate into several lineages, including T helper (Th)1, Th2, Th17, and regulatory T (Treg) cells. Th1 cells control intracellular bacterial infection, while Th2 cells initiate antibody responses against extracellular pathogens. During the polarization of CD4+ T cells toward Th1, DNA hypomethylation occurs in Th1 cytokine genes (such as interferon gamma, IFNγ) whereas Th2 cytokine genes acquire DNA methylation, and vice versa during Th2 polarization. Evidence shows that an imbalance of Th1/Th2 cytokine profiles plays a crucial role in the pathogenesis of IgAN (Suzuki and Suzuki, 2018). In early-stage IgAN studied in ddY mice, strong polarization toward Th1 can be observed (Suzuki et al., 2007). A genome-wide screen for DNA methylation shows that the ratio of IL-2 to IL-5 is significantly elevated, indicating a Th1 shift of CD4+ T cells in IgAN (Sallustio et al., 2016). This Th1/Th2 polarization is associated with three specific aberrantly methylated DNA regions in peripheral CD4+ T cells from IgAN patients. Low methylation levels are observed in genes involved in T cell receptor (TCR) signaling, including tripartite motif-containing 27 (TRIM27) and dual-specificity phosphatase 3 (DUSP3). Meanwhile, a hypermethylated region can be detected in the miR-886 precursor and is associated with decreased CD4+ T cell proliferation following TCR stimulation. Therefore, aberrant DNA methylation causes reduced TCR signal strength and low activation of CD4+ T cells in the pathogenesis of IgAN. Th17 cells are characterized by the signature production of cytokines such as IL-17A and IL-17F and by expression of the key transcription factor retinoic acid receptor-related orphan receptor γt (RORγt; Cua et al., 2003). Owing to their pro-inflammatory phenotype, Th17 cells protect against infections at mucosal surfaces (Park et al., 2005) but contribute to the development of renal inflammatory diseases. On the other hand, Treg cells are characterized by expression of forkhead box P3 (Foxp3) and production of anti-inflammatory cytokines (e.g., IL-10 and transforming growth factor-β; Lu et al., 2017) and usually play a pivotal role in dampening chronic kidney inflammation (Sharma and Kinsey, 2018).
Changes in epigenetic status at the Foxp3 and IL17 gene loci are essential for the polarization of CD4+ T cells toward Treg or Th17 cells (Yang et al., 2015; Lu et al., 2016). Peripheral CD4+ T cells from SLE patients present decreased expression of regulatory factor X 1 (RFX1), which causes DNA demethylation at the IL17A locus and thereby promotes Th17 cell differentiation (Zhao et al., 2018). Conversely, abnormal epigenetic regulation of Foxp3 in Treg cells has been documented in SLE patients: hypermethylation of the Foxp3 promoter region is associated with a decreased proportion of Treg cells and increased disease activity (Zhao et al., 2012). Of clinical significance, DNA methylation of the Foxp3 promoter region can be markedly suppressed by effective treatment, which consequently upregulates Foxp3 expression and promotes CD4+CD25+ Treg cells. In addition, recent EWAS have revealed that unique DNA methylation patterns in CD4+ T cells are closely related to disease activity. In SLE, the DNA methylation state of peripheral naïve CD4+ T cells differs significantly between patients with and without renal involvement (Coit et al., 2015a). Altered DNA methylation of multiple IFN-regulated genes is closely associated with the onset of LN. Moreover, a lupus susceptibility gene, the type-I interferon master regulator IRF7, is specifically demethylated in patients with LN. Consistently, modifying DNA methylation by targeting DNMT1 expression in CD4+ T cells contributes to the development of LN-like glomerulonephritis in animals (Strickland et al., 2015). As described above, abnormal epigenetics is implicated in the pathogenesis of hypertensive renal injury through its influence on immune homeostasis. High salt intake is a major cause of hypertension and is intriguingly associated with obesity, independent of energy intake (Ma et al., 2015). An important question is whether and how environmental influences, such as an unhealthy diet, induce aberrant epigenetic changes in immune cells that subsequently participate in hypertension-associated kidney inflammation. The Dahl salt-sensitive (SS) rat is a genetic model of hypertension and renal disease accompanied by immune cell activation in response to a high-salt diet (Mattson et al., 2006). In SS rats, a high-salt diet increases global methylation in circulating and kidney T cells, and the differentially methylated regions (DMRs) are more prominent in animals with a pronounced hypertensive phenotype. Importantly, the application of decitabine, a hypomethylating agent, significantly attenuates hypertension and renal inflammatory injury in SS rats (Dasinger et al., 2020). In-depth RNA-seq analysis of kidney T cells has revealed upregulation of multiple inflammatory and oxidative genes in response to a high-salt diet, inversely correlated with their DNA methylation levels. These genes are known to play an important role in the development of salt sensitivity in the SS rat (Zheleznova et al., 2016). Collectively, these findings highlight the role of DNA methylation in linking abnormal environmental and dietary influences to the clinical manifestations of hypertension-associated kidney involvement, which may be at least partly mediated by pathologically activated T cells.
DNA METHYLATION IN MONOCYTE LINEAGES Monocytes, representing the mononuclear phagocyte system, are the largest circulating immune cells and can differentiate into macrophages (Mϕ) and myeloid-lineage dendritic cells (DCs). Multiple lines of evidence have confirmed the fundamental role of the monocyte lineage in the inflammatory progression of CKD (Heine et al., 2012; Kinsey, 2014; Bowe et al., 2017). Generally, Mϕ can be divided into two subsets, classically activated Mϕ (M1) and alternatively activated Mϕ (M2), depending on their activation paradigm and cellular functions. Classic M1 macrophages commonly produce pro-inflammatory cytokines and cytotoxic mediators that contribute to acute and chronic tissue inflammation. M2 macrophages, by contrast, are mostly implicated in inflammation resolution, tissue remodeling, and fibrogenesis through the secretion of various anti-inflammatory cytokines, growth factors, and pro-angiogenic cytokines (Tian and Chen, 2015). In the context of DKD, Mϕ constitute a major part of the infiltrating leukocytes, and their accumulation is associated with the progression of diabetic status and renal pathological changes (Chow et al., 2004; Tesch, 2010). Importantly, the M1/M2 ratio is positively associated with the progression of chronic inflammation into pathogenic fibrosis during CKD development (Tang et al., 2019; Zhang et al., 2019). Recent studies have revealed an essential role of epigenetic regulation in the M1/M2 phenotype switch. For example, DNMT3B plays an important role in regulating macrophage polarization and is expressed at lower levels in M2 than in M1 (Yang et al., 2014). Deletion or inhibition of DNMT1, whether pharmacological or genetic, contributes to M2 alternative activation in obesity (Wang et al., 2016), a typical form of SCI. Under the pathological conditions of hyperlipidemia and type 2 diabetes mellitus, DNA methylation alterations steer the Mϕ phenotype toward pro-inflammatory M1 rather than the tissue-repairing M2 phenotype by differentially methylating M1 and M2 gene promoters (Babu et al., 2015). Besides differentiating into Mϕ, monocytes can be classified into various subsets with diverse inflammatory phenotypes based on their cell surface marker expression (Zawada et al., 2012), and these subsets can likewise be perturbed in CKD. Accumulation of uremic toxins during CKD progression induces aberrant DNA methylation that affects transcription regulators important for monocyte differentiation (Zawada et al., 2016). Similar to other chronic diseases, CKD can promote a pro-inflammatory monocyte phenotype via DNA hypomethylation of CD40, which activates these cells and contributes to inflammatory involvement and disease progression. DCs can be broadly divided into two groups, myeloid dendritic cells (mDCs) and plasmacytoid dendritic cells (pDCs; Kitching and Ooi, 2018). Although the majority of DCs within the kidney are mDCs, activated pDCs can migrate into and contribute to tissue inflammation in nephritic kidneys (Fiore et al., 2008; Tucci et al., 2008). Myeloid DCs (BDCA1+ or BDCA3+ DCs) are also increased in the renal tubulointerstitium of patients with LN (Fiore et al., 2008). DNA methylome analysis of peripheral DCs reveals that global DNA hypermethylation in LN patients is associated with severe kidney involvement (Wardowska et al., 2019).
Taken together, current evidence supports the view that aberrant DNA methylation induces an inflammatory switch of the monocyte lineage, which contributes to the development of chronic kidney inflammation in multiple chronic disease settings, including obesity, hypertension, diabetes, lupus, and CKD. SUMMARY AND PERSPECTIVES In summary, a variety of pathological conditions induce an aberrant DNA methylation profile in circulating immune cells in a cell-type-specific manner, leading to a phenotype switch toward the inflammatory side (Figure 2). FIGURE 2 | The relevant DNA methylation profiles in immune cells from CKD patients, summarized by chronic pathogenic condition, including LN, IgAN, hypertensive kidney injury, DKD, and uremia. Demethylation or methylation of certain genes regulates immune cell phenotype shift/differentiation or pro-/anti-inflammatory signaling, thereby contributing to uncontrolled kidney inflammation and CKD progression. Mechanisms boxed with solid lines are documented in CKD of different etiologies, whereas those boxed with dashed lines are speculated to relate to the development of kidney diseases on the basis of circumstantial evidence. CKD, chronic kidney disease; LN, lupus nephritis; IgAN, IgA nephropathy; DKD, diabetic kidney disease; CVD, cardiovascular disease; Treg, regulatory T; Th, T helper; NE, neutrophil; DC, dendritic cell; TCR, T cell receptor; M1, classically activated macrophage; M2, alternatively activated macrophage; IFN, interferon; Foxp3, forkhead box P3; DNMT, DNA methyltransferase; MBD, methyl-CpG binding domain; RFX, regulatory factor X; HRES-1, human T cell lymphotropic virus-related endogenous sequence-1; IRF, interferon regulatory factor; GALNT18, polypeptide N-acetylgalactosaminyltransferase 18; TRIM27, tripartite motif-containing 27; DUSP3, dual-specificity phosphatase 3; VTRNA2-1, vault RNA 2-1. These "inflamed" immune cells sustain enhanced inflammatory activity upon recruitment into diseased kidneys and consequently participate in chronic kidney inflammation and CKD progression. DNA methylation-targeted treatment, by either inhibiting methylation (e.g., 5-azacytidine) or activating demethylation (e.g., hydralazine), has been explored to ameliorate kidney injury in several preclinical studies (Table 2), though some of these interventions have nephrotoxic potential in the clinical setting. A series of novel therapeutic approaches, such as modified oligonucleotide inhibitors and small RNA molecules targeting DNMTs, have yet to be tested in the setting of kidney disease (Xu et al., 2016). Meanwhile, there is a lack of intervention strategies specifically targeting immune cells. Given the complex roles of DNA methylation in cell biology, clinicians should comprehensively assess the therapeutic value, as well as the potential risk, of targeting DNA methylation in immune cells. An in-depth understanding of DNMT functions in different scenarios might help to develop effective strategies for restoring immune homeostasis, with consideration of the timing, the signaling intensity, and the disease setting. In future mechanistic research, it remains necessary to clarify the causal relationship between DNA methylation and CKD development, since it is technically difficult to separate "driver" events from "passenger" events in the setting of SCI.
A combined application of current cutting-edge technologies, such as single-cell epigenomic methods like single-cell ATAC-seq (Mezger et al., 2018) and single-cell RNA-seq (Kolodziejczyk et al., 2015), may provide a solution to this problem. AUTHOR CONTRIBUTIONS GC conceived the review. X-JC and HZ collected and interpreted the literature and wrote the manuscript. FY and YL created and revised the figures and tables. GC oversaw the work and revised the manuscript. All authors contributed to the article and approved the submitted version. FUNDING This work was supported by grants from the National Natural Science Foundation of China to Dr. Guochun Chen (81770691, 81300566).
6,283.2
2021-03-02T00:00:00.000
[ "Medicine", "Biology" ]
Astrocytes Protect Human Brain Microvascular Endothelial Cells from Hypoxia Injury by Regulating VEGF Expression Hypoxic-ischemic stroke has been associated with changes in neurovascular behavior, mediated in part by induction of the vascular endothelial growth factor (VEGF). The objective of this study was to investigate the effects of human astrocytes on the proliferation, apoptosis, and function of human brain microvascular endothelial cells (hBMEC) in vitro. hBMEC and human normal astrocytes (HA-1800) were used to establish in vitro cocultured cell models. The coculture model was used to simulate hypoxic-ischemic stroke, and it was found that astrocytes could promote hBMEC proliferation, inhibit apoptosis, reduce cell damage, and enhance antioxidant capacity by activating the VEGF signaling pathway. When VEGF was knocked down in astrocytes, the protective effect of astrocytes on hBMEC was partially lost. In conclusion, our study confirms the protective effect of astrocytes on hBMEC and lays a foundation for the study of hypoxic-ischemic stroke. Introduction Hypoxic-ischemic stroke refers to cerebral ischemia and hypoxia injury caused by vascular obstruction, which leads to focal or whole-brain dysfunction. It is characterized by high morbidity, high disability, high mortality, and high recurrence rates, and it seriously endangers human health [1]. Hypoxic-ischemic stroke occurs when blood flow fails to deliver oxygen and nutrients [2]. Because the body undergoes angiogenesis to restore blood flow when perfusion is lacking, angiogenesis is essential for repair after hypoxic-ischemic stroke. Despite significant improvements in medicine and intravascular recanalization, treatment options for hypoxic-ischemic stroke remain limited [3,4]. Therefore, the promotion of angiogenesis is considered an effective therapeutic target for hypoxic-ischemic stroke [5]. A deeper understanding of the mechanisms of angiogenesis after hypoxic-ischemic stroke will help to facilitate the arrival of such therapies. Coculture mixes two kinds of cells so that the morphology and function of one cell type can be expressed stably and maintained for a long time. It has been found that cotransplantation of neural stem cells and olfactory nerve sheath cells attenuates the apoptosis of rat neurons, promotes the survival of host neurons, and promotes neuronal recovery in traumatic brain injury through anti-inflammatory mechanisms [6]. Coculture and transplantation of umbilical cord blood pluripotent stem cells and lymphocytes improve the symptoms of neurological deficits, reduce the volume of cerebral infarction, and alleviate the inflammatory response in rats with ischemic brain injury [7]. Coculture of endothelial progenitor cells and neural progenitor cells increased VEGF expression and activated the PI3K/Akt pathway, synergistically protecting brain endothelial cells from hypoxia/reoxygenation-induced injury [8]. The BMEC-astrocyte coculture model is the most widely used in vitro BBB model. Astrocytes have been found to support neuronal repair and to participate in establishing and maintaining the blood-brain barrier properties of BMECs. Coculture of hCMEC/D3 with astrocytes reduces paracellular permeability, enhancing the ability of blood-brain barrier models to screen for neurotoxicity [9]. However, the effect of astrocytes on the proliferation and apoptosis of hBMEC and its mechanism remain unclear.
Angiogenesis refers to the formation of new blood vessels by differentiation of vascular endothelial cells of existing capillaries and post-capillary venules [10]. Angiogenesis after brain injury can promote the recovery of neurons and brain function, so the study of angiogenesis is of great significance in brain injury. During this process, astrocytes are involved in regulating endothelial cell growth, regulating the tight junctions between endothelial cells, and mediating the chemotaxis of phagocytes [11]. After brain injury, astrocytes play a double-edged role. On the one hand, excessive activation produces large numbers of inflammatory mediators, leading to cell injury. On the other hand, they secrete various neurotrophic factors that activate the proliferation of endogenous neural stem cells and directly affect the repair of injured cells and nerve regeneration [12,13]. In addition, astrocytes themselves secrete factors that promote nerve generation in vitro, such as epidermal growth factor, basic fibroblast growth factor, brain-derived nerve growth factor, insulin-like growth factor, and vascular endothelial growth factor [14,15]. It has been found that hypoxia induces VEGF mRNA and protein in cerebral astroglial cultures [16,17]. However, the mechanism of interaction between cerebral microvascular endothelial cells and astrocytes under hypoxia remains unclear. In this study, we established an in vitro coculture model of human astrocytes and human brain microvascular endothelial cells using the transwell technique to observe the effects of astrocytes on the proliferation, apoptosis, and antioxidant capacity of hypoxia-treated brain microvascular endothelial cells. These results lay a foundation for further study of the protective mechanism of astrocytes toward brain microvascular endothelial cells and suggest a potential treatment for hypoxic-ischemic stroke. Cell Culture. hBMEC and HA-1800 cells (FuHeng Cell Bank, Shanghai, China) were cultured in DMEM medium containing 10% FBS and 1% penicillin-streptomycin, and the medium was changed every 2 days. Normally, cells were maintained in an incubator with 95% air and 5% CO2 at 37°C. For hypoxia treatment, hBMEC were incubated in a hypoxic incubator with 94% N2, 5% CO2, and 1% O2 at 37°C. Coculture of hBMEC and HA-1800. hBMEC were digested with trypsin to prepare a single-cell suspension at a density of 2 × 10⁵/L and inoculated into a 24-well plate. A noncontact cell coculture system was established using transwell inserts. HA-1800 were trypsinized into a single-cell suspension at a density of 2 × 10⁵/L. The suspension was inoculated on the underside of transwell inserts coated with collagen type I, and the inserts were then placed into the wells inoculated with hBMEC. For hBMEC cultured alone, no insert was placed in the well. Cell Viability by CCK-8. Cell viability was determined by CCK-8 assay (Sigma-Aldrich, St. Louis, MO, USA). In short, cells from each group were digested and collected, and hBMEC were plated in 96-well plates and incubated for 2 hours at 37°C in 100 µl DMEM containing 10 µl CCK-8 solution. Absorbance at 570 nm was measured on a microplate reader (Bio-Rad, Hercules, CA, USA). All experiments were performed three times. TUNEL. The coculture model of hBMEC and HA-1800 was established by the transwell technique.
Under normoxic (5% CO2, 95% air) and hypoxic (1% O2, 5% CO2, 94% N2) conditions, the cells in the coculture group and the VEGF-knockdown group were fixed after the specified treatment. After fixation with 4% paraformaldehyde and permeabilization with 0.1% Triton X-100, hBMEC were incubated with the TUNEL reaction mixture for 1 hour at 37°C in the dark and stained with DAPI for 15 minutes. Confocal laser scanning microscopy (FV300, Olympus, Japan) was used to detect cell fluorescence. Knockdown of VEGF with siRNA. The siRNA specifically targeting VEGF was designed and constructed by Geneseed (Guangzhou, China). The sequences of the siRNAs used were as follows: HA-1800 were transfected with the indicated vectors using Lipofectamine 2000 (Invitrogen) in accordance with the manufacturer's instructions and then used for further experiments. Western blot analysis confirmed the specific silencing of VEGF expression. Superoxide Dismutase (SOD) Assay. Cells from the different treatment groups were digested, and the cell suspension was lysed with RIPA on ice for 15 min and centrifuged at 12,000 rpm and 4°C for 10 min. The supernatant was collected, and SOD content was determined using an SOD kit (Nanjing Jiancheng Bioengineering Institute). Western Blot. After the indicated treatment, hBMEC were collected and lysed in RIPA lysis and extraction buffer (Thermo Fisher Scientific, Waltham, MA, USA). Protein concentrations were evaluated by the BCA method (Micro BCA Protein Assay Kit, Thermo Fisher Scientific), and 50 μg of each sample was separated by SDS-PAGE on a 12% gel. The proteins were then transferred to PVDF membranes (Millipore, MA, USA). After blocking with nonfat milk, the membranes were incubated with a primary antibody overnight at 4°C, washed, and then incubated with a secondary antibody for 1 h at room temperature. The antibodies used were anti-VEGF (ab32152), ERK (ab17942), p-ERK (ab50011), Akt (ab8805), p-Akt (ab38449), and goat anti-rabbit secondary antibody (ab150077). The results were quantified, and the images were processed, using ImageJ software. GAPDH was used as an internal loading control. Statistical Analysis. Statistical analysis was conducted using SPSS 16.0 software (SPSS Inc., Chicago, IL, USA). Measurement data are expressed as mean ± SD and were subjected to one-way analysis of variance (ANOVA). When significant interactions were detected in any ANOVA paradigm, t-tests were used to demonstrate effects between individual groups. Values of P < 0.05 were considered statistically significant. Establishment of the Coculture Model of Astrocytes (HA-1800) and Human Brain Microvascular Endothelial Cells (hBMEC). To investigate the effect of astrocytes on hBMEC, the transwell technique was used to establish an indirect coculture model of HA-1800 and hBMEC in vitro. Wells inoculated with HA-1800 were sampled at 0 h, 6 h, 12 h, and 24 h under normoxia to examine the effect of astrocyte coculture duration on hBMEC. First, CCK-8 assays showed that HA-1800 significantly increased hBMEC proliferative activity over time (Figure 1(a)). Next, superoxide dismutase (SOD) levels, an antioxidant index, were measured in cell lysates, and the results showed that SOD levels increased over time (Figure 1(b)). At the same time, TUNEL showed that the addition of HA-1800 significantly inhibited apoptosis (Figures 1(c) and 1(d)). Coculture of HA-1800 and hBMEC Promoted Functional Repair of Brain Microvascular Endothelial Cells under Hypoxia.
Brain microvascular endothelial cells (BMEC) form the structural and cellular basis of the blood-brain barrier (BBB). The tight junctions between endothelial cells are the fundamental guarantee of the BBB's characteristic structure and the maintenance of barrier function. Endothelial cells are often the direct target of pathological damage factors such as hypoxia, and hypoxia brings a series of changes to the internal environment of cell growth [18-21]. To investigate the effect of astrocytes on hBMEC under hypoxic conditions, we cocultured HA-1800 and hBMEC under normoxic and hypoxic conditions for 24 h. The proliferative activity and SOD levels of hBMEC were significantly lower under hypoxia than under normoxia. However, coculture with astrocytes under both normoxia and hypoxia significantly enhanced the proliferative activity and SOD levels of hBMEC (Figures 1(a) and 1(b)). Next, TUNEL showed that hypoxia promoted hBMEC apoptosis, while coculture reversed hypoxia-induced apoptosis (Figures 2(c) and 2(d)). In conclusion, we found that HA-1800 can significantly enhance hBMEC activity under normoxia, and coculture of HA-1800 and hBMEC can repair the functional damage caused by hypoxia. HA-1800 Affects hBMEC Function through the VEGF Signaling Pathway. The VEGF signaling pathway plays an important role in angiogenesis [22]. We speculated that coculture may promote hypoxia-induced angiogenesis through the VEGF pathway. In addition, the ERK and Akt pathways are key intracellular signal transduction pathways for angiogenesis downstream of VEGF signaling [23]. Therefore, we also studied the effects of coculture on the regulation of the ERK and Akt pathways under hypoxia. Western blot was used to detect the protein levels of the related pathways; the expression of VEGF and the phosphorylation of ERK1/2 and Akt increased in both the coculture group and after 24 h of hypoxia treatment. This induction was more pronounced in the hypoxia-coculture group than with hypoxia or coculture alone (Figures 3(a) and 3(d)). Overall, our data indicate that the VEGF signaling pathway can be activated by both coculture and hypoxia, with the strongest induction when HA-1800 and hBMEC are cocultured under hypoxia. Knockdown of VEGF in HA-1800 Abolishes the Protective Effect of Coculture on the Functional Integrity of hBMEC under Hypoxia. To establish a causal relationship between astrocyte-derived VEGF in HA-1800 and hypoxia-induced hBMEC dysfunction, we used siRNA to knock down VEGF in HA-1800. Western blot analysis confirmed specific silencing, showing a more than 80% reduction in VEGF protein levels in HA-1800 transfected with VEGF siRNA (Figure 4(a)). Most importantly, VEGF knockdown significantly reduced coculture-induced cell proliferation, inhibition of apoptosis, and antioxidant capacity under hypoxia (Figures 4(b)-4(d)). These results suggest that HA-1800 protects the functional integrity of hBMEC through VEGF under hypoxia. Discussion Acute blood-brain barrier (BBB) disruption occurs in the first few hours of hypoxic-ischemic stroke and has received increasing attention. The BBB is composed of endothelial cells lining the cerebral microvessels; the pericytes, basal membrane, and foot processes of astrocytes outside the endothelial cells also participate in BBB formation [24,25].
The BBB has barrier structures that limit the passage of substances, regulating and maintaining the stability of the central nervous system microenvironment [26]. At present, coculture of primary animal endothelial cells and glial cells is the most common method for constructing blood-brain barrier models of ischemic stroke [27]. It has been found that expression of the γ-GT enzyme decreases significantly during the passaging of primary cells, suggesting that cells may gradually lose some BBB characteristics during primary culture [28]. The use of passaged cell lines is cost-effective and fast and allows extensive experiments through passaging and proliferation of cells without the need for laborious primary cell isolation. Therefore, in this study, the representative human cerebrovascular endothelial cell line hBMEC and normal human astrocytes HA-1800 were cocultured to establish an in vitro BBB, and the effect of hypoxia on the BBB was evaluated. After brain injury, the blood-brain barrier is hypoxic and ischemic. In this state, brain microvascular endothelial cells are influenced by astrocytes and various angiogenic factors during hypoxia. At the same time, angiogenesis-related growth factors and cytokines secreted by glial cells are regulated by hypoxia-inducible factors, and activation of hypoxia-inducible factors induces the tolerance of glial cells to ischemic hypoxia [29]. Vascular endothelial growth factor (VEGF) is one of the most important pro-angiogenic factors in the microenvironment of choroidal angiogenesis. A large number of studies have confirmed that VEGF plays a key role in pathological neovascularization [30]. As a specific mitogen of endothelial cells, VEGF can induce the division and proliferation of vascular endothelial cells and promote their migration, which favors the formation of large numbers of blood vessels by the budding of new vessels. Brain microvascular endothelial cells, pericytes, and astrocytes can all produce VEGF. Under hypoxia, analyzing the intracellular environment of cerebral microvascular angiogenesis at the cellular level shows that vascular endothelial cells are regulated by a variety of surrounding cells [31], one of which is the astrocyte. The hypoxia-regulated nature of VEGF makes it an important neovascularization factor that can specifically bind to vascular endothelial cells and promote their growth, thus participating in hypoxia-induced choroidal neovascularization. It has been found that both hypoxia and astrocytes can promote VEGF expression [16,17]. We detected the protein expression level of VEGF by Western blot and obtained consistent results. The expression of VEGF in hBMEC increased under hypoxia or coculture, and the increase was more pronounced when hypoxia and HA-1800 coculture were combined. Meanwhile, when VEGF was knocked down in HA-1800, HA-1800 lost part of its protective function against hypoxia injury. These results suggest that HA-1800 affects the proliferation, apoptosis, and antioxidant capacity of hBMEC by regulating the expression of VEGF. In conclusion, our results confirm the important role of astrocytes in modulating ischemic barrier damage. We found that under normal conditions, coculture of HA-1800 and hBMEC significantly increased cell proliferative activity, inhibited apoptosis, and increased SOD levels.
Under hypoxia, the proliferative activity of hBMEC decreased, apoptotic cells increased, and intracellular SOD levels decreased, while coculture could partially reverse the cell damage caused by hypoxia. This function may be related to ERK and Akt phosphorylation and VEGF protein expression. Knocking down VEGF in astrocytes significantly reduced their ability to resist hypoxia injury. These results suggest that astrocytes can protect hBMEC from hypoxia injury by activating the VEGF signaling pathway. Data Availability The data used to support the findings of this study are available from the corresponding author upon request. Conflicts of Interest The authors declare that they have no conflicts of interest.
3,743.6
2022-03-18T00:00:00.000
[ "Medicine", "Biology" ]
Stepping and Stretching The ability of kinesin to travel long distances on its microtubule track without dissociating has led to a variety of models to explain how this remarkable degree of processivity is maintained. All of these require that the two motor domains remain enzymatically "out of phase," a behavior that would ensure that, at any given time, one motor is strongly attached to the microtubule. The maintenance of this coordination over many mechanochemical cycles has never been explained, because key steps in the cycle could not be directly observed. We have addressed this issue by applying several novel spectroscopic approaches to monitor motor dissociation, phosphate release, and nucleotide binding during processive movement by a dimeric kinesin construct. Our data argue that the major effect of the internal strain generated when both motor domains of kinesin bind the microtubule is to block ATP from binding to the leading motor. This effect guarantees that the two motor domains remain out of phase for many mechanochemical cycles and provides an efficient and adaptable mechanism for the maintenance of processive movement. Members of the kinesin family of molecular motors are capable of taking over 100 steps on their microtubule track without dissociating, a feature that would be necessary for a transport motor that operates in isolation (1-6). A variety of kinetic, structural, and mechanical studies have revealed that this processive behavior requires that the two motors of kinesin remain in different structural and enzymatic states during a processive run (7-12). This would ensure that, at any given time, at least one of the two heads remains strongly attached to its track, preventing the motor from prematurely detaching. Such coordination requires a way for the two motor domains to communicate their structural states to each other while walking processively. Several lines of evidence suggest that this allosteric communication is mediated through the internal load generated when both heads attach to the microtubule (1, 13-16). As illustrated below in Fig. 1, kinesin initiates its mechanochemical cycle with its attached head (green) nucleotide-free and its tethered head (magenta) containing ADP in the active site. ATP binding to the attached head reorients its neck linker (blue), which swings the tethered head forward to the next tubulin-docking site. ADP is then released from the new, weakly bound leading head (magenta) to produce an intermediate in which both heads are strongly bound to the microtubule. This situation would generate rearward strain on the neck linker of the leading head, depicted as a left-pointing arrow, and forward strain on the corresponding structure of the trailing head, depicted as a right-pointing arrow. It has been proposed that this strain generates processivity by accelerating release of the trailing head (13, 17). In this mechanism, release of the ADP-containing trailing head would be very slow in the absence of forward strain and fast in its presence. In such a system, the more that forward strain accelerates k_dMT, the greater the degree of processivity. However, there is an internal inconsistency in this scheme. If kinesin's processivity depended solely on this mechanism, the motor would dissociate from the microtubule after only a few steps. The reasoning behind this is illustrated in Fig. 1.
We have recently shown (18) that the effective rate of trailing head dissociation (k_dMT ~50 s⁻¹) is appreciably slower than that for ADP release (k_dADP = 170 s⁻¹; this work and Refs. 8, 19-21) and ATP hydrolysis (k_h, ~100 s⁻¹; Refs. 8, 19, 21, 22). This would lead to accumulation of a kinesin intermediate with both heads attached to the microtubule, with the leading head nucleotide-free, and with ADP-Pi in the active site of the trailing head. Given millimolar intracellular ATP concentrations and an apparent second-order rate constant of >1 µM⁻¹ s⁻¹ (2, 8, 15, 19, 20, 23), ATP would then rapidly bind to the new leading head (>1000 s⁻¹) and be hydrolyzed. This would generate an intermediate with both heads weakly bound, and dissociation would rapidly follow, as indicated in Fig. 1 by the red X. The fact that kinesin is highly processive (1-6) argues that a mechanism prevents it from proceeding down this pathway. An alternative possibility is that rearward strain on the leading head slows ATP binding and subsequent hydrolysis, ensuring that the leading head remains strongly attached until the trailing head has dissociated. ATP would then rapidly bind to the leading head and cause the trailing head to swing forward to the next tubulin-docking site. Processive movement would be favored, because the rate of this forward stepping movement, at ~800 s⁻¹, is nearly sixteen times faster than the rate of trailing head dissociation (18). Furthermore, blocking ATP binding to the forward head while it experiences rearward strain would prevent the motor from proceeding down the pathway marked by the red X in Fig. 1. Determining whether processivity depends on the first mechanism, the second, or to some degree on both requires the ability to unambiguously measure the rates of key steps in the mechanochemical cycle and the effects of strain on these rates. These include the rates of trailing head dissociation, of ADP release from the tethered head after it attaches to the microtubule, and of ATP binding to the new leading head. In this study, we apply the spectroscopic approaches developed in our prior work to make these measurements (15, 18, 23). This has allowed us to generate a model that provides an efficient yet adaptable mechanism for ensuring processivity in this motor. EXPERIMENTAL METHODS Generation of K413BIO-K413W340F, a cysteine-light recombinant kinesin construct with tryptophan 340 replaced by phenylalanine, consists of the first 413 amino-terminal residues of human kinesin and was generated as described in our previous study (18). K413BIO is a construct in which K413W340F is fused at the carboxyl terminus to a biotinyl transferase recognition peptide, followed in turn by a hexahistidine tag for affinity purification. The peptide sequence, with its attached biotin, was incorporated into the K413 sequence to allow attachment of the motor to streptavidin-coated beads used in motility assays in vitro. The plasmid containing the K413W340F mutant kinesin construct served as a PCR template to amplify the kinesin insert with the following primers: upstream, 5′-AGATATACATATGGCGGACCTGGCC-3′; downstream, 5′-AAGTTGCATGTGCTCGAGAAAATTTCCTATAACTCCAAT-3′. The underlined sequences are the NdeI and XhoI restriction sites, respectively. The fragment was cloned into pCR2.1-TOPO (Invitrogen, Carlsbad, CA) and excised with NdeI and XhoI.
The fragment was purified and then ligated into a pET-21 vector with an in-frame biotinyl transferase recognition peptide sequence at the XhoI (carboxyl-terminal) site. The resultant construct was verified by sequence analysis. In Vitro Motility Studies of K413BIO-Single-molecule kinesin bead motility assays were performed essentially as described previously (24). 500-nm diameter carboxy-modified latex beads (Bangs Laboratories, Fishers, IN) were covalently biotinylated with biotin-X-cadaverine (Molecular Probes, Eugene, OR), coated with avidin-DN (Vector Laboratories, Burlingame, CA), and purified by repeated pelleting and resuspension followed by sonication to eliminate clumping. Diluted kinesin constructs were mixed with the beads and incubated at 4°C for 4 h in assay buffer containing 80 mM PIPES, pH 6.9, 50 mM potassium acetate, 4 mM MgCl2, 2 mM dithiothreitol, 1 mM EGTA, 7 µM Taxol, various ATP concentrations, and 2 mg/ml bovine serum albumin as a blocking protein. The beads were diluted to 80 fM, and final kinesin dilutions were chosen such that, on average, fewer than half of the beads moved (typically 1:500,000 to 1:1,000,000 from an ~100 µM stock). An oxygen-scavenging system (37) was added to the kinesin:bead mixture just prior to measurement. Flow cells with a volume of ~20 µl were constructed by using two strips of doubly adhesive tape to form a channel between a microscope slide and a No. 1½ coverslip (cleaned by sonication in 5 M ethanolic KOH and coated with polylysine). Ingredients were introduced in the following order: taxol-stabilized MTs (polymerized from purified bovine brain tubulin, Cytoskeleton, Inc., Denver, CO) followed by a 10-min incubation, an assay buffer wash followed by a 10-min incubation, and then kinesin:bead mixtures. To ensure that measurements reflected single-molecule properties, data were only collected from assays in which fewer than half of the tested beads moved. All chemicals used were from Sigma, except bovine serum albumin and the ingredients used in the oxygen-scavenging system (Calbiochem, San Diego, CA). Kinesin velocities and run lengths were measured by centroid video tracking (25) with sub-pixel resolution using a commercial video tracking software package (Isee Imaging Systems, Raleigh, NC). Fluorescence Methodologies-Labeling of K413W340F and K413W340FBIO with 5-(((2-iodoacetyl)amino)ethyl)aminonaphthalene-1-sulfonic acid (AEDANS) or tetramethylrhodamine maleimide (TMR) was carried out as described previously (15, 18, 23). Labeling of phosphate-binding protein with MDCC was carried out as described previously (26). Transient kinetic measurements were made in an Applied Photophysics SX.18 MV stopped-flow spectrometer with an instrument dead time of 1.2 ms, as described previously (15, 18, 23). Unless otherwise described, complexes of kinesin and microtubules were formed with a 5- to 10-fold molar excess of microtubules over active sites. ADP was added to 3 µM to ensure that the tethered kinesin head of a kinesin-microtubule complex contained ADP in the active site. Experiments with phosphate-binding protein were carried out in a phosphate mop (0.2 unit/ml purine nucleoside phosphorylase plus 1 mM 7-methylguanosine) with a 10-fold molar excess of MDCC-labeled phosphate-binding protein over kinesin active sites in both syringes. In Vitro Motility of K413BIO The experiments in this study utilize a cysteine-light dimeric kinesin construct (K413), and it was necessary to establish that this kinesin construct is processive.
In our previous study, we provided enzymatic evidence that K413 is capable of undergoing multiple enzymatic cycles per diffusional encounter with the microtubule, a prerequisite for processivity (18). Furthermore, attaching the biotin transferase recognition sequence to the carboxyl terminus of K413 had no appreciable effect on k_cat. 1 The abbreviations used are: PIPES, 1,4-piperazinediethanesulfonic acid; AMPPNP, 5′-adenylyl-β,γ-imidodiphosphate; 2′dmD, 2′-deoxy-mant-ADP; 2′dmT, 2′-deoxy-mant-ATP; FRET, fluorescence resonance energy transfer; MDCC, N-… FIG. 1. Kinesin's first two steps. The two motor domains of kinesin are distinguished by magenta and green shading and are depicted walking on a track of tubulin dimers. The neck linkers are depicted by the blue lines that connect the motor domains to the black coiled-coil dimerization segment. The mechanochemical cycle is initiated by ATP binding to the green, attached motor domain and is characterized by the equilibrium constant K_ATP. This leads to rapid (~800 s⁻¹) docking of this motor domain's neck linker (depicted as a straightening of the blue neck linker), which throws the tethered magenta motor forward toward the next tubulin docking site. Release of ADP from the new leading (magenta) motor, occurring with forward rate constant k_dADP (170 ± 17 s⁻¹), is followed by ATP hydrolysis (k_h, ~100 s⁻¹), which leads to binding of both heads to the microtubule. This places the two neck linkers under mechanical strain (depicted as the rightward- and leftward-pointing blue arrows). In the absence of any mechanism to prevent it, ATP could then bind to the empty, leading motor domain (magenta) and then become rapidly hydrolyzed. This would produce motor dissociation from the microtubule after only two turnovers. That this does not happen (symbolized by the red X) implies that a mechanism must exist to prevent ATP from binding to the leading head while it is experiencing rearward strain (see text for details). Instead, hydrolysis is followed by dissociation of the rear head, characterized by rate constant k_dMT, which occurs concomitantly with phosphate release. However, enzymatic measures of processivity are indirect. To directly assess processive behavior, we examined the in vitro motility properties of K413BIO. As with wild-type kinesin, average in vitro velocities showed a Michaelis-Menten dependence on ATP concentration, defining values of V_max and K_m,ATP of 703 ± 73 nm/s and 23 ± 9 µM, respectively (Fig. 2A). These compare with values of 650-800 nm/s and ~80 µM for wild-type squid kinesin measured in our laboratory (6, 27). Fig. 2B illustrates the distribution of run lengths for K413BIO. Run length did not vary beyond experimental error over the range of ATP concentrations examined, and fitting data from all ATP concentrations to an exponential decay revealed a mean run length of 276 ± 22 nm. Thus, although the in vitro velocities of K413BIO are similar to wild-type, its mean run length is reduced by a factor of 2-3. Having established that K413BIO was processive, we decided to use it to test the two models of how kinesin uses strain to walk processively: that forward strain accelerates trailing head release, or that rearward strain inhibits ATP binding to the leading head.
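The parameter estimation behind Fig. 2 can be sketched programmatically. The following Python example is our own illustration, not the authors' analysis code: it fits a Michaelis-Menten curve to velocity data and estimates a mean run length from an exponentially distributed set of runs. The numerical arrays are invented placeholders, not the study's measurements.

```python
# Hypothetical sketch: extracting V_max and K_m from bead velocities and a
# mean run length from an exponential run-length distribution, in the spirit
# of Fig. 2. All data below are invented placeholders.
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(atp, v_max, k_m):
    """Hyperbolic dependence of velocity on [ATP]: v = V_max*[ATP]/(K_m + [ATP])."""
    return v_max * atp / (k_m + atp)

# Placeholder measurements: [ATP] in uM, mean bead velocity in nm/s
atp = np.array([5, 10, 20, 50, 100, 500, 1000], dtype=float)
vel = np.array([125, 215, 330, 480, 570, 670, 690], dtype=float)

(v_max, k_m), pcov = curve_fit(michaelis_menten, atp, vel, p0=(700, 25))
v_err, k_err = np.sqrt(np.diag(pcov))
print(f"V_max = {v_max:.0f} +/- {v_err:.0f} nm/s, K_m = {k_m:.1f} +/- {k_err:.1f} uM")

# Mean run length: for an exponential distribution, the maximum-likelihood
# estimate of the mean is simply the sample mean of the observed runs.
runs_nm = np.array([120, 310, 95, 560, 240, 180, 420, 75, 300, 260], dtype=float)
mean_run = runs_nm.mean()
sem_run = runs_nm.std(ddof=1) / np.sqrt(runs_nm.size)
print(f"mean run length = {mean_run:.0f} +/- {sem_run:.0f} nm")
```

With data of the quality shown in Fig. 2, this procedure returns parameters directly comparable to the quoted V_max, K_m,ATP, and mean run length.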
Evaluation of the Effect of Forward Strain on the Trailing Head: Dissociation of K413BIO:Microtubule by ATP-If forward strain on the trailing head accelerates its release, then we would predict that release of the trailing head of a processively moving kinesin dimer should be faster than that for a nonprocessive monomeric construct, because the latter is incapable of generating internal strain. The steps leading to ATP-induced dissociation from the microtubule are summarized in Fig. 1. In this scheme, the observed dissociation rate, λ_ATP, depends on the binding constant for ATP (K_ATP) and on the effective rate constant for ATP-induced dissociation, k_e. The value of k_e depends in turn on the values of the forward rate constants for the two irreversible steps that lead to dissociation, ATP hydrolysis (k_h) and subsequent dissociation from the microtubule (k_dMT). Under these conditions, λ_ATP rises hyperbolically with ATP concentration to a limiting value k_e (Equation 1), where k_e = (k_h · k_dMT)/(k_h + k_dMT). If strain accelerates trailing head dissociation, k_e should be greater for K413BIO than for K349. We measured the rate of trailing head dissociation for K413BIO and compared these results to those for monomeric K349. For K413BIO, this was accomplished using two spectroscopic probes, AEDANS and TMR, whose use we have previously described (18). The AEDANS probe monitors motor-microtubule association, and results with this probe for K349 are nearly identical to those using turbidity (data not shown). For K413BIO, the TMR probe monitors neck linker-neck linker reassociation, a step whose rate we have shown is kinetically controlled by dissociation of the trailing head (18). We would therefore predict that results using the TMR probe (closed triangles) should be superimposable on those using the AEDANS probe (open boxes), and Fig. 3 confirms this. Table I summarizes the values of K_ATP and k_e for both probes. Nearly identical results were seen at 10 mM KCl (data not shown). Phosphate Release by K413BIO-Microtubule-Phosphate release occurs concomitantly with trailing head dissociation (1, 2, 20). Therefore, measuring its release kinetics should provide an independent measure of k_e. We accomplished this by mixing a complex of K413BIO-microtubules with a range of ATP concentrations in the presence of a fluorescently labeled phosphate-binding protein (28) and compared our results to K349. The resulting fluorescence transient consisted of an initial exponential burst phase followed by a linear increase, which could be described by the relationship F(t) = A·(1 − exp(−λ_P·t)) + k_ss·t, where F(t) is the time-dependent fluorescence, A is the burst amplitude, λ_P is the rate of phosphate release in the burst phase, and k_ss is the steady-state rate at the microtubule concentration achieved after mixing in the stopped flow. If phosphate release were tightly coupled to dissociation, the values of λ_P and λ_ATP should be nearly identical. This is confirmed in Fig. 3 for both K413BIO (open circles) and K349 (closed circles). Furthermore, the extrapolated value of λ_P at infinite [ATP], k_P, should be essentially identical to k_e. As Table I indicates, this is the case for both monomeric and dimeric kinesin constructs. Finally, our results with phosphate release kinetics confirm the results from turbidity and fluorescence, namely that the effective rate constant for trailing head dissociation is not accelerated by forward strain. Dissociation of K413BIO-Microtubule by ADP-In the absence of added nucleotide, kinesin attaches to the microtubule via only one motor domain (7, 8, 29, 30).
This is illustrated in Fig. 1 by the upper-left kinesin-microtubule complex. If ADP is added, it will bind to the empty catalytic site of the attached (green) head and dissociate the complex (17). This reaction occurs in the absence of internal strain, because ADP does not induce forward stepping and strong attachment of the other, tethered (magenta) head. We would therefore predict that the kinetics of K413BIO and K349 dissociation from the microtubule should be identical. This is confirmed by comparing the solid (K413BIO) and dashed (K349) curves in the inset of Fig. 4. ADP dissociates wild-type kinesin from the microtubule at a rate of ~12 s⁻¹ (17, 31). This is considerably slower than k_cat under processive conditions, where strain would be present, and this finding has been used to support the argument that trailing head dissociation is accelerated by forward strain (17). If processivity depended solely on this mechanism, it would follow that any mutation in kinesin that accelerates the rate of ADP-induced dissociation in the absence of strain should reduce average run length proportionally. Nevertheless, our data show that, although the rate of ADP-induced dissociation for K413BIO is nearly 19-fold larger than for wild-type, mean run length is only reduced 2- to 3-fold (Fig. 2). Evaluation of the Effect of Rearward Strain on the Leading Head We next set out to examine the effect of rearward strain on the leading head by measuring the kinetics of 2′-deoxy-mant-ATP (2′dmT) binding to a K413BIO-microtubule complex. Binding of 2′dmT was monitored by FRET from kinesin tyrosine residues to the mant fluorophore, as previously described (18), and the experimental design is illustrated in Fig. 5A. In the absence of microtubules, binding of 2′dmT to nucleotide-free K413BIO produced a fluorescence increase characterized by a single phase (Fig. 5B, −microtubules). The rate depended hyperbolically on [2′dmT], defining a maximum of 1033 ± 153 s⁻¹ (Fig. 5B, inset, dotted curve). By contrast, mixing a 1:10 K413BIO-microtubule complex with 2′dmT produced a fluorescence increase that occurred in two distinct phases of similar amplitude, separated by a lag (Fig. 5B, +microtubules). The rate of the first phase showed a hyperbolic dependence on 2′dmT concentration, defining a maximum rate of 457 ± 56 s⁻¹, an apparent affinity of 80 ± 49 µM, and an apparent dissociation rate constant of 107 ± 50 s⁻¹ (Fig. 5B, inset, solid curve). The amplitude of this phase is approximately half of that for an equal concentration of K413 in the absence of microtubules. Repeating these experiments with microtubules alone produced no fluorescence change (data not shown). These findings led us to conclude that the first phase in this transient is due to 2′dmT binding to the attached, nucleotide-free head. The rate of the second rising phase also showed a hyperbolic dependence on [2′dmT], defining a maximum rate of 39 ± 4 s⁻¹ and an apparent affinity of 39 ± 10 µM (Fig. 6, inset, dotted curve). Given that the amplitudes and the apparent affinities of the two phases of the fluorescence transient are similar, we propose that the second phase is due to binding of 2′dmT to the leading head of a doubly attached kinesin-microtubule complex. Why is the rate of ATP binding to the leading head so much slower than that for the trailing head? One possibility is that it is rate-limited by the dissociation of bound ADP.
To determine whether this is the case, we measured the rate of 2′dmD dissociation from the tethered head by mixing a complex of K413BIO-2′dmD plus a 10-fold molar excess of microtubules with varying concentrations of ATP in the stopped flow. The resulting fluorescence transient consisted of a single falling phase whose rate depended hyperbolically on ATP concentration, defining a maximum rate constant of 170 ± 17 s⁻¹ (Fig. 6, inset, solid curve). This is over four times faster than the rate of binding of 2′dmT to this head (Fig. 6, inset, dotted curve). Thus, nucleotide binding to the leading head of a doubly attached kinesin-microtubule complex is rate-limited by some process other than ADP release, and we propose that this process is the rearward strain imposed on this head. We can test our hypothesis that ATP binding to the leading head is strain-inhibited by examining the effect of AMPPNP on ADP-induced kinesin dissociation. Adding AMPPNP to a kinesin-microtubule complex induces the two neck linkers to separate from each other, in a manner similar to what is seen when kinesin takes a forward step upon ATP binding (18). This occurs hand-in-hand with an acceleration of ADP release from the tethered head (8, 19) and leads to strong binding of both heads to the microtubule (29, 30) and to immobilization of both neck linkers (18). Furthermore, at equilibrium, the stoichiometry of nucleotide binding is 1 mol of AMPPNP per 2.4 mol of active sites (32). Taken together, these results indicate that AMPPNP binding to the tethered head causes both heads to bind strongly, with the trailing head containing AMPPNP, with the leading head nucleotide-free, and with both heads under strain. This is illustrated in the left half of Fig. 7A. Furthermore, adding ADP to this system will cause one of the two heads to dissociate, leaving one head strongly bound, as illustrated in the right half of Fig. 7A (29, 30). If rearward strain inhibits nucleotide binding to the leading head, we would predict that mixing a kinesin-microtubule complex plus 1 mM AMPPNP in the stopped flow with ADP will dissociate the leading head only very slowly when compared with ADP-induced dissociation in the absence of AMPPNP (Fig. 4). We measured the kinetics of leading head dissociation by mixing a complex of 1:10 AEDANS-labeled K413BIO:microtubule plus 1 mM AMPPNP with a range of ADP concentrations, as illustrated in Fig. 7A. An example of the fluorescence transient produced by mixing with 400 µM ADP is depicted as the red jagged curve in the figure. Its rate demonstrated a hyperbolic dependence on ADP concentration, defining a maximum of 0.28 s⁻¹, nearly three orders of magnitude slower than seen in the absence of AMPPNP (Fig. 4, inset). The apparent second-order rate constant for this process, at 0.016 µM⁻¹ s⁻¹, compares to a value of 1.31 µM⁻¹ s⁻¹ in the absence of AMPPNP (Fig. 3, inset). Similar results were also seen using the rhodamine probe (data not shown). To be sure that the fluorescence changes detected with the AEDANS probe are indeed due to the effects of ADP binding, we directly measured the kinetics of 2′dmD binding to a 1:10 kinesin-microtubule complex in the presence of 1 mM AMPPNP, as described above (Figs. 5 and 6). As shown in Fig. 7B, the rate of the fluorescence rise produced by mixing with 400 µM 2′dmD (final concentration), at 0.29 s⁻¹, was nearly identical to the rate of the fluorescence decrease seen with the AEDANS probe.
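To put the AMPPNP numbers in perspective, the suppression of nucleotide binding by rearward strain can be expressed as the ratio of the two apparent second-order rate constants. The one-line check below is our own illustration; the two rate constants are the values quoted in the paragraph above.

```python
import math

# Apparent second-order rate constants for ADP binding (uM^-1 s^-1),
# quoted in the text with and without 1 mM AMPPNP.
k_on_free = 1.31       # no AMPPNP: the bound head is not under strain
k_on_strained = 0.016  # with AMPPNP: doubly bound, leading head strained

fold = k_on_free / k_on_strained
print(f"~{fold:.0f}-fold suppression ({math.log10(fold):.1f} orders of magnitude)")
# -> ~82-fold, i.e. roughly two orders of magnitude, in line with the
#    statement in the Discussion below.
```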
DISCUSSION The most significant finding of this study is that strain appears to affect one discrete step in the kinesin mechanochemical cycle: binding of ATP to the leading head. This conclusion is supported not only by direct evidence from 2′dmT binding kinetics (Figs. 5 and 6) but also by the effect of AMPPNP on nucleotide binding and nucleotide-induced dissociation (Fig. 7). If rearward strain effectively blocks ATP binding to the leading head, we can predict how fast ATP binding can occur using the values of the rate constants measured in this study. According to our model, ATP binding can only occur after ADP dissociation from the leading head (k_dADP) and trailing head dissociation (k_e). Because these two steps are irreversible under the conditions of our experiments, at infinite ATP concentration the rate of binding will be equal to (k_dADP · k_e)/(k_dADP + k_e). Inserting the data from Table I and Fig. 6, we arrive at a value of 37 ± 2 s⁻¹, which is in remarkable agreement with the measured value of 39 ± 4 s⁻¹ (Fig. 6). Our conclusion is directly supported by recent single-molecule mechanical studies, which show that external load imposed against the direction of motility reduces ADP binding (38). Hence, we conclude that internal load ensures that the two heads of a processive kinesin remain out of phase for many mechanochemical cycles by hindering nucleotide binding to the leading head. The inset of Fig. 7 demonstrates that the second-order rate constant for ADP binding is reduced at least two orders of magnitude in the presence of rearward strain. This implies that strain makes the catalytic site relatively inaccessible to nucleotide. Furthermore, we have shown that kinesin:nucleotide is an equilibrium mixture of two states (15). Taken together, these results suggest that the effect of rearward strain is to drive an equilibrium distribution of catalytic site conformations toward one that is relatively "closed" and inaccessible to nucleotide binding. Our results also provide a critical test of a recently proposed "inchworm" model of kinesin movement (33). In this model, the leading motor is always leading, the trailing motor is always trailing, and one motor remains enzymatically inactive throughout a processive run. Our data show that binding of 2′dmT occurs in two distinct phases (Fig. 5B, red transient), and the rates of both of these phases are considerably faster than k_cat. This means that a processive kinesin moving on a microtubule reaches the steady state after two nucleotide-binding events. This is both consistent with and required by a hand-over-hand mechanism, such as the one depicted in Fig. 1. However, it is inconsistent with an inchworm mechanism, which predicts only one nucleotide-binding event before the steady state is reached. Our model explains how processive movement by kinesin can be both efficient and adaptable. By preventing ATP binding to the leading head, internal strain guarantees that this head will remain strongly attached to the microtubule at the moment the trailing head dissociates. ATP would then bind rapidly to the leading head (>1000 s⁻¹), but hydrolysis and subsequent dissociation would still be relatively slow (k_e = 48-55 s⁻¹, Table I). This disparity gives the trailing head time to swing forward and associate with the microtubule, because we have shown (18) that this process is very rapid (~800 s⁻¹).
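The sequential-step prediction above is easy to verify numerically. The sketch below is our own illustration: the rate constants are the values quoted in the text, and the Monte Carlo error propagation is an assumed convenience, not the authors' method.

```python
# Sketch: predicted saturating ATP-binding rate for the leading head when
# binding must wait for two irreversible steps, ADP release (k_dADP) and
# trailing-head dissociation (k_e). Net rate = product over sum.
import numpy as np

rng = np.random.default_rng(0)

k_dADP = rng.normal(170, 17, size=100_000)  # s^-1, ADP release
k_e = rng.normal(51.5, 3.5, size=100_000)   # s^-1, midpoint of the 48-55 range

k_pred = (k_dADP * k_e) / (k_dADP + k_e)

print(f"predicted ATP-binding rate = {k_pred.mean():.1f} +/- {k_pred.std():.1f} s^-1")
# -> ~39.5 +/- 2.3 s^-1 with the midpoint k_e; the lower end of the k_e
#    range reproduces the paper's 37 +/- 2 s^-1, and both sit close to
#    the measured 39 +/- 4 s^-1.
```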
Processivity would therefore result from two features of the mechanochemical cycle: blocking of ATP binding to the leading head by strain, and very rapid forward stepping of the trailing head and its concomitant docking to the microtubule surface (Fig. 1). A particular advantage of this arrangement is that, if the tethered head were to come across an obstacle during its forward swing, the entire kinesin molecule would dissociate at a rate defined by k_e. This feature would enable kinesin to sidestep an obstruction, diffuse to another microtubule, and continue on with its journey. Does forward strain have any effect on the trailing head? A variety of mechanical studies have suggested that it accelerates trailing head dissociation (1, 9, 29). However, our data with K413BIO do not support this. The effective rate constant for dissociation, k_e, was in fact slower for dimeric kinesin than for a monomeric construct. As we have shown (Equation 1), k_e is a composite rate constant and depends on the rates of ATP hydrolysis (k_h) and microtubule dissociation (k_dMT). Direct measurements using chemical quench methods have consistently shown that, although k_h is ~100 s⁻¹ for dimeric constructs, it is considerably faster for monomers, with estimates placing it at >250 s⁻¹ (2, 20, 22). On the other hand, our previous studies with K349 show that the rate of docking of the neck linker places an upper limit on k_h of ~800 s⁻¹ (23). We have performed fitting to the data in Fig. 3 to obtain values of k_dMT for monomeric and dimeric kinesins, using a k_h value of 100 s⁻¹ for K413 and the limiting values of 300 s⁻¹ and 800 s⁻¹ for K349. These reveal values of k_dMT of 122 ± 27 s⁻¹ for K413 and 143 ± 16 s⁻¹ (k_h = 300) and 111 ± 10 s⁻¹ (k_h = 800) for K349. Thus, even when correcting for differences in the kinetics of ATP hydrolysis between monomeric and dimeric constructs, we find that k_dMT is relatively unaffected by forward strain. However, K413BIO is a mutant construct in which all the surface-reactive cysteines have been eliminated. Hence, it may still be possible that forward strain has some effect on the processivity of wild-type kinesin. Our kinetic characterizations of K349 and K413 have shown that only one step in the mechanochemical cycle is affected (18, 23, and this work). This is the rate of ADP-induced dissociation, which is accelerated 19-fold compared with wild-type (Fig. 4). Furthermore, although K413BIO is processive and has near wild-type in vitro velocities, its mean run length is reduced ~2- to 3-fold (Fig. 2). Thus, it is possible that forward strain may accelerate trailing head dissociation in wild-type kinesin. However, even if this were the case, the degree of acceleration would be relatively small, amounting to no more than a factor of 2 or 3. This degree of acceleration is almost identical to the value predicted by Uemura et al. (29) using unbinding force measurements. Thus, our data with K413BIO clearly show that, although a forward strain-induced dissociation mechanism may modulate the length of a processive run, it is not required for processivity. In summary, this study has shown that the internal strain generated by kinesin during its mechanochemical cycle provides a mechanism that supports processivity. The major effect of strain is to markedly slow ATP binding to the leading head, an effect that guarantees that the two motor domains remain out of phase with each other during multiple mechanochemical cycles.
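A note on the k_dMT values above: assuming Equation 1 expresses k_e as the series combination of hydrolysis and detachment, i.e., 1/k_e = 1/k_h + 1/k_dMT (an assumption on our part, since Equation 1 itself appears earlier in the paper and is not reproduced here), the quoted k_dMT values follow directly from the measured rates.

def k_dmt_from_ke(k_e, k_h):
    """Invert 1/k_e = 1/k_h + 1/k_dMT for k_dMT (all rates in 1/s)."""
    return 1.0 / (1.0 / k_e - 1.0 / k_h)

# K413 dimer: k_e ~ 55 1/s with k_h ~ 100 1/s gives k_dMT ~ 122 1/s,
# matching the value quoted in the text.
print(round(k_dmt_from_ke(55.0, 100.0)))   # -> 122
# K349 monomer: an assumed k_e ~ 97 1/s (not quoted above) reproduces the
# stated k_dMT values for both limiting hydrolysis rates.
print(round(k_dmt_from_ke(96.8, 300.0)))   # -> ~143
print(round(k_dmt_from_ke(97.5, 800.0)))   # -> ~111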
Although strain may also accelerate dissociation of the trailing head, our results show that this effect is not necessary for processive movement. Finally, the strain-dependent mechanism that we describe may have more general applicability. Other molecular motors, such as myosins V and VI, are also processive (34-36). Like kinesin, these motors need a mechanism to keep their individual motor units enzymatically out of phase to prevent premature dissociation from actin. Several of the methods developed in this study are directly applicable to these motors and may be useful in future studies to elucidate the mechanisms underlying their processivity.
Statistical Score Calculation of Information Retrieval Systems using Data Fusion Technique Effective information retrieval is defined as the number of relevant documents that are retrieved with respect to a user query. In this paper, we present a novel data fusion approach in IR to enhance the performance of the retrieval system. The data fusion technique unites the retrieval results of numerous systems using various data fusion algorithms. The study shows that our approach is more efficient than traditional approaches. Introduction A retrieval system is a machine that receives the user query and generates the relevance score for the query-document pair. The process of finding the needed information from a repository is a non-trivial task [1-3], and it is necessary to formulate a process that effectively submits the pertinent documents. The process of retrieving germane articles [4] is termed Information Retrieval (IR). It deals with the representation, storage, organization of, and access to the information items [3]. Fusion is a technique that merges results retrieved by different systems to form a unique list of documents. Document clustering is based on a particular ranked list and does not take advantage of multiple ranked lists. The fusion function accepts these scores as its inputs for the query-document pair. A static fusion function has only the relevance scores for a single query-document pair as its inputs. A dynamic fusion function can have more inputs; the goal is to construct a dynamic fusion function that can adjust the way it fuses multiple retrieval systems' relevance scores for a query-document pair using additional input features such as the query, the retrieved documents, and the joint distribution of the retrieval systems' relevance scores for the query. Various models, schemes, and systems have been proposed to represent and organize the document collection in order to reduce the users' effort towards finding relevant information [5]. In this study we present three different data fusion methods, namely the Rank Position, Borda Count, and Condorcet methods, for ranking retrieval systems, together with four feature selection techniques: the Fisher Criterion, Golub Signal-to-Noise, traditional t-test, and Mann-Whitney rank sum statistic. Related Work Fox and Shaw introduced five combination functions for combining scores [6]: CombMIN = minimum of individual similarities; CombMAX = maximum of individual similarities; CombSUM = summation of individual similarities; CombANZ = CombSUM ÷ number of non-zero similarities; CombMNZ = CombSUM × number of non-zero similarities. Fusion functions that differ from the Comb-functions with respect to the generation of answer sets are also found in the literature [8]. These functions assign ranks to the documents in the answer set, in contrast to the relevance score assignment mechanism adopted in the Comb-functions. Fusion techniques that emulate social voting schemes include the Borda and Condorcet fusions [8], and it has been shown that the use of social welfare functions (Roberts, 1976) as the merging algorithms in data fusion generally outperforms the CombMNZ algorithm. Extensive work on Comb functions has been carried out by Lee [9-11], and based on the results he proposed a few new rationales and indicators for data fusion. He concluded that CombMNZ is the best performing function among them.
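For concreteness, here is a minimal Python sketch of the Comb operators and a plain Borda count; it is an illustration under our own naming, not the implementation evaluated in this paper. Document identifiers and scores are invented, and the vote-splitting rule for candidates left unranked (described later in the Borda Count section) is omitted for brevity.

from collections import defaultdict

def comb_scores(ranked_lists):
    """Fuse per-system relevance scores with the Fox & Shaw operators.
    ranked_lists: list of dicts mapping doc_id -> relevance score."""
    fused = {}
    docs = set().union(*ranked_lists)
    for d in docs:
        scores = [rl[d] for rl in ranked_lists if d in rl]
        s = sum(scores)
        nz = sum(1 for x in scores if x != 0)
        fused[d] = {
            "CombMIN": min(scores),
            "CombMAX": max(scores),
            "CombSUM": s,
            "CombANZ": s / nz if nz else 0.0,
            "CombMNZ": s * nz,
        }
    return fused

def borda_fuse(rankings, n_candidates=None):
    """Borda count over rankings (lists of doc_ids, best first).
    The top document of each system gets n votes, the next n-1, and so on."""
    n = n_candidates or max(len(r) for r in rankings)
    votes = defaultdict(float)
    for r in rankings:
        for pos, d in enumerate(r):
            votes[d] += n - pos
    return sorted(votes.items(), key=lambda kv: -kv[1])

# Toy example with two hypothetical systems:
sys1 = {"d1": 0.9, "d2": 0.4, "d3": 0.1}
sys2 = {"d2": 0.8, "d1": 0.3}
print(comb_scores([sys1, sys2])["d1"])          # all five Comb scores for d1
print(borda_fuse([["d1", "d2", "d3"], ["d2", "d1"]]))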
The probabilistic approach [12] differs from the Comb-functions in the way it selects a best performing strategy from a pool based on a predetermined probability value. The probabilistic model selects only one strategy from the pool while all other strategies remain unused; hence, evolutionary algorithms are used to select the best performing strategies [13]. Meng and his co-workers (2002) indicate that metasearch software involves four components: 1. Database (search engine) selector: the search engines to be combined are selected using some system selection method. 2. Query reporter: the queries are submitted to the underlying search engines. 3. Document selection tool: the documents to be used from each search engine are determined; the simplest way is to use the top documents. 4. Unification of results: the results of the search engines are combined using merging techniques. Lee's overlap measure is R_overlap = 2·R_common/(R_1 + R_2) (1), where R_i is the number of relevant documents and N_i is the number of nonrelevant documents returned by system i, respectively (the analogous N_overlap is defined on the nonrelevant documents). The overlap ratio of two systems was found to be an important predictive factor for the improvement of the combination; the similarity of the two systems on nonrelevant documents is less important than their similarity on relevant ones. After normalizing the scores for each system on each query by dividing by their respective means, we found the optimal combination for each possible pair. For each feature, we use one of the statistical methods, such as the traditional t-test. A large score suggests that the corresponding feature has different expression levels in the relevant and irrelevant documents and thus is an important feature that will be selected for further analysis. Besides that, some researchers used a variation of the correlation coefficient to select features, for example the Fisher Criterion [13] and Golub Signal-to-Noise. Rank position method The rank positions of the retrieved documents are used to merge the documents into a single list. The rank position is determined by the retrieval system. We call d the original document, while its counterparts in all other document lists are called reference documents of d. The statistical score of document i is calculated from the position information of this document in all the systems (j = 1, 2, 3, ..., n); in this summation, systems not ranking a document are omitted. The union of the top documents is treated as the produced result. Borda Count Method The Borda count and Condorcet methods are based on democratic election strategies. The top-ranked candidate gets n votes and each successor gets one vote less than its predecessor, i.e., (n-1). If a voter leaves some candidates unranked, then the remaining score is evenly divided among the unranked candidates. Then, for each candidate, all the votes are summed, and the alternative with the highest score wins the election. Condorcet method In the Condorcet election method, voters rank the candidates in order of preference. It is a distinctive method that denotes the winner as the candidate that prevails over each of the other candidates in a pair-wise comparison. To rank the documents we use their win and lose values. Selection of Information Retrieval Systems for Data Fusion Technique We consider three approaches for the selection of information retrieval systems to be used in data fusion. Best: the best performing retrieval systems, i.e., those that achieve a high percentage of relevant documents retrieved, are employed for statistical score calculation.
Normal: all systems to be ranked are used in data fusion. Bias: the dissimilarity measure of the retrieval systems is used in data fusion. The Fisher Criterion, Golub Signal-to-Noise, traditional t-test, and Mann-Whitney rank sum statistic were applied to calculate the statistical score, S, for the IRs. In these techniques, each system was measured for correlation with the class according to the measuring criteria in the formulas. The systems were ranked according to the score, S, and the top ranked relevant documents in the IRs were selected. The Fisher Criterion, fisher, is a measure that indicates how much the class distributions are separated. The coefficient has the following formula: fisher = (µ_1 - µ_2)² / (σ_1² + σ_2²), where µ_i is the mean and σ_i² is the variance of the given IR whose documents are top ranked or otherwise in class i. There were two IR classes in this experiment, i.e., the relevant documents in the IRs and the non-relevant documents in the IRs. The statistic gives higher scores to an IR system that returns relevant documents with respect to the user query, whose mean differs greatly between the two classes relative to their variances. Golub used a measure of correlation that emphasizes the "Signal-to-Noise" ratio, signaltonoise, to rank the relevant documents that are retrieved from the IRs. It is very similar to the Fisher Criterion but uses another related coefficient formula: signaltonoise = (µ_1 - µ_2) / (σ_1 + σ_2), where µ_i is the mean and σ_i is the standard deviation of the relevant documents retrieved in class i. The traditional t-test, ttest, assumes that the values of the two class variances are equal. The formula is as follows: ttest = (µ_1 - µ_2) / sqrt(σ²(1/n_1 + 1/n_2)), where µ_i is the mean of the relevant documents in class i and σ² is the pooled variance. The Mann-Whitney rank sum statistic, mann, has the following formula: mann = n_1·n_2 + n_1(n_1 + 1)/2 - R_1, where n_i is the size of class i and R_1 is the sum of the ranks in class 1. The score, S, for each relevant document retrieved in the IR is thus calculated by using the formula of one of these statistical techniques. The bias concept is used for the selection of IR systems for data fusion. The cosine similarity measure is given by the following equation: sim(D_1, D_2) = Σ_k d_1k·d_2k / (sqrt(Σ_k d_1k²)·sqrt(Σ_k d_2k²)) (8). The bias between these two vectors is defined by subtracting the similarity value from 1: bias(D_1, D_2) = 1 - sim(D_1, D_2) (9). We may use any combination of the above measures to calculate the statistical score of the information retrieval systems. Discussion So far, our study suggests that, for our choice of retrieval systems, there is an opportunity to improve the retrieval performance by fusing the above-mentioned approaches. Our preferred design for effective statistical score calculation of information retrieval systems is a multilayer technique that maximizes precision and improves the retrieval performance to satisfy the user's needs. In this paper we have summarized various methods used in different published articles; incorporating and integrating a few of these approaches may lead to better precision and recall values. Our contribution is to bring these methods together, integrating techniques from various research articles so that they will be useful to researchers in their future work.
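To make the four feature-scoring statistics and the cosine-based bias concrete, the following Python sketch evaluates them with scipy on invented class samples; the library routines stand in for the hand-computed formulas above and are not the code used in this study.

import numpy as np
from scipy import stats

def fisher_criterion(x1, x2):
    """Fisher score: squared mean separation over summed variances."""
    return (x1.mean() - x2.mean()) ** 2 / (x1.var(ddof=1) + x2.var(ddof=1))

def signal_to_noise(x1, x2):
    """Golub signal-to-noise: mean difference over summed std devs."""
    return (x1.mean() - x2.mean()) / (x1.std(ddof=1) + x2.std(ddof=1))

def cosine_bias(u, v):
    """bias(u, v) = 1 - cosine similarity, as in Eq. (9)."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    return 1.0 - u.dot(v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Hypothetical per-document scores on relevant (class 1) and nonrelevant
# (class 2) documents; values are illustrative only.
rel = np.array([0.82, 0.75, 0.91, 0.68, 0.77])
non = np.array([0.31, 0.45, 0.28, 0.52, 0.39])
print("fisher:", fisher_criterion(rel, non))
print("s2n:   ", signal_to_noise(rel, non))
print("t-test:", stats.ttest_ind(rel, non, equal_var=True).statistic)
print("mann:  ", stats.mannwhitneyu(rel, non).statistic)
print("bias:  ", cosine_bias([1, 0, 2], [2, 1, 1]))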
Sympathoexcitation Associated with Renin-Angiotensin System in Metabolic Syndrome Renin-angiotensin system (RAS) is activated in metabolic syndrome (MetS), and RAS inhibitors are preferred for the treatment of hypertension with MetS. Although RAS activation is an important therapeutic target, underlying sympathetic nervous system (SNS) activation is critically involved and should not be neglected in the pathogenesis of hypertension with MetS. In fact, previous studies have suggested that SNS activation interacts with RAS activation and/or insulin resistance. As a novel aspect connecting the importance of SNS and RAS activation, we and other investigators have recently demonstrated that angiotensin II type 1 receptor (AT1R) blockers (ARBs) improve SNS activation in patients with MetS. In animal studies, SNS activation is regulated by AT1R-induced oxidative stress in the brain. We have also demonstrated that orally administered ARBs cause sympathoinhibition independent of their depressor effects in dietary-induced hypertensive rats. Interestingly, these benefits of ARBs on SNS activation in clinical and animal studies are not class effects of ARBs. In conclusion, SNS activation associated with RAS activation in the brain should be the target of treatment, and ARBs could have a potential benefit on SNS activation in patients with MetS. Introduction Metabolic syndrome (MetS) is characterized by visceral obesity, impaired fasting glucose, dyslipidemia, and hypertension [1, 2]. The increasing number of patients with MetS is a worldwide health problem because patients with MetS are considered to be at high risk for cardiovascular disease. In the pathogenesis of MetS, the renin-angiotensin system (RAS) is activated in various organs and tissues [3-6], and RAS inhibitors, such as angiotensin converting enzyme (ACE) inhibitors or angiotensin receptor blockers (ARBs), are preferred for the treatment of hypertension with MetS because of their prominent depressor effect with improvement of insulin resistance [7-9]. Furthermore, in the pathogenesis of hypertension with MetS, underlying sympathetic nervous system (SNS) activation is critically involved [10-14], and previous studies have suggested that SNS activation interacts with insulin resistance [15] and/or RAS activation [16, 17]. In animal studies, SNS activation is regulated by angiotensin II type 1 receptor (AT1R)-induced oxidative stress in the brain [18-23], and recently, we have demonstrated that SNS activation is strongly mediated by AT1R-induced oxidative stress in the brain of animal models with MetS [24]. As the novel aspect connecting the importance of SNS and RAS activation, in the present paper we focus on the SNS activation mediated by RAS activation in the brain in MetS. Sympathetic neural discharge is markedly potentiated [25], leading to increased insulin levels and elevated blood pressure [10]. Elevated levels of muscle sympathetic nerve activity (MSNA) are associated with obesity-induced subclinical organ damage, even in the absence of hypertension [30]. Interestingly, central obesity demonstrates augmented sympathetic outflow when compared to noncentral adiposity body types [27, 31-33], even when hypertension is not present. Furthermore, the presence of hypertension in MetS results in a further augmentation of SNS activation [25, 33].
It should be noted that activation of the SNS would be expected to decrease body weight. However, this does not occur in obese subjects with MetS. This has recently been attributed to an interruption of the SNS action on energy expenditure, as suggested by Grassi [14], who modified the scheme originally proposed by Landsberg. Although it is difficult to prove this action in humans, activation of brown adipose tissue, which increases energy expenditure, does not occur in obese subjects despite the fact that renal and lumbar SNS activation does occur [34]. The accumulation of body fat with a positive energy balance was first shown in animal models to result in SNS activation [35, 36]. The chronic increase in basal SNS activation is presumably aimed at stimulating β-adrenergic thermogenesis to prevent further fat storage [37] but can also stimulate lipolysis, increasing nonesterified free fatty acids and contributing to insulin resistance. Adipose tissue itself can act as an endocrine organ and express various adipokines, which may directly or indirectly activate the SNS [29]. A chronically elevated SNS activation could in turn impair β-adrenergic signaling, reduce stimulation of metabolism, and contribute to obesity and insulin resistance [10, 29]. Moreover, evidence demonstrates that insulin release increases MSNA and enhances the arterial baroreflex gain of SNS activation [38]. Furthermore, SNS activation is important for the occurrence and progression of hypertension leading to hypertensive organ damage in MetS [15]. Thus, treatments targeting SNS activation are reasonable for patients with MetS. Sympathetic Overactivation in Animal Models with MetS It has been well documented that insulin can augment sympathetic outflow in animals via intracerebroventricular administration [39, 40]. Sympathetic outflow increases upon the injection of insulin into the third cerebral ventricle of rats [39]. A recent study has also demonstrated that insulin acts on the arcuate nucleus, via the paraventricular nucleus of the hypothalamus, to increase SNS activation and the baroreflex gain of SNS activation [41]. While very little insulin is produced in the central nervous system, central insulin receptors are found in the hypothalamus [42] and can cause a coactivation of the SNS through transport-mediated uptake of peripherally secreted insulin across the blood-brain barrier [29]. In addition, the arcuate nucleus is unusual in that it contains highly permeable capillaries [43], such that insulin may directly activate receptors in this area without a specific transport mechanism [44]. These results suggest that the increase in plasma insulin causes sympathoexcitation via central mechanisms in animal models with MetS. As another mechanism, we should discuss leptin. Leptin is an adipocyte-derived hormone that has a key role in the regulation of body weight through its actions on appetite and metabolism, in addition to increasing blood pressure and SNS activation [40]. Rahmouni et al. suggested that mice with diet-induced obesity exhibit circulating hyperleptinemia and resistance to the metabolic actions of leptin. Recently, it was also demonstrated that RAS in the brain selectively facilitates renal and brown adipose tissue sympathetic nerve responses to leptin while sparing effects on food intake [45], and that the hypothalamic arcuate nucleus plays an important role in mediating the sympathetic nerve responses to leptin and in the adverse sympathoexcitatory effects of leptin in obesity [46].
Among the other possible central mechanisms of sympathoexcitation in MetS, oxidative stress in the brain is considered to play a pivotal role. Oxidative stress in the hypothalamus contributes to the progression of obesity-induced hypertension through central sympathoexcitation [47]. We have also demonstrated that AT1R-induced oxidative stress in the rostral ventrolateral medulla (RVLM) induces sympathoexcitation in rats with obesity-induced hypertension [24, 48]. The RVLM is known as a major vasomotor center in the brainstem, and SNS activation is mediated by neuronal activity in the RVLM [49, 50]. In the RVLM, AT1R-induced oxidative stress has been determined to be a major sympathoexcitatory factor [21-23, 51]. Neurons in the RVLM contribute to elevated sympathetic outflow in rats with dietary-induced obesity [52]. In obesity-induced hypertension, systemic oxidative stress is increased and is associated with the development and progression of hypertension in various organs [53-56]. Taken together, it could be considered that SNS activation is increased in animal models with MetS via AT1R and oxidative stress in the brain. Renin-Angiotensin System Activation in MetS Many previous studies have demonstrated that RAS is activated in various organs and tissues in MetS [3-6, 29, 57]. Several peptides involved in the RAS have been implicated in insulin resistance [58-60] or hypertension [61, 62]. Hypercholesterolemia can increase AT1R gene expression on vascular smooth muscle cells [63, 64]. Low-density lipoprotein receptor-deficient mice fed a diet enriched in fat and cholesterol exhibited elevated plasma concentrations of angiotensinogen and angiotensin II [65] and elevated brain angiotensinogen [66]. These results indicate that hypercholesterolemia stimulates the expression of several components of the RAS. Prolonged hyperglycemia and hyperinsulinemia could upregulate the RAS [67-70]. Furthermore, angiotensin II can reduce whole body glucose utilization and insulin sensitivity, increase skeletal muscle and adipose tissue insulin resistance, and impair insulin signaling and action. Recent studies suggest that RAS activation influences glucose homeostasis independent of its ability to regulate blood flow.
[Figure 1: A schema presenting our concept of the regulation of the sympathetic nervous system mediated by the brain renin-angiotensin system in the metabolic syndrome. The numbers on the different arrows are the references from the bibliography.]
Angiotensin II infusion into the interstitial space of skeletal muscle in dogs could result in insulin resistance independent of changes in blood flow [71]. Chronic angiotensin II infusion into insulin-sensitive rats was shown to reduce peripheral glucose use and insulin-induced glucose uptake [72]. In a model of angiotensin-II-induced hypertension, a significant reduction in tyrosine phosphorylation of the insulin receptor and the insulin receptor substrate 1 in skeletal muscle was consistent with a whole-body reduction in insulin-mediated glucose transport [73]. Furthermore, RAS inhibitors could ameliorate insulin resistance [74, 75]. These studies strongly suggest that RAS activation may contribute to insulin resistance in MetS. Additionally, several large-scale clinical trials have demonstrated that the use of ARBs or ACE inhibitors can significantly reduce the incidence of new-onset diabetes in hypertensive patients and/or patients with MetS [76-79].
Renin-Angiotensin-System-Induced Sympathetic Overactivation in MetS Both SNS and RAS are activated in obesity, and each system can upregulate the action of the other [16, 17]. RAS is not only implicated in the observed sympathetic overdrive in obesity but may also provide a mechanism through which sympathetic overactivation leads to chronic hypertension [29]. In a previous clinical study, the inhibition of angiotensin II for three months in patients with MetS reduced MSNA by 21% [80]. With regard to central SNS regulation, sympathetic outflow is strongly mediated by RAS activation in the brain. It has already been demonstrated that RAS in the brain mediates SNS activation via oxidative stress in animal models of hypertension and/or heart failure [18-23]. In rats with obesity-induced hypertension, AT1R-induced oxidative stress in the RVLM induces sympathoexcitation [24, 48]. Taken together, it could be considered that SNS activation is mediated by RAS activation and oxidative stress in the brain in MetS. Angiotensin II Receptor Blockers Cause Sympathoinhibition in MetS In hypertensive patients with MetS, RAS inhibitors such as ACE inhibitors or ARBs are preferred [7-9]. In our recent study, we found several new results: (1) telmisartan, but not candesartan, reduced plasma norepinephrine concentrations in patients with MetS in spite of similar depressor effects; (2) the amelioration of baroreflex dysfunction in patients with MetS was significantly greater in the telmisartan-treated group than in the candesartan-treated group [28]. Our findings provide novel insight indicating that ARBs have beneficial effects on autonomic function in patients with MetS; moreover, the sympathoinhibitory effect of ARBs might not be a class effect. We also previously demonstrated that telmisartan inhibits SNS activation in hypertensive rats [22, 23]. In animal studies, direct microinjection of ARBs into the RVLM or intracerebroventricular infusion of ARBs inhibits SNS activation in hypertensive rats [21, 81-83]. Interestingly, a previous study found that telmisartan can penetrate the blood-brain barrier in both a dose- and time-dependent manner to inhibit the centrally mediated effects of angiotensin II following peripheral administration [84]. We demonstrated that oxidative stress in the RVLM causes sympathoexcitation and baroreflex dysfunction [23, 51]. Taken together, these findings lead us to speculate that orally administered telmisartan, but not candesartan, could cause sympathoinhibition due to a reduction of oxidative stress in the brain. Although other ARBs also inhibit the central actions of angiotensin II in the brain [84-89], these effects might differ depending on the pharmacokinetics and properties of each drug [84]. For example, in terms of agonist activity at peroxisome proliferator-activated receptor (PPAR)-gamma, a previous study suggested that orally administered rosiglitazone, a PPAR-gamma agonist, promotes a central antihypertensive effect via upregulation of PPAR-gamma and alleviation of oxidative stress in the RVLM of spontaneously hypertensive rats [90]. Although both telmisartan and candesartan can act as partial agonists of PPAR-gamma, only telmisartan achieves this effect at therapeutic doses [91], and telmisartan might have benefits associated with the agonistic effect on PPAR-gamma to a greater extent than candesartan [87, 88].
Further studies are necessary to clarify whether the ARB-induced sympathoinhibitory effect in MetS depends on central PPAR-gamma. Summary RAS and SNS are abnormally activated in MetS, and there are interactions between RAS, insulin resistance, and SNS activation. Among these interactions, SNS activation is mainly augmented by RAS activation and oxidative stress in the brain (Figure 1). In patients with MetS, SNS activation mediated by RAS activation and oxidative stress in the brain should be the target of treatments for hypertension, and ARBs could have a potential benefit on SNS activation.
[3] R. Sarzani, F. Salvi, P. Dessì-Fulgheri, and A. Rappelli, "Renin-angiotensin system, natriuretic peptides, obesity, metabolic syndrome, and hypertension: an integrated view in humans," Journal of Hypertension, vol. 26, no. 5, pp. 831-843, 2008.
Geological development of the Limpopo Shelf (southern Mozambique) during the last sealevel cycle Paleo-shorelines on continental shelves give insights into the complex development of coastlines during sealevel cycles. This study investigates the geologic development of the Limpopo Shelf during the last sealevel cycle using multichannel seismic and acoustic datasets acquired on the shelf in front of the Limpopo River mouth. A detailed investigation of seismic facies, shelf bathymetry, and a correlation to sea level revealed the presence of numerous submerged shorelines on the shelf. These shorelines are characterized by distinct topographic ridges and are interpreted as coastal dune ridges that formed in periods of intermittent sealevel still-/slowstand during transgression. The shorelines are preserved due to periods of rapid sealevel rise (melt water pulses) that led to the overstepping of the dune ridges, as well as due to early cementation of accumulated sediments that increased the erosive resistance of the ridges. The high along-shelf variability of the submerged dune ridges is interpreted as a result of pre-existing topography affecting shoreline positions during transgression. The pre-existing topography is controlled by the underlying sedimentary deposits that are linked to varying fluvial sediment input at different points on the shelf. The numerous prominent submerged dune ridges form barriers for the modern fluvial sediment from the Limpopo River and dam sediment on the inner shelf. They may also facilitate along-shelf current-induced sediment transport. Introduction The shape of continental shelves worldwide is strongly influenced by sediment deposition and erosion as well as shoreline development in response to numerous sealevel cycles in the Pleistocene. Ancient shorelines on the shelf that were drowned during transgressive stages may form distinctive remnants that are potentially buried by subsequent sedimentation (e.g., Locker et al. 1996; Gardner et al. 2005, 2007; Nichol and Brooke 2011; Brooke et al. 2014; Green et al. 2014). These shorelines allow insights into coastline behavior in times of rising sea level. Their development depends on various factors such as pre-existing topography, available sediment, and the global dynamics of sealevel rise (Cooper and Pilkey 2004; Pretorius et al. 2016; Green et al. 2018). Paleo-shorelines on the shelf are preserved due to overstepping during periods of rapid sealevel rise (Kelley et al. 2010; Zecchin et al. 2011; Mellett et al. 2012; Green et al. 2013). Additionally, they often show a high resistance to erosion due to early cementation, coarse sediment grain sizes, and the geometry of the surrounding shelf (Storms et al. 2008; Mellett et al. 2012; Green et al. 2013, 2018). In this study, we reconstruct the geological development of the shelf in front of the Limpopo River mouth during the last sealevel cycle. Multichannel seismic and acoustic data are presented that allow a detailed mapping of seismic units in the sub-seafloor. Additionally, bathymetric data reveal coast-parallel topographic ridges on the shelf. These ridges were presumably formed as coastal dunes during periods of sealevel slow-/stillstand during the transgression after the Last Glacial Maximum (LGM) and show a high lateral variability over short distances.
This variability in shelf topography is attributed to differences in outer shelf sediment accumulation/erosion and pre-existing topography on the shelf that affect coastline development during sealevel rise. This study for the first time explores the geological development of the southern Mozambique shelf area and extends previous work on shelf evolution on the southeastern African margin. Regional setting The Delagoa Bight is a large ocean embayment created by the westward offset of the southern Mozambique coastline (Fig. 1). It is characterized by a narrow Limpopo Shelf area of 15-35 km in its western and northern part, with a shelf break at ~−120 m (Figs. 1 and 2). The extensive Inharrime Shelf to the east transitions into a large contourite deposit on the upper slope of the Inharrime Terrace (Fig. 1) (Preu et al. 2011). The region is dominated oceanographically by the Agulhas Current (AC), which is formed by the Mozambique Current (MC) and the East Madagascar Current (EMC) (Fig. 1) (Martin 1981; Lutjeharms 2006). The topographically trapped cyclonic lee eddy in the Delagoa Bight (Fig. 1) (Saetre and Da Silva 1984; Lamont et al. 2010) resembles similar settings along the southeastern African margin (Flemming 1981; Lutjeharms and Da Silva 1988). The northern coast of the Delagoa Bight is formed by the Mozambique coastal plain, which is characterized by Neogene coast-parallel linear dune ridges separated by swamp areas or shallow flat basins (Tinley 1985; Hughes and Hughes 1992; Botha et al. 2003; Armitage et al. 2006; Porat and Botha 2008). Active coastal dune systems at the shoreline reach heights of >100 m due to the stacking of dune generations during successive sealevel highstands (Cooper and Pilkey 2002; Sudan et al. 2004; Armitage et al. 2006). The main source of sediment input to the Delagoa Bight is the Limpopo River, which drains 370,000 km² of hinterland (Flemming 1981). Estimates of the sedimentary load vary between 33 × 10⁶ t/year (Flemming 1981) and 48.8 × 10⁶ t/year (Milliman and Meade 1983), comparable to the sediment loads of the Niger and Congo Rivers at 40 × 10⁶ and 43 × 10⁶ t/year, respectively (Milliman and Meade 1983). Limpopo River sediment is thought to form the Limpopo Cone (Martin 1981); however, a large fraction of the sediment is dispersed in the ocean (Walsh and Nittrouer 2009). Recently, it has been shown that a large part of the Limpopo sediment is transported eastwards on the shelf, attributed to the Delagoa Bight eddy (Schüürman et al. 2019). The southern Mozambique continental shelf has not been a focus of geoscientific research in recent decades, with the exception of studies on coastal development south of Maputo (Cooper and Pilkey 2002; Armitage et al. 2006; Green et al. 2015) and on sediment provenance in the Delagoa Bight (Schüürman et al. 2019). However, the neighboring KwaZulu-Natal shelf in South Africa has been investigated by a number of studies. These focus mainly on the effects of shoreline variations on shelf deposits during the last sealevel cycle and on sediment dynamic processes on the shelf (Flemming 1980, 1981; Ramsay 1994; Green and Uken 2005; Green et al. 2008; Green 2009; Green and Luke Garlick 2011; Green et al. 2013, 2014, 2018; Salzmann et al. 2013; Cooper et al. 2016; Pretorius et al. 2016, 2018). These studies reveal widespread paleo-coastlines on the shelf, including beachrock and aeolianite ridges on the seafloor.
These ridges were formed during the post-glacial transgression in the course of sealevel stillstands and preserved due to overstepping during periods of rapid sealevel rise (Salzmann et al. 2013; Green et al. 2014; Pretorius et al. 2016). Methods The multichannel seismic data presented in this study were acquired during R/V Meteor Cruise M75/3 in 2008 with the University of Bremen high-resolution multichannel seismic equipment (Fig. 1). A Sercel GI-Gun with a volume of 2 × 0.125 l was used as a source, yielding a frequency range of 100-600 Hz with a central frequency of ~200 Hz. Data was recorded with a 50-m-long streamer containing 48 single hydrophones with a 1 m channel spacing. The data were processed applying standard techniques including common midpoint sorting (distance 1 m), band pass filtering, stacking, and post-stack time migration. Vertical data resolution resulting from the source frequency is between 2 and 4 m. Data processing was carried out with the Schlumberger Vista seismic processing software package. For data interpretation, the IHS Kingdom software package was used. Time-depth conversions were done using a seismic velocity of 1500 m s⁻¹, which is a suitable velocity for the shallow sub-seafloor. Additionally, 4 kHz parametric sediment echosounder data were recorded in parallel to the multichannel seismic data acquisition using the hull-mounted Parasound system. This data was used to image the shallowest sediment deposits to a depth of ~20 m at decimeter-scale resolution. Furthermore, bathymetric data was acquired with the hull-mounted Kongsberg EM710 multibeam echosounder (Fig. 2). The data were cleaned and gridded with a grid cell size of 10 × 10 m. Bathymetry of the Limpopo Shelf The Limpopo Shelf is ~20 km wide (Fig. 1) and its surface is characterized by a large number of coast-parallel ridges (Fig. 2). These ridges were observed in water depths between ~40 and 100 m. They show a maximum height of 20 m and widths of between 100 and 700 m (Fig. 2). Their shape is variable, with some showing flat tops and an extended width while others appear narrow with rounded tops. The ridges occur individually or in sets of several individual ridges that are located in close proximity to each other and sometimes overlap (Fig. 2B). The lateral continuity of the ridges is highly variable, making correlation of ridges between bathymetric profiles difficult (Fig. 2). Sets of ridges show a high degree of variability, with individual ridges terminating, splitting, or merging within a few hundred meters distance (Fig. 2B). In front of the Limpopo River mouth, the character of these topographic ridges changes from numerous prominent ridges in the west to sets of individual ridges that appear partly buried by sediment in the east (Fig. 2C). The outer shelf is 3 km wider and relatively flat in the east compared to its narrow width in the west, where topographic ridges lie close to the shelf break (Fig. 2C). The position of individual ridgelines on the shelf differs between profiles (Fig. 2). Three ridgelines can be observed on the eastern bathymetric profiles at ~−95 m, ~−65 m, and ~−40 m water depth (Fig. 2). They can tentatively be correlated to ridges on the western profiles (Fig. 2C), although these show significantly more ridges with a more diverse morphology. Seismic units The seismic data, as presented in Figs. 3, 4, 5, and 6, reveal a complex shelf stratigraphy to depths of greater than 250 m.
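As a quick check on the processing conventions above, two-way travel time converts to depth as z = v·t/2, and a quarter-wavelength rule relates the source frequency to vertical resolution. A minimal Python sketch (the helper names are ours, not from the cruise processing flow):

def twt_to_depth(twt_s, v_ms=1500.0):
    """Convert two-way travel time (s) to depth (m) at constant velocity."""
    return v_ms * twt_s / 2.0

def vertical_resolution(freq_hz, v_ms=1500.0):
    """Quarter-wavelength estimate of vertical resolution (m)."""
    return v_ms / (4.0 * freq_hz)

print(twt_to_depth(0.160))          # 0.160 s TWT -> 120 m (about the shelf break)
print(vertical_resolution(200.0))   # ~1.9 m at the 200 Hz central frequency
print(vertical_resolution(100.0))   # ~3.8 m at the low end of the 100-600 Hz band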
However, in this study, we focus on the upper ~50 m of the seismic profiles and examine five seismic units (SU 1 to SU 5, Table 1) above the first observed major unconformity (MIS 6 unconformity).
[Fig. 2 caption, continued: Profiles were projected to an across-shelf transect orientation, allowing spatial comparison. All profiles are anchored at a similar distance from the coast at their landward end. An increase in shelf width and ridge burial as well as a low-relief outer shelf is visible towards the east.]
Seismic Unit 1 Seismic Unit 1 (SU 1) lies on the outer shelf and shelf edge (Figs. 3, 4, and 5). It shows a thickness of up to 100 m and comprises medium to high amplitude parallel reflectors which onlap landwards onto a distinct unconformity (MIS 6 unconformity) (Figs. 3 and 4). The unit is overlain by SU 5 on the outer shelf (Fig. 4A). The nature of the upper boundary of SU 1 changes from east to west. On Fig. 3, the easternmost profile, SU 5 conformably overlies SU 1, while on Fig. 4, the westernmost profile, SU 1 is truncated at its upper boundary. The unit shows a significant change in appearance further west, where it is much thinner and downlaps onto the underlying unconformity (Fig. 5). Seawards, the reflectors dip steeply (~12°) from the shelf edge to the upper slope and show truncation there, with convolute and contorted reflectors further seaward (Fig. 3A). Amplitude blanking, presumably induced by gas in the sediments, can be observed in the thick eastern parts of SU 1 (Figs. 3 and 4). Seismic Unit 2 Seismic Unit 2 (SU 2) occurs in the eastern part of the study area, where it is preserved extensively on the middle shelf and otherwise limited to small patches within topographic depressions in the MIS 6 unconformity on the outer shelf (Figs. 3 and 4). It shows thicknesses of up to 15 m and consists of low to medium amplitude parallel to subparallel reflectors. SU 2 reflectors downlap seawards onto the MIS 6 unconformity (Figs. 3 and 4). SU 2 is truncated and overlain by SU 3-5 (Figs. 3C and 4C). The geometry of SU 2 appears to dictate the severity of the truncation, which appears most intense on the seaward side of the SU 2 middle shelf occurrence. Seismic Unit 3 Seismic Unit 3 (SU 3) is present on the inner and middle shelf and shows a thickness of 10-20 m. While SU 3 can be observed over the whole shelf width in the western part of the study area (Fig. 5), it is not present on the middle shelf in the eastern part, where SU 2 deposits are present (Figs. 3 and 4). It consists of steeply seaward dipping (5°-10°) reflector packages of medium to high amplitude (Figs. 3, 4, and 5). The reflector dip angles decrease towards the base of the unit (Fig. 5). SU 3 reflectors downlap onto the underlying major (MIS 6) unconformity, especially in the northern parts of the profiles (Figs. 3 and 5). Further seaward, SU 3 truncates SU 2 reflectors (Figs. 3B and 4B). SU 3 is overlain by SU 5 at its most seaward occurrence on the middle shelf (Figs. 3B and 4B). Seismic Unit 4 Seismic Unit 4 (SU 4) is very variable in its appearance and thickness. It occurs from the outer shelf to the inner shelf and shows thicknesses of 1-20 m. The thickness varies significantly, as the unit comprises thin draping deposits as well as the prominent topographic ridges on the shelf (Fig. 3D). The unit appears acoustically transparent in large parts and infills the incisions that were observed in the upper part of SU 3 (Fig. 3D). The topographic ridges likewise show a transparent interior and a high amplitude upper boundary.
SU 4 generally shows an upper boundary that is characterized by high amplitude peak-like events which are partly buried either by SU 4 or SU 5 deposits (Figs. 3C and 4B). SU 4 also contains parallel to subparallel medium amplitude reflectors in some parts of the middle shelf (Figs. 4D and 5D). The unit is generally overlain by SU 5 (Figs. 3, 4, and 5), and the rounded to flat tops of the ridges suggest some degree of truncation of SU 4. Seismic Unit 5 Seismic Unit 5 (SU 5) occurs uppermost in the seismic and acoustic data. It is a thin draping unit that is present on the whole shelf. The thickness of this unit varies between <1 m on parts of the outer shelf (Fig. 4B) and several meters on the inner shelf (Fig. 4D). SU 5 is beyond the resolution capabilities of the multichannel seismic data; however, the echosounder data show it to be an acoustically transparent unit for the majority of the shelf (Fig. 3B-D). A notable exception is on the inner shelf landward of prominent ridges, where it is characterized by high amplitude horizontal parallel reflectors (Figs. 4D and 5D). SU 5 shows a convex shape at the seafloor on the middle shelf (Figs. 3C and 4C), and moats run along topographic ridges (Fig. 4D). Locally, the acoustic signal is attenuated beneath high amplitude reflectors of this unit (Fig. 5D). Seismic stratigraphy The observed seismic units were all deposited during different periods of sealevel development, as interpreted from their appearance in the seismic and acoustic data. All interpreted seismic units occur above a major unconformity that is present on the whole continental shelf. Due to the nature of the seismic units encountered above it and the implied sealevel development, this unconformity has been attributed to the MIS 6 sealevel lowstand, which has not been imaged and interpreted in the region previously. SU 1 on the outer shelf and the shelf edge is interpreted as a shelf-edge sediment wedge, due to its medium amplitude parallel reflections (Figs. 3 and 4) that suggest relatively low-energy sediment accumulation. The thickness of the unit, especially in the eastern profiles (Figs. 3 and 4), indicates considerable sediment input to the outer shelf. Slope failures at the upper slope and associated mass transport deposits are apparent as convolute and contorted reflectors (Figs. 3 and 4). Sea level appears to have been at ~−100 m or higher during SU 1 deposition, based on the landward limit of the unit. Furthermore, a lower sea level would have resulted in a higher energy environment at the outer shelf and the shelf edge, resulting in more disturbed reflection patterns. The unit does not appear to be active during high sea level, as it is restricted to the outer shelf and shelf edge (Figs. 3, 4, and 5), where it is overlain by SU 5 (Fig. 4A). Thus, SU 1 formation is attributed to a time of lowered, but not LGM, sea level. SU 2, with its medium amplitude parallel reflectors that indicate a relatively low-energy environment and that downlap onto the MIS 6 unconformity, is interpreted as a regressive shelf deposit (Figs. 3 and 4). This sediment accumulation presumably formed during falling sea level at medium water depths, consistent with its relatively low-energy reflection character. Overlying SU 1 and 2, the packages of steeply seaward dipping reflectors of SU 3 show a high amplitude that indicates a high-energy depositional environment and are interpreted as shallow water regressive deposits.
Based on the reflection geometry, SU 3 may be interpreted as a shore face setting (e.g., Novak and Pedersen 2000), where the steeper upper part and the flatter lower part of the SU 3 reflections may represent the upper and lower shore face, respectively. These could only be formed and preserved during regression, which is also consistent with the truncation of SU 2 by this unit (Figs. 3 and 4), indicating the erosion of deposits formed in medium water depths on the shelf during continued sealevel fall (Figs. 3B and 4B). The shallow incisions visible in the top of SU 3 (Figs. 3D and 6) resemble fluvial valleys (e.g., Nordfjord et al. 2006; Darmadi et al. 2007) and suggest subaerial exposure of the surface, supporting the interpretation of SU 3 as a regressive unit. The overlying SU 4 shows high amplitude surfaces and transparent unit interiors, suggesting coarse sediments and/or hard surfaces. This suggests a high-energy environment for the formation of the unit, as can be expected for a transgressive phase. The prominent topographic ridges of SU 4 resemble lithified coastal dune ridges that formed during sealevel slow-/stillstands on the shelf (Salzmann et al. 2013; Green et al. 2014; Pretorius et al. 2016). SU 4 also fills the fluvial incisions in SU 3 (Figs. 3C and 6), indicating a flooding of the former land surface (Fig. 3D). SU 4 thus represents a generally thin transgressive deposit with numerous submerged coastal dune ridges that show heights of up to 20 m. The base of these dune ridges corresponds generally to the sea level during their formation, which is used for a sealevel reconstruction ("Post-LGM sealevel rise" section). The thickness of SU 5 is comparably small (Fig. 3B-D), except behind dune ridges on the middle to inner shelf where material is apparently dammed (Fig. 4D). The unit is interpreted as highstand sediments formed by the modern sedimentation on the Limpopo Shelf, supported by its presence as the uppermost sedimentary deposit in most locations on the shelf (e.g., Green et al. 2013). The high amplitudes of landward SU 5 reflectors (Fig. 4D) may represent coarse sediments from the Limpopo River that accumulate behind submerged coastal dune ridges, similar to examples from the South African continental shelf (Flemming 1981). These interpretations suggest that SU 1-5 represent a full sealevel cycle, from regressive shelf deposits (SU 2) and regressive shore face sediments (SU 3) to transgressive (SU 4) and modern highstand sediments (SU 5). Thus, the major unconformity underlying all described seismic units corresponds to the sub-aerially exposed shelf surface of the penultimate glacial sealevel lowstand (MIS 6). The erosive surface associated with the LGM sealevel lowstand (MIS 2) is less apparent between SU 2/3 and SU 4. This may be due to the relatively short duration of the maximum sealevel lowstand during the LGM (Ramsay and Cooper 2002). Late Pleistocene evolution of the South Mozambique continental margin Pre-LGM sealevel fall Based on the interpretation of seismic units and their geometric relationships, the development of the Limpopo Shelf in the Late Pleistocene has been reconstructed (Fig. 7). As mentioned above, the uppermost major erosive unconformity has been attributed to the MIS 6 (191-130 ka, Lisiecki and Raymo 2005) lowstand surface during the penultimate glaciation, when the complete shelf was sub-aerially exposed (Waelbroeck et al. 2002).
This unconformity is directly overlain by regressive deposits of SU 2 on the middle and parts of the outer shelf (Figs. 3E and 4E), which are interpreted as having formed during the sealevel fall of MIS 4 and 3 (MIS 4: 71-57 ka; MIS 3: 57-29 ka; Lisiecki and Raymo 2005) (Fig. 7B). Transgressive and highstand deposits between the MIS 6 unconformity and SU 2 could not be identified, possibly due to the thin nature of such deposits and the limited resolution and signal penetration of the seismic and acoustic data, respectively. The regressive deposits (SU 2) are generally eroded and only preserved in some locations on the middle and outer shelf, where they are directly overlain by SU 3 (Fig. 3A, B, and E). The shallow water regressive deposits of SU 3 are thus younger and were presumably also formed during the regression of MIS 4-2, landward of the SU 2 deposits (MIS 4: 71-57 ka; MIS 3: 57-29 ka; MIS 2: 29-14 ka; Lisiecki and Raymo 2005) (Fig. 7B). During the regression, most deposits of SU 2 were eroded as they came into the high-energy shallow water zone, and SU 3 often directly overlies the MIS 6 unconformity (Figs. 3, 4, 5, and 7A, B). The preservation of SU 2 strata on the middle shelf may be due to a period of relatively stable sea level at ~−50 m during MIS 3 before continued rapid sealevel fall towards MIS 2 (Ramsay and Cooper 2002). A relatively stable sea level for an extended time period during MIS 3 at ~−50 m resulted in truncation of the upper boundary of SU 2 but allowed the preservation of SU 2 strata below the wave base. The rapid sealevel fall from −50 to −90 m at the onset of MIS 2 (Ramsay and Cooper 2002) allowed SU 2 deposits to remain in place on the middle shelf (Fig. 7B and C). The following drop of sea level to the LGM minimum (Fig. 7D) was relatively short and did not allow sufficient time for more extensive subaerial truncation of the SU 2 deposits. However, preserved SU 2 deposits are not present towards the west of the study area (Fig. 5), which suggests that this configuration may be locally caused by, e.g., pre-existing seafloor morphology. Such morphology could be caused by additional sediment input from the Limpopo River leading to thick SU 2 deposits in the east, contrasting with originally relatively thin SU 2 deposits in the west. Several incisions were observed in the upper boundary of SU 3 on the middle and inner shelf which may correspond to fluvial valleys (Figs. 3A, D and 6). The incisions show widths of 100-1000 m and depths of up to 20 m (Figs. 3A, D and 6). The modern Limpopo shows a width of around 100 m close to its termination, with a flood plain of several kilometers width including numerous meander loops (Spaliviero et al. 2014; Sitoe et al. 2015). Thus, these fluvial channels probably represent the course of the Limpopo River during shelf exposure (Figs. 3A, D and 6). This suggests an eastward course of the paleo-Limpopo from the modern Limpopo River mouth across the exposed shelf during sealevel lowstand. The further paleo-Limpopo course on the shelf, including its termination at the shelf break, was not imaged but is expected towards the east of the study area (Fig. 2A). The LGM maximum sealevel lowstand in the study area was at −125 m, based on truncated reflectors and a potential small lowstand sediment wedge at the shelf break (Figs. 3A and 7D). This observation agrees with the −125 m LGM sea level established in the region (Ramsay and Cooper 2002; Green and Uken 2005).
During sealevel lowstand, the complete shelf was exposed and the coastline was situated just beyond the shelf break (Fig. 7D), resulting in Limpopo River sediment being exported directly to the slope. In other areas, coastal dune ridges have been interpreted to have formed during the MIS 3 sealevel fall, e.g., on the Western Australian shelf (Brooke et al. 2014), which contrasts with our observations. There are no indications of such coastal dune ridges forming during sealevel fall on the southern Mozambique shelf. The only indications of individual sealevel positions during the overall sealevel fall of MIS 4-2 are the truncations of SU 2 on the middle shelf (Fig. 3). Furthermore, examples of preserved dunes on the exposed shelf are known from the Western Australian shelf (Nichol and Brooke 2011), but could not be observed here, either due to limited dune activity in the vegetated coastal hinterland or due to their erosion during the following transgression. Post-LGM sealevel rise The transgression during the deglaciation occurred in several episodes of sealevel rise at different rates. It includes so-called slowstands, times of very slow sealevel rise, and times of rapid sealevel rise associated with melt water pulses (MWPs) (Fig. 7A; Camoin et al. 2004; Liu and Milliman 2004; Bard et al. 2010; Zecchin et al. 2011). The deposits of SU 4, which overlie the shallow water regressive deposits (SU 3), are interpreted to have formed during transgression in a high-energy regime in relatively shallow water, similar to units mapped further south on the shelf (Green 2009; Green and Luke Garlick 2011). The transparent character of SU 4 without apparent internal structure and with a high amplitude upper boundary is interpreted to correspond to reworked sands. Areas with thin SU 4 deposits (Figs. 3B, D and 4B) may represent remnants of such reworked sands during ongoing transgression. Thicker parts of SU 4 are part of the submerged ridges that are related to paleo-coastlines. The submerged ridges are interpreted as lithified submerged coastal dunes, probably containing beachrock components, which formed directly at the coastline when sea level remained stable for a certain period of time (Mauz et al. 2015; Green et al. 2013). Similar examples have been attributed to various sealevel positions on the shelf along the SE African margin (Flemming 1981; Ramsay 1994; Green 2009; Green et al. 2014, 2018; Pretorius et al. 2016). The deepest discernible sealevel indication within SU 4 deposits can be observed at about −95 m (Figs. 3, 4, and 5), which corresponds to a −95 m coastline on the shelf (Fig. 2). This shoreline correlates with high amplitude truncation surfaces at the upper boundary of SU 3 (Fig. 3B) and seafloor topography of SU 4 deposits (Fig. 4B) in front of a set of dune ridges (Figs. 3 and 4). The observed distance of several hundred meters to the first major dune ridge (Figs. 3 and 4) may correspond to an extended beach area or foredune plain in front of a major coastal dune belt (Hesp and Walker 2013). Such a foredune plain may have formed as a prograding beach system as sea level remained stable at −95 m for several decades to centuries (Fig. 7A and E).
[Fig. 7 caption, continued: However, variability of dune ridges along the shelf is significant (Fig. 2), indicating the large impact of local topography on shoreline development. (G) Further rapid sealevel rise (MWP-1B) to ~−40 m oversteps existing dune ridges on the middle shelf. Fluvial channels are infilled during transgression and a new dune ridge at −40 m develops. (H) After the transgression, the submerged dunes are covered by a Holocene sediment drape (SU 5) that shows damming behind dune ridges on the inner shelf.]
Fluvial channels are infilled during transgression and a new dune ridge at − 40 m develops. (H) After the transgression, the submerged dunes are covered by a Holocene sediment drape (SU 5) that shows damming behind dune ridges on the inner shelf extended beach area or foredune plain is observed at the − 95 m coastline on the western profile where a major dune ridge is present directly at the coastline (Fig. 5E). This may be due to the steep shelf relief in close proximity to the shelf break (Fig. 5) compared to the area further east (Figs. 3 and 4). This setting facilitated sediment export from the coastline directly beyond the shelf break. This paleo-shoreline, including the associated dune ridges, is interpreted to have formed during a slowstand in sealevel rise (Fig. 7A) at~− 95 m that corresponds to the Bølling Interstadial (~14.5 ka) (Salzmann et al. 2013). This slowstand would have allowed the coastline to form considerable topography in the form of aeolianites and early-cementation beachrocks, as suggested for other locations along the SE African margin (Salzmann et al. 2013). Sealevel rise to this position from − 125 to − 95 m appears to have occurred rapidly as no indications for intermittent coastline stability can be observed (Figs. 3, 4, and 5). Continued transgression often erodes coastal dune ridges if sealevel rise occurs gradually and allows for sufficient time for wave ravinement and sediment reworking Pretorius et al. 2018). Contrastingly, shoreline features may be preserved during rapid sealevel rise and associated overstepping, due to the reduced time for wave ravinement (Gardner et al. 2005(Gardner et al. , 2007Storms et al. 2008;Green et al. 2013). This pattern of overstepping has been widely observed along the SE African coast and seems to be a prevailing mechanism for the preservation of such coastal barrier systems (Salzmann et al. 2013;Green et al. 2013Green et al. , 2014Cooper et al. 2016). Additionally, early cementation of coastal dunes also increases their potential resilience to erosion (Gardner et al. 2005(Gardner et al. , 2007Salzmann et al. 2013;Green et al. 2014Green et al. , 2018. Cementation of calcareous sands in coastal dunes has even been associated to allow dunes formed during the regression or sealevel lowstand to withstand the following transgression (Nichol and Brooke 2011;Brooke et al. 2014). A rapid sealevel rise from − 96 to − 75 m during the transgression is associated to a prominent melt water pulse (MWP-1A) between 14.3 and 14.0 ka BP (Fig. 7A) (Fairbanks 1989;Bard et al. 1990). This range of rapid sealevel rise coincides with the depth range of preserved coastal dune ridges in our data ( Fig. 7E and F) and has been associated with the overstepping of similar coastal barrier complexes further south (Salzmann et al. 2013;Green et al. 2014;Cooper et al. 2016). A second consistent line of shoreline indications at~− 65 m is visible as a second line of submerged ridges on the Limpopo Shelf (Figs. 2, 3, 4, and 5). In the eastern part of the study area, this coastline consists of a set of submerged dune ridges, very similar in appearance to the deeper one at − 95 m (Figs. 3 and 4). In the western part (Fig. 5), the − 65 m shoreline corresponds to a prominent set of ridges, although the higher number of individual ridges suggests a more complex coastline development there. This difference indicates a distinct variation in shoreline evolution within the study area (see "Lateral variability of dune ridge occurrence on the Limpopo Shelf" section). 
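To put the "rapid" rise of MWP-1A in context, the average rate implied by the depth and age brackets quoted above can be computed directly. The following minimal Python sketch uses only the values stated in the text (−96 to −75 m between 14.3 and 14.0 ka BP, treated as exact) and is purely illustrative:

```python
# Back-of-the-envelope rate of sealevel rise for MWP-1A, using the depth and
# age brackets quoted in the text. Illustrative only; real uncertainties on
# both depths and ages are substantial.

def rise_rate_mm_per_yr(depth_start_m, depth_end_m, age_start_ka, age_end_ka):
    """Average rate of sealevel rise in mm/yr from depth (m) and age (ka) brackets."""
    rise_m = depth_end_m - depth_start_m                 # total rise in meters
    duration_yr = (age_start_ka - age_end_ka) * 1000.0   # elapsed time in years
    return rise_m * 1000.0 / duration_yr

print(f"MWP-1A: {rise_rate_mm_per_yr(-96, -75, 14.3, 14.0):.0f} mm/yr")  # ~70 mm/yr
```

An average rise of roughly 70 mm/yr, an order of magnitude above modern rates, is consistent with the overstepping and preservation of the dune ridges described above.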
The position of the shoreline at −65 m agrees well with the sealevel slowstand of the Younger Dryas (12.7-11.6 ka, Fig. 7A) (Camoin et al. 2004). Similar to the submerged ridges at −95 m, a stable sea level or a very slow sealevel rise would have allowed the formation of a coastal dune complex (Fig. 7F). Parts of SU 4 showing medium-amplitude parallel and subparallel reflectors on the middle shelf (Figs. 4D and 5D) may represent lagoonal deposits that formed during the sealevel slowstand of the Younger Dryas (Fig. 7A). Protected lagoons and tidal flats may have formed behind barrier islands resulting from coastal dune complexes during slow sealevel rise. This setting is similar to the present-day swamp areas behind the coastal dune systems onshore, where rivers flow through low-lying areas between dune ridges (Fig. 1; Tinley 1985; Hughes and Hughes 1992; Botha et al. 2003). Correspondingly, channel incisions can be observed behind submerged dune ridges offshore, representing such coast-parallel river courses (Fig. 5C). The preservation of this second set of submerged coastal dune ridges at −65 m is probably associated with overstepping during a second phase of rapid sealevel rise, i.e., MWP-1B (11.5-11.2 ka BP), when sea level rose from −58 to −45 m (Fig. 7A; Liu and Milliman 2004). After the rapid sealevel rise of MWP-1B, another period of sealevel slowstand occurred at ~11 ka BP (Fig. 7A) and probably led to the formation of the shallowest set of shoreline indicators on the Limpopo Shelf at ~−40 m (Figs. 2, 3, 4, and 5). The rise in sea level during MWP-1B apparently led to varying shoreline development in relatively close proximity on the shelf, with three intermediate dune ridges in the west (Fig. 5) and two ridges in the center (Fig. 4), contrasting with a lack of intermediate shoreline indicators in the east (Figs. 3 and 7G). The subsequent sealevel rise to modern highstand conditions led to the complete drowning of the shelf area. Highstand sediments consist of SU 5 deposits, a Holocene drape (e.g., Lobo et al. 2004; Cawthra et al. 2012) that covers underlying units and infills topography that was present on the exposed shelf (Fig. 7H; see "Holocene sediment deposition on the Limpopo Shelf" section). Simultaneously with the post-LGM sealevel rise, sedimentation continued at the shelf break, building up SU 1 deposits there, probably resulting from Limpopo River sediment export during the early transgression (Fig. 7E and F). Lateral variability of dune ridge occurrence on the Limpopo shelf The number, geometry, and extent of submerged dune ridges on the Limpopo Shelf are variable over short distances (Fig. 2). Similarly, the overall seafloor morphology varies along the shelf, with an extended and low-relief outer shelf in the east compared to the western profiles (Fig. 2C). The numerous dune ridges in the west (Fig. 5) apparently coalesce eastwards to form the sets of ridges observed there (Figs. 2, 3, and 4). Individual dune ridges are thought to represent specific semistable sea levels, thus corresponding to the overall sealevel evolution (see "Post-LGM sealevel rise" section). However, the higher number of ridges compared to sealevel still-/slowstands during the post-LGM transgression (see "Post-LGM sealevel rise" section) suggests a more complex coastline development than is implied by sealevel evolution alone.
A significant factor for shoreline development during transgression is the pre-existing topography of the shelf, as it governs accommodation space and the rate of shoreline translation. The geometry of the exposed shelf before the onset of transgression is formed by the upper boundary of SU 3 and the erosive truncation of SU 2 (Fig. 7D). SU 3 closely follows the topography of the underlying MIS 6 unconformity, which shows considerable variations along the shelf, especially on the outer shelf (Figs. 3, 4, and 5). In the eastern profiles (Figs. 3 and 4), a low-relief MIS 6 surface is due to increased thicknesses of underlying pre-MIS 6 sediments. These pre-MIS 6 shelf-edge sediment accumulations show large changes of volume along the shelf, leading to a lower-relief outer shelf in the east and a steeper outer shelf in the west. Such volumes of sediment probably originate from the Limpopo River, which flowed eastward on the exposed shelf during the LGM, and possibly also during earlier sealevel lowstands (Fig. 2A; "Pre-LGM sealevel fall" section). Additionally, the preserved regressive SU 2 deposits on the middle shelf (Figs. 3 and 4; "Pre-LGM sealevel fall" section) form a step in shelf topography before the deglacial transgression (Fig. 7D). Due to this topographic configuration of the shelf prior to the onset of sealevel rise, relatively long time intervals of the sealevel rise were spent at steep sections of the shelf, e.g., the truncated SU 2 sediments, with a relatively stable coastline at these locations. This increased geographical stability of the coastline over time allowed the formation of large coastal dune complexes in the east (Figs. 3 and 4), contrasting with relatively equidistant individual coastal dune ridges in the west (Fig. 5). A similar effect may be observed in onshore dune ridge geometry in the area east of the Inharrime River (Fig. 1A), where dune ridges merge as they and the coastline change their strike from SW-NE to SSW-NNE. This may be due to a more stable eastern coastline compared to the southern coast, probably over various sealevel cycles, resulting in the lateral variability in the number and morphology of coastal dune ridges. In the western profile (Fig. 5), the MIS 6 unconformity shows a steeper topography on the outer shelf due to the lack of the pre-MIS 6 shelf-edge sediment accumulations that are present further east. Furthermore, the western profile does not show remnant SU 2 sediments preserved on the shelf (Fig. 5). Therefore, the steeper outer shelf and the relatively smooth topography of the SU 3 regressive deposits led to relatively equally distributed dune ridges across the shelf, including ridges close to the shelf break (Figs. 2 and 5). The numerous ridges are located close to each other and often show overlapping depth ranges, i.e., the sea level expected for a landward ridge also intersects with a seaward ridge (Fig. 5C). Thus, it appears likely that this area of the shelf hosted numerous lagoon settings during transgression, with barrier islands formed by previously active dune ridges, which resisted erosion during continued transgression. Numerous ridges in the western part of the study area occur within ~10 m above the −65 m shoreline (Figs. 4 and 5) and may represent different stages of sealevel evolution during the extended sealevel slowstand of the Younger Dryas (Fig. 7A). Commonly, the preservation of such ridges has been attributed mainly to their overstepping by phases of rapid sealevel rise (Salzmann et al. 2013; Green et al. 2014; Pretorius et al. 2016).
However, the data on the Limpopo Shelf show that these ridges must be resistant to wave erosion, as consecutive landward dune ridges formed even after only minor sealevel rises. The causes of this high resistance to erosion, even under continued wave action, are unclear but may be due to increased early cementation. Holocene sediment deposition on the Limpopo shelf The thickness of Holocene sediments (SU 5) varies considerably on the Limpopo Shelf. Maximum thicknesses (~10 m) occur where sediment, most probably riverine input from the Limpopo River as the major sediment source, accumulates landward of submerged coastal dune ridges (Fig. 4D). Amplitude blanking in sediment echosounder data in this unit is probably due to biogenic methane forming in organic-rich fluvial sediments (Fig. 5D). Holocene deposits are much thinner towards the outer shelf (Figs. 3B, 4B, and 5B). The damming effect results from a capture of Limpopo River sediments on the inner shelf and a limited export towards the shelf break, similar to examples from the South African shelf (e.g., Flemming 1981). Additionally, the convex shape of Holocene sediment bodies on the middle shelf (Figs. 3C and 4C), as well as moats along the dune ridges (Fig. 4D), suggest the shaping of seafloor sediments by along-shelf bottom currents (Cattaneo et al. 2003; Liu et al. 2007). This observation, together with the absence of large Holocene sediment accumulations on the outer shelf, indicates a possible current-driven along-shelf transport of Limpopo River sediments. This modern lack of sediment accumulation on the outer shelf contrasts with the thick SU 1 sediment accumulations at the shelf edge that formed when the Limpopo River could deliver sediment to the outer shelf during periods of lower sea level. This inferred sediment transport agrees with the prevailing northeastward current direction along the coast associated with the Delagoa Bight Lee Eddy (Fig. 1A) (Lutjeharms and Da Silva 1988; Lamont et al. 2010). An efficient eastward transport of Limpopo sediment has also been suggested based on sediment sampling in the area. Provenance analyses of seafloor sediment samples showed a predominantly eastward transport of suspended Limpopo sediments towards the eastern Inharrime Terrace (Schüürman et al. 2019). Conclusion The topography and geologic appearance of the Limpopo Shelf have been shown to be the result of sealevel development and local variations in pre-existing topography. The dynamics of sealevel rise during the post-LGM transgression shaped the shelf deposits by leading to the formation of various paleo-shorelines during periods of relatively stable sea level. Prominent paleo-shorelines have been identified at −95, −65, and −40 m water depth, corresponding to periods of sealevel slow-/stillstand during the overall transgression. The preservation of these shorelines and their associated coastal dune ridges has usually been attributed to overstepping during times of rapid sealevel rise, i.e., melt water pulses. However, the number and position of individual ridges on the Limpopo Shelf suggest a high resistance of these ridges to erosion while landward coastal dune systems are active and older dune systems persist as offshore barriers. Also, a high variability in the number and appearance of ridges along the shelf has been documented.
This is attributed to local variations in the pre-existing topography during transgression, which is shaped by outer-shelf sediment accumulations and remnant MIS 4/3 regressive sediments on the middle shelf. These findings highlight the importance of the local geological setting, such as the position of fluvial sediment sources (here the Limpopo River) on the shelf, in defining shoreline development during transgression. During times of highstand sea level, the prominent submerged dune ridges form barriers to Holocene sediment export from the river mouth to the shelf edge. Sediment is dammed on the inner shelf, and along-shelf current-induced sediment transport is facilitated by the coast-parallel ridges, stressing the impact of shelf development on sediment dynamics on modern shelves. Acknowledgments R/V Meteor Cruise M75/3 was funded by the Deutsche Forschungsgemeinschaft (DFG) and was associated with the SAFARI IODP proposal. We would like to thank the captain, crew, and scientific party of R/V Meteor Cruise M75/3 for their excellent work and support of the scientific research. We also thank Schlumberger and IHS for providing academic licenses for their software Vista 2D/3D seismic processing and The Kingdom Software, respectively. We would like to thank Prof. Ralph Schneider for fruitful discussions and general support of this study. Funding Information Open Access funding provided by Projekt DEAL. Data availability The data used may be accessed by contacting the corresponding author or through the PANGAEA website. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Impact in Oral Cavity due to the Use of Hydrogen Peroxide in Dental Treatment Chemical agents are commonly encountered in everyday activities. They play a major role, especially in the oral cavity. Chemical agents are often... and concentration, the pH of the substance, the quantity applied, the manner and duration of tissue contact, and the extent of penetration into tissue. These oral mucosal changes can vary from diffuse erosive lesions, ranging from simple mucosal sloughing to complete mucosal detachment with extension into the submucosa [2]. Dental treatments provided by dentists cannot be separated from chemical agents that may have an impact on the body, but dentists use them as needed, so they often have no effect on the oral mucosa. One of the chemicals often found to cause chemical burns of the oral mucosa is hydrogen peroxide [3]. Hydrogen peroxide (H2O2) is a colorless liquid with a bitter taste and is highly soluble in water, giving an acidic solution. H2O2 is an oxidizing agent with a wide number of industrial applications, for example, bleaching or deodorizing textiles, wood pulp, hair, fur and foods, the treatment of water and sewage, and use as a seed disinfectant and neutralizing agent in wine distillation. Low concentrations of H2O2 have been found in rain and surface water, in human and plant tissues, in foods and beverages, and in bacteria. Hydrogen peroxide is a reactive oxygen species, along with the superoxide (O2•−), hydroxyl (HO•), peroxyl (ROO•) and alkoxyl (RO•) radicals. In human tissue, intrinsic sources of H2O2 are organelles (especially mitochondria), salivary cells, microorganisms, and the lungs. Hydrogen peroxide production can be followed by the liberation of highly reactive oxygen species in the body via enzymatic and spontaneous redox reactions that often involve interaction with transition metals such as iron or copper [4]. Enzymes such as catalase and glutathione peroxidase catalyze the decomposition of H2O2 (in the case of catalase, into water and oxygen), while superoxide dismutase removes its superoxide precursor. Reactive oxygen radicals are a potential source of cell damage through DNA strand breaks, genotoxicity, and cytotoxicity, but these radicals tend neither to cross biological membranes nor to travel large distances within a cell. Antioxidants provide a source of electrons that reduce hydroxyl radicals to water. However, when exogenous H2O2 levels overwhelm cellular protective mechanisms, H2O2 presents a health hazard. Individuals with acatalasemia lack catalase activity, leading to high endogenous H2O2 levels causing necrosis and ulceration of soft and hard tissues [5]. Hydrogen peroxide is used to decrease plaque formation and to control pyorrhea (gum inflammation). The mechanism of antimicrobial action is the release of nascent oxygen, which is detrimental to anaerobes. It acts on both Gram-positive and Gram-negative organisms. Another antimicrobial mechanism is the debriding effect of H2O2 on bacterial cell walls [6]. It is a widely used professional and self-administered dental product. The most common applications of H2O2 include mouth rinses (1%-3%) and bleaching agents (3%-5%). As patients' needs shift from traditional treatment of caries and dentures towards esthetic treatment, an increasing number of people are visiting the dental office with hopes for whiter teeth.
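For reference, the catalase-mediated decomposition mentioned above is the textbook disproportionation of hydrogen peroxide (a standard reaction, stated here for clarity rather than taken from the source):

$$2\,\mathrm{H_2O_2} \;\xrightarrow{\text{catalase}}\; 2\,\mathrm{H_2O} + \mathrm{O_2}$$

The liberated oxygen is the "nascent oxygen" responsible for the antimicrobial and debriding effects described above.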
Over the counter (OTC) bleaching products are sold as cosmetics and are freely available through stores, pharmacies, and the Internet. Although in-office bleaching is a particularly popular method, the bleaching agent may sometimes come into contact with the patient's gingiva or oral mucosa during the in-office bleaching procedure, even if the gingiva is protected with a light-cured resin or rubber dam and bleaching is performed by an experienced dentist. This may result in temporary whitening and pain in the gingiva or oral mucosa, but the pain subsides within a few hours and the whitened spot eventually regains its original color. This has also been reported with at-home bleaching [7,8] (Table 1). Stomatitis Oral ingestion of 3% H2O2 solutions usually does not result in severe toxicity but may cause vomiting; mild irritation of the mucosa; and burns in the mouth, throat, oesophagus, and stomach. Ingestion of higher concentrations (>10%) can result in more dangerous sequelae, such as burns to mucous membranes and gut mucosa. A dose-dependent reaction is seen, where at high concentration eugenol causes adverse effects on fibroblast- and osteoblast-like cells. This leads to localized necrosis and compromised healing. At lower concentrations, it causes localized hypersensitivity reactions of the oral mucosa, called "contact stomatitis," and of the dermis, called "contact dermatitis," possibly because it can react directly with proteins to form conjugates and reactive haptens [5,11] (Figure 1). An earlier study found almost no cytotoxicity from ... Gingival inflammation In vitro studies have examined signs such as whitening of the gingiva and pain that may result from tooth whitening. No such effects on human gingival fibroblasts (HGFs) have been reported for at-home bleaching agents; they mostly result from in-office bleaching agents and OTC products. Many studies have been carried out on tooth hypersensitivity from in-office or at-home bleaching agents. From in vitro studies, it has been concluded that whitening agents histologically penetrate the dentin and do not damage the pulp. In current dental practice, pain incurred during the procedure is generally treated with medication or fluoride, or by stopping the procedure. Several studies examining plaque control as an index have shown that whitening agents reduce plaque on the gingiva and reduce gingival inflammation. Hydrogen peroxide and carbamide peroxide have been used for debridement during endodontic therapy, in mouth rinses to reduce plaque in individuals with gingivitis, and for treatment of periodontal diseases [13,14][15,16]. Chemical burn Recent studies have reported gingival irritation and chemical burn after at-home bleaching. Kirsten and others reported that patients experienced gingival irritation from at-home bleaching both immediately after the procedure and up to 45 days following treatment. Another study reported that hypersensitivity and gingival irritation disappeared within two days after in-office bleaching. Previous investigations have shown that 15% of patients reported gingival irritation after in-office bleaching, but it was possible to safely control contact of the bleaching gel with the gingival margin by using light-cured gingival dams [17,18]. Among the human genes listed in Table 2, TNFSF10 belongs to the tumor necrosis factor (TNF)-α ligand superfamily. TNFRSF4 and TNFRSF19 belong to the TNF-α receptor superfamily (TNFRSF). Other authors have examined the gingiva and proinflammatory cytokines.
Two types of in-office bleaching agent and one type of at-home bleaching agent were examined. It was found that interleukin (IL)-1β expression increased with in-office bleaching but that there was no change in the expression of IL-10. In inflamed tissue, macrophages and other cells of the innate immune system synthesize TNF-α, a proinflammatory cytokine, to fight off infection and repair tissue damage. This TNF-α then binds to cell surface receptors and induces the production of other cytokines, triggering and maintaining inflammation. It is possible that H2O2 that came into contact with the gingiva triggered a cellular response through the inflammatory cascade via TNF-α, resulting in chemical burn [19,20]. Carcinoma potential Another study reported that using hydrogen peroxide and alcohol on a daily/weekly basis encourages the promotion of malignant neoplasms in the oral mucosa. Alcohol, for instance, potentiates the harm caused by tobacco by as much as 50 times. Mouth washing with hydrogen peroxide, using products made with alcohol, or drinking alcohol every day may increase the risk of oral chemical carcinogenesis. Hydrogen peroxide, also marketed as urea peroxide, sodium perborate, carbamide peroxide, or under other less common names, may have deleterious effects on the enamel, dentin, cementum, pulp and gingiva [6]. These names vary according to the presentation and formulation of the product. However, if both the formulation and the presentation of the product are controlled, and if the product is properly applied by a professional who takes the appropriate compensatory measures, its use is safe. In the oral mucosa, hydrogen peroxide potentiates the effect of many other carcinogenic agents found in the patient's mouth. These carcinogenic agents may originate from food, cosmetics, hygiene products, pesticides, herbicides, tobacco, alcohol, and viruses, among others. Such potentiation happens because these products are promoting agents of oral chemical carcinogenesis [21]. In mice, low doses of hydrogen peroxide (0.1% and 0.4%) administered in drinking water caused adenomas or adenocarcinomas in the duodenum. These findings have been questioned, and further studies on skin have concluded that H2O2 is inactive as a tumour promoter or carcinogen [22,23]. Impact in root resorption and tooth sensitivity An adverse effect that has been reported following internal tooth bleaching is cervical root resorption (an inflammatory-mediated external resorption of the root). The available data support a correlation between internal tooth bleaching and cervical root resorption. In these cases, it is very difficult to distinguish whether the root resorption noted was due to the effect of the bleach or to trauma [21,24]. A high concentration of hydrogen peroxide in combination with heating seems to promote cervical root resorption. The underlying mechanism for this effect is unclear, but it has been suggested that the bleaching agent reaches the periodontal tissues through the dentinal tubules and initiates an inflammatory reaction. In vitro studies using extracted teeth showed that hydrogen peroxide placed in the pulp chamber penetrated the dentine, that heat increased the penetration, and that the penetration is greater in teeth with cervical cemental defects. Case reports and small clinical studies have confirmed that a 10% carbamide peroxide gel used in a bleaching tray at night ...
Hydrogen peroxide reddens the mucosa and gingiva by wounding them, with tissue dissolution and inflammation. Hydrogen peroxide burns and may lead to necrosis of the gingival papillae. It completely cleans the teeth because it demineralizes the enamel and also removes dirt or pigments. The enamel becomes porous and food stains it even more, increasing the need for mouth washing. The enamel becomes thinner every day. Should there be any restoration, it will induce microleakage through its interface with the tooth, causing the enamel to come out easily while eating [22,26]. If burning the mucosa and demineralizing the enamel were the biggest problems, we could think about using hydrogen peroxide in moderation. However, the biggest problem is that hydrogen peroxide has been reported to induce forestomach tumors in rats. These reports showed that exposure to high-dose H2O2 for a sustained period induces oxidative stress that leads to DNA damage in mammalian cells [26,27]. Conclusion The use of chemical agents in dental treatment can cause chemical burns in the oral cavity. Patients therefore need to follow the dentist's post-treatment instructions closely so that these materials do not irritate the oral mucosa.
Probing the top Yukawa coupling at the LHC via associated production of single top and Higgs We study Higgs boson production associated with single top or anti-top via $t$-channel weak boson exchange at the LHC. The process is an ideal probe of the top quark Yukawa coupling, because we can measure the relative phase of $htt$ and $hWW$ couplings, thanks to the significant interference between the two amplitudes. By choosing the emitted $W$ momentum along the polar axis in the $th\,(\bar{t}h)$ rest frame, we obtain the helicity amplitudes for all the contributing subprocesses analytically, with possible CP phase of the Yukawa coupling. We study the azimuthal asymmetry between the $W$ emission and the $Wb\,(\bar{b})\to t(\bar{t})\,h$ scattering planes, as well as several $t$ and $\bar{t}$ polarization asymmetries as a signal of CP violating phase in the $htt$ coupling. Both the azimuthal asymmetry and the polarization perpendicular to the scattering plane are found to have the opposite sign between the top and anti-top events. We identify the origin of the sign of asymmetries, and propose the possibility of direct CP violation test in $pp$ collisions by comparing the top and anti-top polarization at the LHC. I. INTRODUCTION The top quark Yukawa coupling of the 125 GeV Higgs boson (h) is the largest of the Standard Model (SM) couplings, and the precise measurement of its magnitude and properties is an important target of the LHC experiments. Measurements of the loop-induced hgg and hγγ transitions constrain the top Yukawa, or htt, coupling indirectly, if only the SM particles contribute to the vertices with the SM couplings. The observation of the associated production of the Higgs boson and a top quark pair at the LHC [1,2] determines the htt coupling directly, constraining its magnitude to be within about 10% of the SM prediction. In this paper, we study the possibility of measuring a possible CP violating phase of the htt coupling in Higgs boson production associated with a single top or anti-top at the LHC. The cross section is dominated by the so-called t-channel W exchange process, where the W boson emitted from a quark or anti-quark in a proton scatters with a b or $\bar{b}$ quark in the other proton to produce a pair of h and t, or $\bar{t}$. The process is particularly sensitive to the phase of the htt coupling, because we can measure the real and imaginary parts of the htt coupling through the interference between the amplitudes with the htt and hWW couplings, which have the same order of magnitude with opposite sign [3,4] in the SM limit. We can therefore measure the phase of the htt coupling with respect to that of the hWW coupling, whose magnitude and phase have already been constrained rather well [5][6][7] and will be determined precisely in the HL-LHC era. We adopt the following minimal non-SM modification to the top Yukawa coupling,

$$\mathcal{L}_{htt} = -g_{htt}\, h\, \bar{t}\,(\cos\xi_{htt} + i\gamma_5 \sin\xi_{htt})\, t, \qquad (1)$$

where we introduce the positive κ factor as

$$g_{htt} = (m_t/v)\,\kappa_{htt} > 0 \qquad (2)$$

for the normalization of the coupling. The Lagrangian expressed in terms of the chiral two-spinors $t_L$ and $t_R$,

$$\mathcal{L}_{htt} = -g_{htt}\, h \left( e^{i\xi_{htt}}\, t_L^\dagger t_R + e^{-i\xi_{htt}}\, t_R^\dagger t_L \right), \qquad (3)$$

shows that $\xi_{htt}$ is the CP phase of the Yukawa interactions. Its defined range is

$$-\pi < \xi_{htt} \leq \pi \qquad (4)$$

with respect to the hWW coupling term

$$\mathcal{L}_{hWW} = g_{hWW}\, h\, W^-_\mu W^{+\mu}, \qquad (5)$$

for which we take the real positive value

$$g_{hWW} = (2 m_W^2/v)\,\kappa_{hWW} > 0. \qquad (6)$$

CP violation in the htt coupling, $\xi_{htt} \neq 0$, with $\kappa_{htt} = 1$ can arise from radiative effects in the htt vertex due to new interactions which violate CP, or in models with two or more Higgs doublets when the Higgs interactions violate CP.
Once the underlying new physics model is fixed, we often obtain correlations among the non-SM effective couplings, such as $\kappa_{hWW}$, $\kappa_{htt}$, $\xi_{htt}$, and also among the other hff couplings as well as the loop-induced hgg, hγγ and hZγ vertices. In this report, we set

$$\kappa_{htt} = \kappa_{hWW} = 1 \qquad (7)$$

in all the numerical results, in order to focus on the observable CP violating effects for a relatively small phase $|\xi_{htt}| \lesssim 0.1\pi$. In Fig. 1, we show the total cross section of Higgs boson production with a single t or $\bar{t}$ via t-channel W exchange in pp collisions at $\sqrt{s} = 13$ TeV for the effective htt coupling of Eq. (1), with $\kappa_{htt} = 1$ and $|\xi_{htt}|$ between 0 (SM) and π. Also shown is the total cross section for h and a $t\bar{t}$ pair in the same model. They are obtained by MadGraph [8] with the effective Lagrangian of Eq. (1) implemented in FeynRules [9]. Here, and in all the following numerical calculations, we set $m_h = 125$ GeV, $m_t = 173$ GeV, $m_W = 80.4$ GeV, $v = 246$ GeV, $4\pi/e^2 = 128$ and $\sin^2\theta_W = 0.233$ for the electroweak couplings. The factorization scale is set at $\mu = (m_t + m_h)/4$ for the th and $\bar{t}h$ production via t-channel W exchange processes in 5-flavor QCD, following Ref. [10]. [FIG. 1: Total cross sections ($\sqrt{s} = 13$ TeV) for the sum of $pp \to th$ and $pp \to \bar{t}h$ production via t-channel W exchange as a function of the CP phase $|\xi_{htt}|$ for $\kappa_{htt} = 1$. Also shown is the $pp \to t\bar{t}h$ production cross section in the same model.] As for the QCD production of the $t\bar{t}h$ process, we set the factorization and renormalization scales both at $\mu = (2m_t + m_h)/2$, following Ref. [11]. The QCD coupling at $\mu = m_Z$ is set at $\alpha_s(m_Z) = 0.118$ [12]. As is well known, the cross sections for Higgs production with a single t or $\bar{t}$ are sensitive to the relative sign of the htt and hWW couplings, and become 13 times larger than the SM value at $|\xi_{htt}| = \pi$, where the sign of the htt coupling is reversed [3]. Because of this huge enhancement factor, LHC experiments [13][14][15][16][17] have ruled out the region around $|\xi_{htt}| \sim \pi$ for $\kappa_{htt} = 1$. It is worth noting, however, that we focus our attention in this paper on a relatively small magnitude of the CP phase, $|\xi_{htt}| \lesssim 0.1\pi$, where the total cross sections do not deviate much from the SM values, $\sigma(th + \bar{t}h) = 60.85$ fb and $\sigma(t\bar{t}h) = 406.26$ fb at LO, as shown in Fig. 1. The paper is organized as follows. In section II, we give the helicity amplitudes for all four LO subprocesses analytically. In section III, we study event distributions of th and $\bar{t}h$ production with a tagged forward jet, and show the exchanged-W helicity decomposition in the Q (the virtual W mass) and W (the invariant mass of the th or $\bar{t}h$ system) distributions. In section IV, we study the azimuthal angle asymmetry between the W emission plane and the $W^+b \to th$ or $W^-\bar{b} \to \bar{t}h$ production plane about the W momentum direction. In section V, we study t and $\bar{t}$ polarizations in the t ($\bar{t}$) rest frames, as functions of Q, W and the $W^+b \to th$ ($W^-\bar{b} \to \bar{t}h$) scattering angle θ* in the th ($\bar{t}h$) rest frame. In section VI, we study consequences of T and CP transformations, and show the possibility that a CP violation signal can be distinguished from the T-odd asymmetry arising from the final-state scattering phase in pp collisions, by measuring the t and $\bar{t}$ polarizations perpendicular to the scattering plane. The last section VII gives a summary of our findings and remarks on possible measurements at the HL-LHC. Appendix A gives the relation between the helicity amplitudes and the t and $\bar{t}$ spin polarizations, and Appendix B gives polarized t and $\bar{t}$ decay distributions.
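For orientation, the numerical sizes of the two couplings whose interference the analysis exploits follow directly from Eqs. (2) and (6) and the input parameters above. A minimal Python sketch (not part of the paper's toolchain):

```python
# Numerical values of the htt and hWW couplings of Eqs. (2) and (6) for
# kappa = 1, using the inputs quoted in the text (m_t = 173 GeV,
# m_W = 80.4 GeV, v = 246 GeV). Illustration only.

m_t, m_W, v = 173.0, 80.4, 246.0  # GeV

g_htt = m_t / v            # dimensionless Yukawa coupling, Eq. (2), kappa_htt = 1
g_hWW = 2.0 * m_W**2 / v   # hWW coupling in GeV, Eq. (6), kappa_hWW = 1

print(f"g_htt = {g_htt:.3f}")       # ~0.703
print(f"g_hWW = {g_hWW:.1f} GeV")   # ~52.6 GeV
```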
II. HELICITY AMPLITUDES In the SM, four subprocesses contribute to single top plus Higgs production at leading order,

$$ub \to dth \;\;(cb \to sth), \qquad (9a)$$
$$\bar{d}b \to \bar{u}th \;\;(\bar{s}b \to \bar{c}th), \qquad (9b)$$

and likewise four to single anti-top plus Higgs production,

$$db \to u\bar{t}h \;\;(sb \to c\bar{t}h), \qquad (10a)$$
$$\bar{u}b \to \bar{d}\bar{t}h \;\;(\bar{c}b \to \bar{s}\bar{t}h). \qquad (10b)$$

[FIG. 2: Feynman diagrams of the $ub \to dth$ subprocess. The four momenta $q^\mu$ and $q'^\mu$ along the $W^+$ and $P^\mu_{th}$ along the top propagators are shown with arrows.] We work in 5-flavor QCD with a massless b-quark distribution in the proton, where the matching with 4-flavor QCD with a massive b-quark has been demonstrated for the single t plus h processes at the NLO level [11,19]. The subprocesses in parentheses, with second-generation quarks, have exactly the same matrix elements when we ignore quark mass and CKM mixing effects. The Feynman diagrams of the subprocess $ub \to dth$ in Eq. (9a) are shown in Fig. 2. The left diagram (a) has the hWW coupling, while the right diagram (b) has the htt coupling. The $u \to dW^+$ emission part is common to both diagrams. The amplitudes for all the other subprocesses in Eqs. (9) are obtained by replacing the $u \to dW^+$ emission current by the $c \to sW^+$, $\bar{d} \to \bar{u}W^+$ and $\bar{s} \to \bar{c}W^+$ currents, respectively. The Feynman diagrams for anti-top plus Higgs production in Eqs. (10) are obtained simply by replacing the $W^+$ emission currents by the $W^-$ emission currents, and by reversing the fermion-number flow along the b to t transitions to make them $\bar{b}$ to $\bar{t}$ transitions. In pp collisions, the valence-quark initiated subprocesses $ub \to dth$ (9a) and $db \to u\bar{t}h$ (10a) dominate the single top and anti-top production cross sections, respectively. The amplitudes for the subprocess $ub \to dth$ in Fig. 2 are given in Eq. (11), with the effective top Yukawa coupling of Eq. (1) and the SM hWW coupling of Eq. (6). The propagator factors $D^{\mu\nu}_W(q)$ and $D^{\nu\rho}_W(q')$ are the W propagators, with $D_W(q) = (q^2 - m_W^2)^{-1}$, and $D_t(P_{th}) = (P_{th}^2 - m_t^2)^{-1}$ is for the top quark. The four momenta are depicted in Fig. 2. In the limit of neglecting all the quark masses except the top quark mass $m_t$, the amplitudes depend only on the top quark helicity σ/2 for σ = ±1, since only left-handed quarks and right-handed anti-quarks contribute to the SM charged currents in the massless limit. Because the W+ emission current of the massless u and d quarks is conserved, only the spin-1 components of the off-shell W+ propagate in the common $D^{\mu\nu}_W(q)$ term in Eq. (13); this term can be decomposed as in Eq. (15), where λ denotes the helicity of the virtual W+, and the $(-1)^{\lambda+1}$ factor appears for $q^2 < 0$. By replacing the covariant propagation factor in the common W+ propagator with Eq. (15), we can express the amplitudes of Eq. (11) as a sum over the contributions of the three W+ helicity states, Eq. (16), with the emission currents and scattering sub-amplitudes of Eqs. (17) and (18). [FIG. 3: Scattering angles $\hat\theta$, φ and θ*. The polar angle $\hat\theta$ is defined in the Breit frame, whereas θ* is defined in the $W^+b$ rest frame, for the common polar axis along the W momentum direction. The azimuthal angle φ is the angle between the emission plane and the scattering plane.] We calculate the helicity amplitudes $T_{\lambda\sigma}$ for the $W^+b \to th$ process in the th, or $W^+b$, rest frame. Therefore, all the polarization asymmetries presented below refer to the top quark helicity in the th rest frame; see Fig. 3. On the other hand, because massless quark helicities are Lorentz invariant, and the W+ helicity is boost invariant along the W+ momentum direction, which we take as the polar axis in Fig. 3, we can evaluate the $u \to dW^+$ emission amplitudes in the Breit frame [33], where the W+ four momentum has only the helicity-axis component, with Q > 0 and $Q^2 = -q^2$.
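Explicitly, the Breit frame used here is the frame in which the space-like W carries zero energy (a standard construction, spelled out for clarity):

$$q^\mu_{\rm Breit} = (0,\, 0,\, 0,\, Q), \qquad Q = \sqrt{-q^2} > 0,$$

so the virtuality Q plays the role of the W momentum along the common polar axis, and the emitting quark is reflected about the plane transverse to it.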
The u and d quark four momenta are given in Eq. (20), where their common energy $\bar\omega$ and the reflecting momentum along the polar axis follow, respectively, from $\hat{s} = (p_u + p_b)^2$ and $W^2 = P_{th}^2 = (p_t + p_h)^2$. In Eq. (20) and in Fig. 3, the $u \to dW^+$ emission plane is rotated by −φ about the z-axis, so that the top quark azimuthal angle measured from the $u \to dW^+$ emission plane is φ. The $u \to dW^+$ emission amplitudes have very compact and intuitive expressions in the Breit frame, Eq. (22). Here we adopt the convention of Eq. (23) for the three polarization vectors, which differs by the sign of the λ = +1 vector from the standard Jacob-Wick convention. The convention dependence cancels in the product, and our choice makes the CP transformation properties of the sub-amplitudes $J_\lambda$ and $T_{\lambda\sigma}$ simple, as expressed in Eq. (24). It is interesting to note [33] that the $u \to dW^+$ emission amplitudes can be expressed in terms of Wigner's d-functions. In terms of the invariants they are expressed as in Eq. (25), where x is the energy fraction of the d-quark in the ub collision rest frame. It should be noted that for typical events with $x \lesssim 0.1$, a definite ordering holds among the magnitudes of the d-functions. In particular, $J_-$ for the helicity λ = −1 W+ dominates over $J_+$, because a left-handed u-quark tends to emit a left-handed W boson in the forward direction. The helicity amplitudes $T_{\lambda\sigma}$ for the $W^+_\lambda b \to t_\sigma h$ process are calculated in the th rest frame. We first express $T_{\lambda\sigma}$ of Eq. (18) in terms of chiral two-spinors [34], Eq. (28), where we denote $P = P_{th}$, $\xi = \xi_{htt}$, and $\sigma^\mu_\pm = (1, \pm\vec\sigma)$ are the chiral four-vectors of σ matrices. We note that the chirality-flip term for the right-handed top, with the $e^{-i\xi}$ phase factor, grows with P, while the chirality non-flip term for the left-handed top, with $e^{i\xi}$, is proportional to $m_t$, because of the chirality flip by the Yukawa interactions. As for the W-exchange amplitudes, the chirality-flip right-handed top contribution proportional to $m_t$ is non-negligible because of the scalar component of the exchanged W boson, which carries the $1/m_W^2$ factor. In the th rest frame, where the W+ momentum is along the positive z-axis, the four momenta are given by Eq. (29), with the momentum p* of t and h in the th rest frame expressed in units of W/2. The amplitudes $T_{\lambda\sigma}$ can be calculated straightforwardly, giving Eqs. (30), where we denote $m = m_t$ and $E^* = E^*_t$. Note that the term $\sqrt{E^* + p^*}$ appears when the top helicity matches its chirality, while $\sqrt{E^* - p^*}$ appears when they mismatch. The amplitude for λ = +1 does not have a top Yukawa coupling contribution, because the angular momentum along the z-axis is $J_z = +3/2$ for the left-handed b-quark, which cannot couple to the J = 1/2 top quark. For the λ = −1 and λ = 0 W+, both the W and t exchange amplitudes contribute. Most importantly, the λ = 0 amplitudes are enhanced by a factor of W/Q, which is a consequence of the boost factor of the longitudinally polarized λ = 0 W+ wave function. The polarization vectors of Eq. (23) in the Breit frame are invariant for λ = ±1, but the longitudinal vector becomes Eq. (31) in the th rest frame, where both $q^*$ and $q^{0*}$ are of the order of W/2, as in Eq. (29a). Summing over the three W polarization contributions, we find the amplitudes of Eqs. (32) [31], where the A and B factors of Eq. (33) are chosen such that they are positive definite. The ε, δ, and δ′ factors are given in Eq. (34), where ε ≃ 0.21, and δ and δ′ are all small at large W; in particular, δ ≃ δ′ holds rather accurately except in the vicinity of the th production threshold, W ≃ m_t + m_h. At W ≳ 400 GeV, the amplitudes are well approximated by Eqs. (35), where we have dropped the λ = +1 contributions, which are suppressed at high W/Q.
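As an aside on the Wigner d-function representation of the emission amplitudes noted above: the spin-1 d-functions are standard results, and tabulating them makes the forward dominance of the λ = −1 emission explicit. The sketch below is illustrative only; the exact mapping onto $J_\lambda$ is the paper's Eq. (25).

```python
import numpy as np

# Spin-1 Wigner d-functions d^1_{m,m'}(theta) (standard quantum mechanics).
# The emission amplitudes J_lambda are proportional to such d-functions; the
# combination (1 + cos theta)/2 dominating at small theta mirrors the text's
# statement that a left-handed u quark prefers to emit a left-handed
# (lambda = -1) W boson in the forward direction.

def wigner_d1(m, mp, theta):
    """d^1_{m,m'}(theta) for m, m' in {+1, 0, -1}."""
    c, s = np.cos(theta), np.sin(theta)
    table = {
        (1, 1): (1 + c) / 2, (1, 0): -s / np.sqrt(2), (1, -1): (1 - c) / 2,
        (0, 1): s / np.sqrt(2), (0, 0): c, (0, -1): -s / np.sqrt(2),
        (-1, 1): (1 - c) / 2, (-1, 0): s / np.sqrt(2), (-1, -1): (1 + c) / 2,
    }
    return table[(m, mp)]

for theta in (0.1, 0.5, 1.0):  # forward to wide-angle emission
    vals = [wigner_d1(m, -1, theta) for m in (-1, 0, 1)]
    print(f"theta = {theta}: d(-1,-1), d(0,-1), d(+1,-1) = "
          + ", ".join(f"{v:+.3f}" for v in vals))
```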
These approximations (Eqs. (35)) show that the leading λ = 0 contributions, with the W/Q enhancement factor, are proportional to a combination of the A and B terms with the relative phase ξ. Because both the A and B terms are positive definite, their combined magnitudes are smallest at ξ = 0 (SM), where the W-exchange term A and the t-exchange term B interfere destructively, whereas they become largest at |ξ| = π, where the two terms interfere constructively, explaining the order-of-magnitude difference in the total cross section between ξ = 0 and |ξ| = π shown in Fig. 1. This strong interference between the two amplitudes gives the opportunity to accurately measure the htt Yukawa coupling with respect to the hWW coupling. Another important observation from the above approximation is that the CP-violation (CPV) effects proportional to sin ξ are significant only in the amplitude of the right-handed helicity top quark, $M_+$, because $M_-$ is proportional to $e^{-i\xi} + e^{+i\xi} = 2\cos\xi$ at large W, where δ = δ′. We note here that $M_+$ is the leading helicity amplitude at large W, where the chirality-flip Yukawa interactions give a right-handed top quark from the left-handed b-quark. The negative helicity amplitude $M_-$ is suppressed by an additional chirality flip of the top quark, indicated by the factor δ = m/W in Eq. (35b). Before starting discussions of signals at the LHC, let us complete all the helicity amplitudes of the contributing subprocesses for both th and $\bar{t}h$ production. First, the amplitudes for the subprocess $cb \to sth$ are the same as those for $ub \to dth$ in our approximation of neglecting quark masses and CKM mixing, as summarized in Eqs. (32). There are two additional contributions to th production from the anti-quark distributions of the proton, for which the $\bar{d} \to \bar{u}W^+$ emission amplitudes $\bar{J}_\lambda$, expressed in the Breit frame, are given in Eq. (40). We note the relation between $J_\lambda$ and $\bar{J}_\lambda$, Eq. (41): the matrix elements for W+ emission from anti-quarks differ from those from quarks simply by replacing $1 \pm \hat{c}$ by $1 \mp \hat{c}$, thereby changing the preferred helicity of the W+ from λ = −1 (for $u \to dW^+$ and $c \to sW^+$) to λ = +1 (for $\bar{d} \to \bar{u}W^+$ and $\bar{s} \to \bar{c}W^+$). The λ = 0 amplitude remains the same. Note that our special phase convention for the vector boson polarization vectors in Eq. (23) allows the identities in Eq. (41) to hold without the λ-dependent sign factor, $(-1)^\lambda$, that appears in the standard Jacob-Wick convention. Now the $\bar{t}h$ production amplitudes are given by Eqs. (42), where $J_\lambda$ and $\bar{J}_\lambda$ are the same as in Eqs. (22) and (40), respectively, and the $W^-\bar{b} \to \bar{t}h$ amplitudes $\bar{T}_{\lambda\bar\sigma}$ are obtained from $T_{\lambda\sigma}$ by the CP transformation of Eq. (43). Note that the first identity of Eq. (43) expresses the invariance of the amplitudes when all the initial and final states are CP transformed, along with the reversal of the sign of the CP phase. The latter equality is valid in our tree-level expressions, Eqs. (30), where the absorptive parts of the amplitudes (including the top quark width) are set to zero.[i] It is instructive to compare the amplitudes of two subprocesses which are related by CP transformation, such as the amplitudes (32) or (37) for $ub \to dth$ and those of Eq. (42b) for $\bar{u}b \to \bar{d}\bar{t}h$. By using the identities (41) and (43) among the sub-amplitudes, we find the relation between $M_\sigma$ and $\bar{M}_{\bar\sigma}$ when $\bar\sigma = -\sigma$ (Eq. (45)). It is worth noting that if we ignore the absorptive phase of the amplitudes, such as the top quark width in the propagator, the above identity gives Eq. (46), because both φ and ξ appear in the amplitudes only through the phase factors $e^{\pm i\phi}$ and $e^{\pm i\xi}$.
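The last remark can be made explicit with a one-line argument (our paraphrase, not the paper's Eq. (46) itself): if an amplitude depends on φ and ξ only through such phase factors with real coefficients, then complex conjugation is equivalent to flipping both signs,

$$M(\phi, \xi) = \sum_{n,m} c_{nm}\, e^{i(n\phi + m\xi)}, \;\; c_{nm} \in \mathbb{R} \;\;\Longrightarrow\;\; M(-\phi, -\xi) = M(\phi, \xi)^*,$$

so |M|, and hence every spin-summed distribution, is unchanged under the combined reversal of φ and ξ.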
This tells us that all the distributions of the CP-transformed processes are identical even in the presence of a CP-violating phase, ξ ≠ 0, if we ignore the absorptive amplitudes from the final-state interactions. We will discuss the origin of this somewhat unexpected property of the amplitudes in section V. In pp collisions, the dominant subprocess for single production of a Higgs and an anti-top quark comes from valence down-quark scattering with a $\bar{b}$ quark, whose amplitudes are given by Eq. (42a). Since the properties of the $\bar{t}h$ production processes are governed by these amplitudes, we give their explicit form in Eqs. (47), using the same A and B factors of Eq. (33), where the $d \to uW^-$ emission amplitudes $J_\lambda$ are the same as the $u \to dW^+$ emission amplitudes in the $ub \to dth$ subprocess amplitudes, Eq. (32), while the $W^-\bar{b} \to \bar{t}h$ amplitudes $\bar{T}_{\lambda\bar\sigma}$ are obtained from the $W^+b \to th$ amplitudes $T_{\lambda\sigma}$ by the CP transformation in Eq. (43). The chirality-favored helicity of $\bar{t}$ from the right-handed $\bar{b}$ is now −1/2, and the corresponding amplitude $\bar{M}_-$ has the leading $e^{i\xi}$ factor from the $t_L^\dagger t_R$ term in the Lagrangian Eq. (1), while the contribution of the $e^{-i\xi}\, t_R^\dagger t_L$ term is doubly chirality suppressed, by the δδ′ factor. In the helicity +1/2 amplitude $\bar{M}_+$, a single chirality flip (in addition to the flip due to the Yukawa interaction) is necessary, either in the spinor wave function (giving δ) or in the top quark propagator (giving δ′). [Footnote i: Note that the sign factor, −σ, in the identities (43) is a consequence of the phase convention of Refs. [34,35] for the v-spinors of anti-fermions; see also Appendix B of Ref. [36].] Summing up, we find $\bar{M}_-$ to have a significant imaginary part proportional to sin ξ, whereas $\bar{M}_+$ is almost proportional to cos ξ, which is the opposite of what we find for the single t and h production amplitudes. III. DIFFERENTIAL CROSS SECTIONS The differential cross section in pp collisions from the subprocess $ub \to dth$ is given at leading order by Eq. (48), where $D_u$ and $D_b$ are the PDFs of the u and b quark, respectively, in the protons. The colliding parton momenta in the LHC laboratory frame are as in the first term of Eq. (48), whereas the u- and b-quark four momenta are exchanged in the second term. Therefore, the b-quark momentum is negative along the z-axis for half of the events and positive for the other half. In order to perform the azimuthal angle or polarization asymmetry measurements proposed in [31], we should identify the momentum of the virtual W+ emitted from the u (or c, $\bar{d}$, $\bar{s}$) quark. This is possible only when we can identify the sign of the b-quark momentum. A. Selecting the b and $\bar{b}$ momentum direction Because valence quark distributions are harder than the sea quark distributions, we expect that the subprocesses with negative b-quark momentum should have positive rapidity of the hard scattering system. [FIG. 4: The black curves are the sum of the rapidities from the four subprocesses. The quark and anti-quark jets from t-channel W emission are tagged with cuts $p_T^j > 30$ GeV and $|\eta_j| < 4.5$. Events with a negative-momentum b-quark are shown by solid curves, whereas those with positive b-quark momentum are given by dashed curves. The solid black curve shows the total sum of all thj events. The blue curves give the sum of $ub \to dth$ and $cb \to sth$ subprocess contributions (which have exactly the same matrix elements), and the red curves are for the sum of $\bar{d}b \to \bar{u}th$ and $\bar{s}b \to \bar{c}th$ subprocess contributions.]
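The valence-versus-sea hardness argument above can be illustrated with a toy Monte Carlo. The Beta-shaped x distributions below are invented for illustration only (they are not the CT14 set used in the paper):

```python
import numpy as np

# Toy illustration: a harder, valence-like x distribution for the u quark and
# a softer, sea-like one for the b quark give a mostly positive rapidity
# Y = (1/2) ln(x_u / x_b) of the hard system, as exploited in the text.

rng = np.random.default_rng(1)
n = 200_000
x_u = rng.beta(1.5, 4.0, n)   # ~ x^0.5 (1-x)^3, valence-like (harder)
x_b = rng.beta(0.8, 8.0, n)   # ~ x^-0.2 (1-x)^7, sea-like (softer)

Y = 0.5 * np.log(x_u / x_b)   # rapidity of the colliding parton pair
print(f"mean Y = {Y.mean():.2f}, fraction with Y > 1: {(Y > 1).mean():.2f}")
```

With these toy shapes the bulk of the events indeed populate positive Y, which is why a rapidity (or forward-jet) cut tags the b-quark direction.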
As expected, events with Y(thj) > 1 are mostly from a negative-momentum b-quark (blue and red solid curves). Although the purity (the probability of a negative b-momentum) is 95%, only 41% of the total events satisfy the Y(thj) > 1 cut, leaving 59% of the events with mixed kinematics, which results in a significant reduction of the observable asymmetries and polarizations. The situation is much worse for the $\bar{t}hj$ production processes, as shown in Fig. 4(b). With the same Y($\bar{t}hj$) > 1 cut, the purity is only 89% and only 31% of the events are kept. This is mainly because the down quark is not as hard and populous as the up quark in the proton. All results in our study are calculated with the CT14 PDF at LO [37] with the factorization scale $\mu = (m_t + m_h)/4$, following Ref. [10]. In Figs. 5(a) and 5(b), we show the tagged-jet pseudo-rapidity distributions. Now the separation of events with negative b momentum (shown by blue and red thick curves) and those with positive b momentum (shown by blue and red thin curves) is clearer for both thj (a) and $\bar{t}hj$ (b). In Tables I and II, we show the purity and the survival rate of several $\eta_j$ selection cuts for choosing events with negative b or $\bar{b}$ momentum, respectively, for the thj and $\bar{t}hj$ processes. Even for $\eta_j > 0$, when all events are used in the analysis, the purity is higher than 96% for both thj and $\bar{t}hj$ events. In this report, we adopt the selection cut $|\eta_j| > 1$ for the jet tag. Since the purity is then higher than 99% for both thj (Table I) and $\bar{t}hj$ (Table II), we can safely neglect the contribution from events with the wrong b-quark momentum direction, whose analysis would require additional kinematical considerations. Needless to say, events with $\eta_j < -1$ carry exactly the same signal as those with $\eta_j > 1$, because there is no distinction between the two colliding proton beams. From Tables I and II, we find that the selection cut $|\eta_j| > 1$ allows us to study 90% of thj and 88% of $\bar{t}hj$ events with full kinematical reconstruction. In the following analysis, we assume that a significant fraction of h plus single t or $\bar{t}$ production via t-channel W exchange can be kinematically reconstructed, and define observables whose properties are directly determined by the helicity amplitudes of Section II. B. Q and W distributions The differential cross section for the subprocess $ub \to dth$ is given in Eq. (53), where the 1/4 factor is the probability to find left-handed u and b quarks inside their PDFs, the color factor is unity for t-channel color-singlet exchange between the colliding quarks, and the three-body Lorentz-invariant phase space can be parametrized as in Eq. (54), as a convolution of two-body phase spaces integrated over the invariant mass W of the t + h system (Eq. (55)). The j + th phase space is parametrized in the ub, or thj, rest frame, where the four momenta are parametrized as in Eq. (56). The forward peak in the $\eta_j$ distribution in Fig. 5 is due to the square of the common t-channel W propagator in the amplitude, which grows towards $\cos\hat\theta \sim 1$, subject to the jet $p_T$ cut. The t + h phase space in the th rest frame is parametrized with the participating four momenta as in Eq. (58). When evaluating the amplitudes $M_\sigma$, we rotate the frame about the virtual W momentum axis so that the top three-momentum is in the x-z plane, as in Eq. (58), and the azimuthal angle φ is assigned to the $u \to dW$ emission plane as in Eq. (20) in the Breit frame. We show in Fig. 6 the distributions with respect to the momentum transfer Q of Eq. (58). Contributions from the λ = 0 and λ = ±1 W's are shown separately, together with their sum.
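For reference, the two-body kinematics of the t + h system entering the phase-space parametrization above follow from standard relativistic kinematics. A minimal sketch, using the masses quoted earlier:

```python
import math

# Energy and momentum of the top in the th rest frame for a given invariant
# mass W (standard two-body kinematics; shown here for orientation only).

def th_rest_frame(W, m_t=173.0, m_h=125.0):
    """Return (E_t*, p*) in GeV for t and h back-to-back at invariant mass W."""
    E_t = (W**2 + m_t**2 - m_h**2) / (2.0 * W)
    # Kallen (triangle) function gives the common momentum p*
    lam = (W**2 - (m_t + m_h)**2) * (W**2 - (m_t - m_h)**2)
    p = math.sqrt(lam) / (2.0 * W)
    return E_t, p

for W in (400.0, 600.0):
    E_t, p = th_rest_frame(W)
    print(f"W = {W:.0f} GeV: E_t* = {E_t:.1f} GeV, p* = {p:.1f} GeV")
```

At W = 400 GeV this gives E_t* ≈ 218 GeV and p* ≈ 132 GeV, so the factors sqrt(E* ± p*) appearing in the amplitudes are already substantially different just above threshold.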
Because the momentum transfer Q does not depend on the azimuthal angle, integration over φ about the W-momentum axis (the common z-axis in Fig. 3) projects out the W helicity states, and the interference among different λ contributions vanishes. It is clearly seen that the longitudinal W (λ = 0) contribution, in red solid curves, dominates at small Q (Q ≲ 100 GeV) both for thj and $\bar{t}hj$. This is a consequence of the W/Q enhancement of the λ = 0 amplitudes, as shown explicitly in Eqs. (32) for thj and in Eqs. (47) for $\bar{t}hj$. Among the transverse W contributions, λ = −1 (solid green) dominates over λ = +1 (dashed green) for thj, but they are comparable for $\bar{t}hj$. This somewhat different behaviour of the transverse W contributions between the thj and $\bar{t}hj$ processes needs clarification, and we show in Fig. 7 the distribution with respect to W, the invariant mass of the th system. The upper plots (a) and (b) are for Q < 100 GeV, and the lower plots (c) and (d) are for Q > 100 GeV. The left figures (a) and (c) are for thj, while the right ones (b) and (d) are for $\bar{t}hj$. Again, contributions from the three helicity states of the exchanged W are shown separately. It is clearly seen that at small Q (Q < 100 GeV) and large W, W ≳ 500 GeV, the longitudinal W (λ = 0) contribution dominates the cross sections of both thj (a) and $\bar{t}hj$ (b) production. The transverse W contributions are significant at large Q (Q > 100 GeV), where the left-handed (λ = −1) W dominates over the longitudinal W (λ = 0) at W ≳ 400 GeV for thj. On the other hand, the right-handed W− dominates $\bar{t}hj$ production at small W, especially at large Q (Q > 100 GeV); see Fig. 7(d). This is because the λ = +1 W− collides with the right-handed $\bar{b}$-quark, giving a $J_z = +1/2$ initial state with no $\hat\beta$ suppression, as can be seen from the first terms in Eqs. (47), with the $d \to uW^-$ splitting amplitudes $J_\lambda$ of Eq. (22). Summing up, the λ = +1 W− contribution is significant near the threshold (W ≲ 400 GeV) for $\bar{t}hj$ production, while the λ = −1 W− contribution takes over at larger W because of the dominant d-quark contribution. In contrast, for thj production, the λ = +1 contribution (green dashed lines) is deeply suppressed, both as the disfavored helicity of emission from the left-handed u quark at large W and by the p-wave threshold suppression at small W, making it very small at both small (a) and large Q (c). IV. AZIMUTHAL ANGLE ASYMMETRY In Fig. 8(a), we show distributions of the azimuthal angle between the emission ($u \to dW^+$, $\bar{d} \to \bar{u}W^+$, etc.) plane and the $W^+b \to th$ production plane about the common W+ momentum direction in the W+b rest frame; see Fig. 3. Shown in Fig. 8(b) are the same distributions for the $pp \to \bar{t}hj$ process, where the azimuthal angle is between the W− emission plane and the $W^-\bar{b} \to \bar{t}h$ production plane about the common W− momentum direction. The results are shown at W = 400 and 600 GeV for large Q (Q > 100 GeV). The black, red and green curves are for the SM (ξ = 0), ξ = ±0.05π, and ±0.1π, respectively. Solid curves are for ξ ≥ 0, while dashed curves are for ξ < 0. The φ distributions are proportional to $|M_+|^2 + |M_-|^2$, where the top polarization is summed over; likewise, they are proportional to $|\bar{M}_+|^2 + |\bar{M}_-|^2$ for $\bar{t}hj$ events. Analytic expressions for the amplitudes $M_\pm$ and $\bar{M}_\pm$ are given in Eqs. (32) and (47), respectively, from which we can tell that the azimuthal angle dependences reside in the λ = ±1 W± exchange amplitudes. The asymmetry is large at small W and large Q because the transverse W± amplitudes are significant there; see Fig. 7.
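The helicity projection by φ integration invoked above follows from a one-line orthogonality relation (standard Fourier analysis, stated here for completeness):

$$\int_0^{2\pi} e^{i(\lambda - \lambda')\phi}\, d\phi = 2\pi\, \delta_{\lambda\lambda'},$$

so interference between different W helicities survives only in φ-dependent observables, which is precisely what the azimuthal asymmetry exploits.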
The asymmetry nevertheless remains significant at W = 400 GeV, even for events with Q < 100 GeV [31]. We show in Fig. 9(a) the azimuthal angle distributions of right-handed and left-handed top quarks separately, in green and red curves respectively, at W = 400 GeV for events with Q > 100 GeV and ξ = 0.1π. Their sum, given by the black curve, agrees with the corresponding curve in Fig. 8(a). As expected from the analytic expressions, Eqs. (32) and (35), $|M_-|^2$ is almost symmetric about φ = 0, and the asymmetry comes mainly from $|M_+|^2$. Likewise, for the $\bar{t}hj$ events, shown in Fig. 9(b), the asymmetry comes mainly from the left-handed $\bar{t}$ quark, depicted by the red $|\bar{M}_-|^2$ curve. The azimuthal angle asymmetry originates from the interference between the transverse W amplitudes, with the $e^{\pm i\phi}$ phase factor for λ = ±1 W, and the longitudinal W (λ = 0) amplitudes, as shown in Eqs. (32) for $ub \to dth$ and Eqs. (47) for $db \to u\bar{t}h$. We show in Fig. 10 the azimuthal angle distribution of $|M_+|^2$ for the subprocess $ub \to dth$, with the individual W-helicity contributions shown separately. The three squared terms, $|M(\lambda)|^2$ for λ = +1, −1 and 0, give no φ dependence, while the interference terms between the $M_+(0)$ and $M_+(-1)$ amplitudes give terms proportional to sin φ sin ξ with positive coefficients, leading to a positive $\langle\sin\phi\rangle$ for sin ξ > 0. It is clearly seen from Fig. 10 that $|M_+(\lambda = -1)|^2 \simeq |M_+(\lambda = 0)|^2 \gg |M_+(\lambda = +1)|^2$ at W = 400 GeV for Q > 100 GeV for the subprocess $ub \to dth$, consistent with the trend expected from the SM amplitudes at ξ = 0, shown in Fig. 7(c). It is therefore the interference between the $M_+(\lambda = -1)$ and $M_+(\lambda = 0)$ amplitudes, shown by the orange curve in Fig. 10, which determines the asymmetry $\langle\sin\phi\rangle$. The interference between the λ = ±1 W exchange amplitudes gives terms of the form sin 2φ sin ξ, which gives rise to another asymmetry, $\langle\sin 2\phi\rangle$. Because the λ = +1 amplitude is generally small in all Q and W regions, as shown in Figs. 7(a) and (c), the asymmetry $\langle\sin 2\phi\rangle$ turns out to be small in our analysis. We therefore do not show results on $\langle\sin 2\phi\rangle$ in the following, but note that its measurement should improve the ξ sensitivity at a quantitative level, and that it should be sensitive to other types of new physics that affect mainly the transversely polarized W amplitudes. It may be worth noting that the asymmetry $\langle\sin 2\phi\rangle$ is larger in the $\bar{t}hj$ process, because both λ = ±1 transversely polarized W contributions are significant, as can be seen from Fig. 7(d), especially at large Q and small W. In Fig. 11, we show the azimuthal asymmetry, integrated over φ, as a function of the invariant mass W of the th or $\bar{t}h$ system for ξ = 0 (SM), ±0.05π (red curves) and ±0.1π (green curves). The asymmetry for large Q (Q > 100 GeV) events is shown by solid curves, while that for small Q (Q < 100 GeV) is shown by dashed curves. A positive asymmetry is found for thj events, while a negative asymmetry is found for $\bar{t}hj$, in accordance with the observation from the φ distributions in Fig. 8. Generally speaking, the asymmetry is large for large-Q events at around W ~ 400 GeV, where the magnitudes of the transverse and longitudinal W exchange amplitudes are comparable in Fig. 7(c) and (d). For small Q (Q < 100 GeV), the asymmetry is significant only near the threshold, W ~ m_t + m_h, where the transverse W amplitudes are non-negligible in Fig. 7(a) and (b). Because the asymmetry due to the term linear in sin ξ is nearly absent in $|M_-|^2$ for thj and in $|\bar{M}_+|^2$ for $\bar{t}hj$, as can be seen from Eqs. (32b) and (47a) in the δ ≃ δ′ approximation, we can expect an enhancement of the asymmetry by selecting right-handed top quarks and left-handed anti-top quarks.
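As a numerical aside on how such azimuthal moments behave, the sketch below checks the relation between the sin φ and sin 2φ coefficients of a distribution and the corresponding moments. The coefficient values are invented for illustration; in the text they arise from the λ = 0/λ = −1 and λ = +1/λ = −1 interference terms proportional to sin ξ.

```python
import numpy as np

phi = np.linspace(0.0, 2.0 * np.pi, 200_000, endpoint=False)
a, b = 0.3, 0.05
w = 1.0 + a * np.sin(phi) + b * np.sin(2.0 * phi)  # toy dsigma/dphi, un-normalized

def moment(f):
    # uniform grid, so plain sums approximate the phi integrals
    return float((f * w).sum() / w.sum())

print(f"<sin phi>  = {moment(np.sin(phi)):.3f}")      # -> a/2 = 0.150
print(f"<sin 2phi> = {moment(np.sin(2 * phi)):.3f}")  # -> b/2 = 0.025
```

Each interference coefficient is thus read off directly as twice the corresponding moment.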
(32b) and (47a) in the δ ≃ δ′ approximation, we can expect an enhancement of the asymmetry by selecting the right-handed top and the left-handed anti-top. This can easily be achieved when t and t̄ decay semileptonically, since the charged-lepton decay angular distribution in the t or t̄ rest frame takes the form [38]

dΓ(t → bℓν)/d cos θℓ ∝ (1 + σ cos θℓ)/2, (67a)

about the helicity axis, where σ and σ̄ are twice the helicities of t and t̄, respectively, in the th or t̄h rest frame. For instance, if we select those events with

cos θℓ > 0 and cos θ̄ℓ > 0, (68)

then dσ/dW/dφ is dominated by the chirality-favored |M₊|² for thj and |M̄₋|² for t̄hj, and the asymmetry is significantly larger, as shown in Fig. 12 for ξ = 0.1π when Q > 100 GeV. The asymmetries shown by the green curves are those with no cuts applied, and they agree with the corresponding curves in Fig. 11. The asymmetry grows to A_φ(W) ∼ 0.22 for thj events and A_φ(W) ∼ −0.23 for t̄hj events, both at around W ∼ 450 GeV, with the selection cut on the t and t̄ decay charged-lepton angles in Eq. (68).

V. POLARIZATION ASYMMETRIES

We are now ready to discuss the polarization of the top quark in the single top (anti-top) + h production processes. We first note that the helicity amplitudes M₊ and M₋ in Eq. (32) for the subprocess ub → dth, and those in Eq. (47) for db̄ → ut̄h, are simply complex numbers once the production kinematics (√ŝ, Q, W, cos θ̄, cos θ*, φ) are fixed. This is a peculiar feature of the SM, where only the left-handed u, d, and b quarks, and their anti-particles with right-handed helicities, contribute to the single-t and -t̄ production processes via W exchange. It implies that the produced top quark polarization state is expressed as the superposition |t⟩ ∝ M₊ |+1/2⟩ + M₋ |−1/2⟩ in the top quark rest frame, where the quantization axis is along the top momentum direction in the th rest frame, in which the top quark helicity is defined. The top quark is hence in a pure quantum state with 100% polarization, with its orientation fixed by the complex number M₋/M₊. Its magnitude |M₋/M₊| determines the polar angle, and the phase arg(M₋/M₊) determines the azimuthal angle of the top spin direction [ii]. Therefore, the kinematics dependence of the polarization direction can be exploited to measure the CP phase ξ, e.g. by combining matrix-element methods with the polarized top decay density matrix [iii]. Exactly the same applies for the t̄ spin polarization, whose quantum state can be expressed as in Eq. (70) with the helicity amplitudes M± replaced by M̄±. In this report, we investigate the prospects of studying CP violation in the htt coupling through the top and anti-top quark polarization asymmetries in the single th and t̄h processes, respectively, with partial integration over the final-state phase space. For this purpose, we introduce the complex matrix distribution of Eq. (71).

[ii] See Appendix A for a pedagogical review of quantum mechanics.
[iii] The top quark decay polarization density matrices for its semi-leptonic and hadronic decays are given in Appendix B.

Note that the matrix (71) is normalized such that its trace gives the differential cross section of Eq. (48). Here λ/2 and λ′/2 denote the top helicities, and the 1/4 factor accounts for the colliding-parton spin average, just as in Eq. (53) for the subprocess cross section. All the other subprocesses which contribute to the same thj final state, cb → sth, d̄b → ūth, s̄b → c̄th, whose matrix elements are given in Eqs. (37) and (38), should be summed over in the matrix (71).
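As a compact numerical check of this pure-state picture (our own sketch; the amplitude values are arbitrary), one can verify that any fixed pair (M₊, M₋) gives a unit-length polarization vector, with polar angle set by |M₋/M₊| and azimuth by arg(M₋/M₊):

```python
import numpy as np

Mp, Mm = 0.8 - 0.3j, 0.4 + 0.6j          # arbitrary fixed helicity amplitudes

N = abs(Mp)**2 + abs(Mm)**2
P = np.array([ 2 * (Mp * np.conj(Mm)).real,        # P1
              -2 * (Mp * np.conj(Mm)).imag,        # P2
              abs(Mp)**2 - abs(Mm)**2]) / N        # P3

print(np.isclose(np.linalg.norm(P), 1.0))          # True: a pure, 100%-polarized state

# The orientation is fixed by the ratio M-/M+:
r = Mm / Mp
theta = 2 * np.arctan(abs(r))                      # polar angle from |M-/M+|
phi = np.angle(r)                                  # azimuth from arg(M-/M+)
print(np.allclose(P, [np.sin(theta) * np.cos(phi),
                      np.sin(theta) * np.sin(phi),
                      np.cos(theta)]))             # True
```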
The integration over the phase space and the summation over different subprocess contributions put the top quark in a mixed state, whose polarization density matrix is given by

ρ = (1 + P · σ)/2 (73)

for an arbitrary distribution. The coefficients of the three σ matrices make up a three-vector P = (P1, P2, P3), whose magnitude P = |P| gives the degree of polarization (P = 1 for 100% polarization, P = 0 for no polarization), while its spatial orientation gives the direction of the top quark spin in the top rest frame. The polarization vector P in (73) can be obtained directly from the matrix distribution (71) as

P1 = 2 Re σ(+, −)/(σ(+, +) + σ(−, −)), (74a)
P2 = −2 Im σ(+, −)/(σ(+, +) + σ(−, −)), (74b)
P3 = (σ(+, +) − σ(−, −))/(σ(+, +) + σ(−, −)), (74c)

where the integral over the phase space can be chosen appropriately in order to avoid possible cancellations of the polarization asymmetries. For the helicity amplitudes (32) calculated in the th rest frame, the z-axis is along the top momentum in the th rest frame, and the y-axis is along the q × p_t direction, perpendicular to the W⁺b → th scattering plane. In Appendix A, we obtain the orientation of the top quark spin in terms of the helicity amplitudes for a pure state and for general mixed states. The polarization of the t̄ quark is obtained likewise from the matrix distribution (71) with the t̄hj amplitudes M̄λM̄*λ′, simply by replacing σ(λ, λ′) with σ̄(λ, λ′) in the density matrix (73). The orientation of the polarization vector is measured in the same frame, where the z-axis is now along the t̄ quark momentum direction in the t̄h rest frame and the y-axis is along the q × p_t̄ direction. We show in Fig. 13 the three components (P1, P2, P3) of the polarization vector P as functions of the top (anti-top) scattering angle cos θ* in the th (t̄h) rest frame, at W = 400 GeV (upper four plots) and 600 GeV (lower four plots), when all the other kinematical variables are integrated over subject to the constraint Q < 100 GeV in (a), (c), (e), (g) and Q > 100 GeV in (b), (d), (f), (h). The left-hand plots of Fig. 13 give the top polarization in thj processes, while the right-hand plots give the t̄ polarization in t̄hj processes. As cos θ* deviates from −1, P3 deviates from −1 according to the growth of |M₊|²/|M₋|², but |P1| (and also |P2| when ξ ≠ 0) grows quickly, as they are linear in M₊. The polarization P2 normal to the scattering plane can become as large as 0.6 even for ξ = 0.05π, when Q < 100 GeV at W = 600 GeV; see Fig. 13(c). This is because at small Q and large W the longitudinal (λ = 0) W contribution dominates over the transverse (λ = ±1) W contributions, and hence the integration over the azimuthal angle φ does not diminish much the degree of top polarization. Likewise, the t̄ polarization is shown in the right-hand plots of Fig. 13, for the same configurations of W = 400 GeV (e), (f) and 600 GeV (g), (h), for Q < 100 GeV (e), (g) and Q > 100 GeV (f), (h). P3 is now almost unity at cos θ* = −1, because |M̄₋(θ*)|/|M̄₊(θ*)| ≈ 0 at θ* = π. As cos θ* deviates from −1, P3 decreases rapidly, and the polarization perpendicular to the helicity axis, P1 inside the scattering plane and P2 normal to the scattering plane (when ξ ≠ 0), grows, just as in the case of the top polarizations shown in the left-hand plots of the figure. Most notably, the magnitudes of all three polarization components P1, P2, P3 behave very similarly as functions of cos θ* between the top and the anti-top polarizations for the same CP phase, whereas their signs are all opposite. As for P2, the magnitude becomes largest for Q < 100 GeV events at W = 600 GeV, as shown in Fig. 13(c) for the top and (g) for the anti-top.
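The step from Eq. (71) to Eqs. (73)-(74) can likewise be mimicked numerically (again our sketch, with made-up amplitudes standing in for Eqs. (32)/(47)): summing MλM*λ′ over several kinematical configurations and projecting out (P1, P2, P3) generically gives |P| < 1, i.e. phase-space integration turns the pure states above into a mixed state.

```python
import numpy as np

rng = np.random.default_rng(7)

# (M_+, M_-) at 50 "phase-space points" (made-up stand-ins for Eqs. (32)/(47)).
amps = rng.normal(size=(50, 2)) + 1j * rng.normal(size=(50, 2))

# Matrix distribution, cf. Eq. (71): sigma(lambda, lambda') = sum_p M_lambda M*_lambda'
sigma = np.einsum("pi,pj->ij", amps, amps.conj())
tr = sigma[0, 0].real + sigma[1, 1].real

P = np.array([ 2 * sigma[0, 1].real,               # Eq. (74a)
              -2 * sigma[0, 1].imag,               # Eq. (74b)
              sigma[0, 0].real - sigma[1, 1].real  # Eq. (74c)
              ]) / tr

print(np.linalg.norm(P))   # strictly below 1: a mixed state after averaging
```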
As we will explain carefully in the next section, this equal-magnitude, opposite-sign behavior is a consequence of CP violation in a CPT-invariant theory in the absence of rescattering phases in the amplitudes. Before moving on to study the t and t̄ polarizations after integration over cos θ*, we note in Figs. 13(c) and (g), for Q < 100 GeV at W = 600 GeV, that the magnitudes of P2 are predicted to be larger for ξ = 0.05π (dashed red curves) than for ξ = 0.1π (dash-dotted curves) in the cos θ* > 0 region. This non-linear behavior was not expected for a relatively small phase of |ξ| ≤ 0.1π, and we therefore study the elements of the matrix dσλλ′ carefully for ten values of ξ in the range 0 < ξ < 0.1π. Shown in Fig. 14(a) is the thj production differential cross section, σ(+, +) + σ(−, −), with respect to cos θ* at W = 600 GeV for Q < 100 GeV events. The cross section is smallest at ξ = 0 and grows with ξ almost linearly in the region cos θ* ≳ −0.5. The cross section near cos θ* = −1 is dominated by the W exchange amplitudes (with the A factor), and hence does not depend on the htt coupling. In the middle plot, Fig. 14(b), we show Im σ(+, −) vs. cos θ*. Its magnitude grows with ξ, but it changes sign at around cos θ* = 0, and the growth of the magnitude is very slow at cos θ* > 0. The average polarization P2 is obtained as the ratio −2 Im σ(+, −)/(σ(+, +) + σ(−, −)) of Eq. (74b), which is shown in Fig. 14(c). In the cos θ* > 0 region, the magnitude grows from ξ = 0 up to ξ ≃ 0.05π, but decreases to the orange curve at ξ = 0.1π. This study shows that the polarization P2 has a strong sensitivity to the CP phase ξ, its magnitude reaching 20% even for ξ = ±0.01π. As can be seen from Fig. 14(a), the differential cross section decreases sharply as cos θ* deviates from cos θ* = −1, and hence the polarization asymmetry integrated over cos θ* is determined by the sign and magnitude in the cos θ* < 0 region. Shown in Fig. 15 is the polarization asymmetry P2 for the top (above zero) and the anti-top (below zero), for events with cos θ* < 0, plotted against the th (t̄h) invariant mass W. The results for Q > 100 GeV are shown by solid curves, while those for Q < 100 GeV are shown by dashed curves. The red curves are for ξ = 0.05π, while the green curves are for ξ = 0.1π. Although the ad hoc selection cut cos θ* < 0 is not optimal, we can observe the general trend that the magnitude of the polarization asymmetry P2 grows with the CP phase ξ, and that the sign of P2 is positive for t but negative for t̄ when ξ > 0. We may be tempted to conclude that the same physics governs the sign of A_φ in Fig. 11 and that of P2 in Fig. 15, since both asymmetries change sign between thj and t̄hj events. We will study the cause of this similar behaviour in the next section. Before discussing the consequences of CPT invariance in the next section, let us introduce slightly more complicated top quark polarization asymmetries whose signs also measure the sign of ξ. We recall that the t polarization perpendicular to the W⁺b → th scattering plane, P2, can be expressed as a triple three-vector product, Eq. (75), which is naive-T-odd (T̃-odd), since it changes sign when the signs of all three-momenta and spins are reversed. In the absence of final-state re-scattering phases, T̃-odd observables measure T violation, or CP violation in quantum field theories (QFT). Therefore, we examine quintuple products which are clearly T̃-odd polarization asymmetries, and whose expectation values should vanish at ξ = 0 at tree level.
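To make the T̃ parity of such observables explicit, here is a small self-contained check (our illustration; the vectors are random placeholders, not event kinematics from the paper): it evaluates a triple product s · (q × p_t) and a quintuple product of the kind introduced below, then flips the signs of all momenta and spins and confirms that both change sign.

```python
import numpy as np

rng = np.random.default_rng(0)
q, p_t, p_j, p_h, s = (rng.normal(size=3) for _ in range(5))  # placeholder momenta/spin

def triple(s, q, p):                 # s . (q x p): the structure behind P2, Eq. (75)
    return np.dot(s, np.cross(q, p))

def quintuple(s, q, pj, ph):         # s . [(q x pj) x (q x ph)]: a T~-odd quintuple product
    return np.dot(s, np.cross(np.cross(q, pj), np.cross(q, ph)))

t3, t5 = triple(s, q, p_t), quintuple(s, q, p_j, p_h)

# Naive time reversal T~: reverse every three-momentum and every spin vector.
t3_T = triple(-s, -q, -p_t)
t5_T = quintuple(-s, -q, -p_j, -p_h)

print(np.isclose(t3_T, -t3), np.isclose(t5_T, -t5))  # True True: both are T~-odd
```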
We note that the three-vector (q × p_j) × (q × p_h) points along the direction of q, while its sign changes when the azimuthal angle between the W⁺ emission plane and the W⁺b → th scattering plane changes sign, between −π < φ < 0 and 0 < φ < π. Likewise, (p_b × p_j) × (p_b × p_h) points either along or opposite to the p_b direction, depending on the same azimuthal angle between the two planes, because the W⁺ momentum q and the b momentum p_b are back to back in the frames which define the emission and the scattering planes; see Fig. 3. In the top quark rest frame, the two three-vectors q and p_b span the scattering plane, which is chosen as the x-z plane in our analysis. Therefore, if we define the azimuthal asymmetries of the top quark polarization vector as the differences between P_k(φ > 0) and P_k(φ < 0), where P_k(φ > 0) and P_k(φ < 0) denote, respectively, the top quark polarization of events with φ > 0 and with φ < 0, then P^A_1 and P^A_3 are T̃-odd. This is because the x- and z-axis vectors are linear combinations of q and p_b in the t rest frame. We show in Fig. 16 all three polarization asymmetries P^A_k, for k = 1, 2, 3, for pp → thj events in the left two panels (a), (b), and for pp → t̄hj in the right panels (c), (d). The upper plots in Fig. 16, (a) and (c), are for W = 400 GeV, while the bottom plots, (b) and (d), are for W = 600 GeV, both for Q > 100 GeV. As expected, P^A_1 = P^A_3 = 0 for the SM (ξ = 0). We find that P^A_3 > 0 for ξ = 0.05π (dashed curves) and 0.1π (dash-dotted curves) in all the regions of cos θ*, W, and Q that we studied, including the four cases shown in Fig. 16. This follows from our observation that P3 is large and opposite in sign between t and t̄ (see Fig. 13) and that the azimuthal angle asymmetry is also opposite in sign (see Fig. 11). The magnitude of P^A_1 is small near cos θ* = −1, where the cross section is large.

VI. T̃, CP, AND CPT

As explained in the previous sections, the asymmetries A_φ, P2, P^A_1 and P^A_3, whose signs measure the sign of the CP-violating phase ξ, are all so-called T̃-odd asymmetries. We found in section IV that the asymmetry A_φ has opposite signs for pp → thj events and pp → t̄hj events, and we found in section V that the polarization asymmetry P2 has opposite signs for thj and t̄hj events. In this section, we study the consequences of invariance under the discrete unitary transformations T̃ and CP, and under CPT. We adopt the symbol T̃ for the unitary transformation under which all three-momenta p and spin vectors s reverse their signs, in order to distinguish it from the time-reversal transformation T, which reverses the sign of the time direction and hence is anti-unitary. In the absence of final-state interaction phases in the amplitudes, T̃-odd asymmetries are proportional to T violation, or equivalently CP violation in QFT. Fig. 17 illustrates the T̃ and CP transformations of the subprocess ub → dth, whose three-momenta are the same as those in Fig. 3, or Eqs. (20) and (29). We add the helicities of the external massless quarks (u, d, b), and also that of the W⁺ along its momentum direction, where the λ = −1 state is chosen for illustration. The top polarization, or its decay charged-lepton momentum, is normal to the scattering plane along the positive y-axis. Under the T̃ transformation, all the three-momenta and spin polarizations change sign, as shown in (b), which can be viewed as (d) after a 180-degree rotation about the y-axis. Comparing (a) and (d), we find that the initial state remains the same, while in the final state

φ → −φ and P2 → −P2 (78)

under the T̃ transformation.
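The relation (78) can be checked mechanically. The sketch below (ours, with placeholder values) takes a final-state momentum and spin, applies T̃ (flip all vectors) followed by the rotation R_y(π), and confirms that the azimuthal angle about the z-axis reverses while the y-component of the spin (the P2 direction) also reverses.

```python
import numpy as np

def Ry_pi(v):
    """Rotate a 3-vector by 180 degrees about the y-axis: (x, y, z) -> (-x, y, -z)."""
    return np.array([-v[0], v[1], -v[2]])

def azimuth(v):
    return np.arctan2(v[1], v[0])

p = np.array([0.3, 0.4, 0.8])    # some final-state momentum (placeholder values)
s = np.array([0.0, 1.0, 0.0])    # top spin along +y, i.e. P2 > 0

p2, s2 = Ry_pi(-p), Ry_pi(-s)    # T~ (flip everything), then R_y(pi)

print(np.isclose(azimuth(p2), -azimuth(p)))  # True: phi -> -phi
print(s2[1] == -s[1])                        # True: P2 -> -P2
```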
Therefore, the observation of T̃-odd asymmetries such as A_φ ≠ 0 or P2 ≠ 0 implies either T violation, or the presence of an absorptive phase in the scattering amplitudes, or both [39,40]. Likewise, the configuration (c), or (e) after the R_y(π) rotation, is obtained by a CP transformation from the configuration (a). All the particles are transformed into anti-particles, and their helicities and three-momenta are reversed. If we define the asymmetries Ā_φ and P̄2 for the process p̄p → t̄hj, then CP invariance between (a) and (e) implies

Ā_φ = −A_φ and P̄2 = P2. (80)

Violation of the above identities hence signals CP violation. Finally, the configuration (f) in Fig. 17 is obtained from (d) by applying CP, or from (e) by applying T̃, together with the rotation R_y(π). In short, (f) is obtained from our original configuration (a) by a CPT transformation [41]. Comparing (a) and (f), CPT invariance, in the absence of absorptive phases in the QFT amplitudes, should give

Ā_φ = A_φ and P̄2 = −P2. (81)

As an illustration of how absorptive phases of the amplitudes in a T- or CP-invariant theory contribute to T̃-odd asymmetries, we examine the impact of the top-quark width in the s-channel propagator D_t(P_th) in Eq. (12), or in the B factor of Eq. (33b). The width in the Breit-Wigner propagator gives absorptive parts to our amplitudes, and since the top quark width appears only in the amplitudes with the htt coupling, it can give rise to the T̃-odd asymmetries A_φ and P2. We show in Fig. 18 the asymmetries A_φ (a) and P2 (b) in the CP-invariant SM (ξ = 0) for Γ_t = 0 (blue), for the SM value Γ_t = 1.35 GeV (red), and for ten times the SM width, Γ_t = 13.5 GeV (green). We find that both asymmetries are zero when Γ_t = 0, as expected. Furthermore, we confirm the relations (80) between the asymmetries of pp → thj events, A_φ and P2, and those of p̄p → t̄hj events, Ā_φ and P̄2, respectively. This is a consequence of CP invariance, as can be seen from the illustration by comparing the configurations (a) and (e). If CP is conserved, the amplitudes for the configuration (e) should have the same magnitudes as those of the original configuration (a). The azimuthal angle between the W emission plane and the scattering plane is reversed, whereas the t̄ spin polarization should be the same as the t spin polarization. It is worth noting here that if, instead of the top and anti-top spin polarization vectors P2 and P̄2, we use the decay charged-lepton momentum normal to the scattering plane in the t or t̄ rest frame, we find the corresponding lepton asymmetries to be opposite in sign, Eq. (82), as a consequence of P̄2 = P2 (80) in a CP-invariant theory. Here we assume that the t and t̄ decay angular distributions follow the SM, where the charged leptons are emitted preferentially along the t spin polarization direction, whereas they are emitted opposite to the t̄ spin polarization direction. This is simply because only right-handed ℓ⁺ and left-handed ℓ⁻ are emitted in t and t̄ decays, respectively, in the SM. The above spin-momentum correlation is CP invariant, and hence the identity (82) is also a consequence of CP invariance. In Fig. 19, we show comparisons of the asymmetries between pp → thj and p̄p → t̄hj events for a CP-violating theory (ξ ≠ 0) in the approximation of no absorptive parts in the amplitudes, i.e. we set Γ_t = 0. We confirm the relations (81) for the same value of ξ, as a consequence of CPT invariance. The relations between the asymmetries in pp → thj and p̄p → t̄hj are opposite between Fig. 18 and Fig. 19, as expected from Eqs. (80) and (81).
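A toy calculation can make the role of the width explicit. In the sketch below (entirely illustrative; the coupling structure merely mimics, and is not, Eqs. (32)/(33), and all coefficients are placeholders), an amplitude is split into a real W-exchange piece A and a Higgs piece B carrying the phase e^{iξ} and a Breit-Wigner factor; the T̃-odd combination Im(M₀M₋*) vanishes for Γ_t = 0 when ξ = 0 but becomes nonzero once Γ_t ≠ 0 supplies an absorptive phase.

```python
import numpy as np

def amplitudes(xi, gamma_t, W=600.0, m_t=172.5):
    """Toy lambda=0 and lambda=-1 amplitudes: a real W-exchange part A plus a
    Higgs part B with the phase e^{i xi} and a Breit-Wigner s-channel factor."""
    bw = 1.0 / (W**2 - m_t**2 + 1j * m_t * gamma_t)   # source of the absorptive phase
    A0, A1 = 1.0, 0.6                                  # real placeholder couplings
    B0, B1 = 0.8, 0.5
    M0 = A0 + B0 * np.exp(1j * xi) * bw * m_t**2
    M1 = A1 + B1 * np.exp(1j * xi) * bw * m_t**2
    return M0, M1

for xi, gamma in [(0.0, 0.0), (0.0, 1.35), (0.1 * np.pi, 0.0)]:
    M0, M1 = amplitudes(xi, gamma)
    print(f"xi={xi:5.3f} Gamma_t={gamma:5.2f} -> Im(M0 M1*) = {np.imag(M0 * np.conj(M1)):+.2e}")
# (0, 0):      zero     -- no CP violation, no absorptive part
# (0, 1.35):   nonzero  -- the width phase alone fakes a T~-odd asymmetry
# (0.1pi, 0):  nonzero  -- genuine CP violation
```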
All the above relations between pp and p̄p collisions may seem to be merely formal rules, since we will not have a p̄p collider with the LHC energy and luminosity. However, we find these rules useful in testing our amplitudes, especially in fixing the relative sign between the two helicity amplitudes, which determines the top and anti-top spin polarization directions away from their helicity axes. Furthermore, we find that it is possible to disentangle T̃-odd effects coming from SM re-scattering effects (which give rise to absorptive amplitudes) from CP-violating new-physics effects in pp collisions at the LHC, by measuring the polarization asymmetry P2 of t and t̄ precisely. Let us examine Fig. 15 again, where we show P2 for thj and t̄hj events at the LHC as a function of W, the th or t̄h invariant mass. The polarization asymmetry P2 has opposite signs for t and t̄. More quantitatively, we note that the magnitudes of the asymmetry are almost the same for small-Q events (Q < 100 GeV) at large W (W ≳ 600 GeV). This is a consequence of the CPT invariance of our tree-level amplitudes with Γ_t = 0, because at small Q and large W the events are dominated by the contributions of longitudinally polarized W bosons; see Figs. 7(a) and (b). Therefore, in this region of phase space we can regard the single top or anti-top plus Higgs production processes as

W⁺(λ = 0) + b → t + h and W⁻(λ = 0) + b̄ → t̄ + h, (83)

which are CP conjugates of each other. Their amplitudes are given in Eqs. (30c) and (43), and we can obtain the polarization asymmetries directly from these amplitudes, independently of the parton distribution functions in pp collisions. Because the absorptive amplitudes contribute to the polarization asymmetry P2 with the same sign, as shown in Fig. 18, we can further tell that the difference, P2(thj events) − P2(t̄hj events), measures CP violation, whereas the sum, P2(thj events) + P2(t̄hj events), measures the CPT-odd effects from the absorptive amplitudes in the region of small Q and large W. We find that in the SM the leading contributions to the absorptive amplitudes appear at the one-loop level in QCD and in the electroweak theory [42]. The top quark width that we adopted in this section for illustration is a part of those electroweak corrections. The sign of the polarization asymmetry P2 remains the same, and its magnitude is larger, at smaller W and large Q. This can be understood qualitatively also from Fig. 7, where the transverse W contributions are subdominant but non-negligible at small W (W ≲ 500 GeV), especially at large Q (Q > 100 GeV); there the dominant subprocesses are W⁺(λ = −1) + b → t + h and W⁻(λ = +1) + b̄ → t̄ + h, which are again CP conjugates of each other, and hence follow the rule (81) from CPT invariance.

VII. SUMMARY AND DISCUSSIONS

We studied the associated production of a single top (or anti-top) quark and the Higgs boson via t-channel W exchange at the LHC. We obtained analytically the helicity amplitudes for all the tree-level subprocesses with massless b (or b̄) quark PDFs in the proton, and studied the consequences of possible CP violation in the Higgs Yukawa coupling to the top quark. By choosing the momentum direction of the W± exchanged in the t-channel, the helicity amplitudes are factorized into the W± emission amplitudes from the light quarks or anti-quarks, and the W⁺b → th or W⁻b̄ → t̄h production amplitudes. We find that the amplitudes for the right-handed top quark and those for the left-handed anti-top quark are sensitive to the sign of the CP-violating phase ξ in the effective Yukawa interaction Lagrangian of Eq. (1).
This is because the right-handed top quark is produced by the t†_R t_L operator with the e^{−iξ} phase without chirality suppression, whereas the contribution of the t†_L t_R operator with the e^{iξ} phase is doubly suppressed. For anti-top production, the roles of the two operators are reversed. On the other hand, the other amplitudes, for left-handed top and right-handed anti-top production, are almost proportional to e^{iξ} + e^{−iξ} = 2 cos ξ, because both terms in the Lagrangian contribute with one chirality suppression, either in the top quark propagator or from the helicity-chirality mismatch in the wave function (δ′ and δ in Eq. (34), respectively). We studied mainly the azimuthal angle asymmetry A_φ between the W± emission plane and the W⁺b → th or W⁻b̄ → t̄h production plane, and the t or t̄ spin polarization normal to the scattering plane, P2, as observables which are sensitive to the sign of the CP phase ξ. The asymmetry A_φ arises from the interference between the amplitudes with longitudinally and transversely polarized W± contributions, and hence is significant when the exchanged momentum transfer Q is relatively large and the th or t̄h invariant mass W is not too large, where both of the interfering amplitudes are sizable. The magnitude of the asymmetry can be enhanced by selecting the chirality-favored top or anti-top quark helicity, e.g. by selecting those events with the charged-lepton momentum along the top or anti-top momentum direction in the th or t̄h rest frame; see Fig. 12. On the other hand, the polarization asymmetry P2 is obtained from the interference between the two helicity amplitudes of t or t̄. We find that the amplitudes are dominated by the collision of the longitudinally polarized W± with b or b̄ when the momentum transfer Q is small and the invariant mass W of the th or t̄h system is large. Therefore, in such kinematical configurations the asymmetry P2 of the top and the anti-top can be regarded as a direct test of CP violation between the CP-conjugate processes W⁺(λ = 0) + b → t + h and W⁻(λ = 0) + b̄ → t̄ + h. Because of the dominance of the longitudinally polarized W± exchange amplitudes, all the differences between the quark and anti-quark PDFs of the colliding protons drop out of the polarization asymmetry. All the analytic and numerical results presented in this report are obtained strictly at tree level, in order to clarify the symmetry properties of the observable asymmetries that are sensitive to the sign of the CP-violating phase ξ_htt. In order to establish their observability at the HL-LHC with its 3 ab⁻¹ of integrated luminosity, the following studies should be performed. Most importantly, we should identify the top and Higgs decay modes which can be used to measure the asymmetries, since the radiative corrections and background contributions may differ for each set of decay modes. We expect that semileptonic decays of t and t̄, with the Higgs decaying into modes without missing energy, are favorable, because the lepton charge identifies t vs. t̄, and the charged-lepton decay angular distribution measures the t and t̄ polarizations with maximum sensitivity. Hadronically decaying t and t̄ events can also have sensitivity to the asymmetries, because the decay density matrix polarimeter introduced in Ref. [36] retains strong sensitivity to the t and t̄ polarizations, and also because the CP asymmetry of the polarizations, P2(thj) ≈ −P2(t̄hj) in Fig.
15, tells us that the observable asymmetries in the decay distributions are the same for t and t̄ events even if we cannot distinguish between them. Although a direct test of CP violation cannot be made in the hadronic decay modes, the sensitivity to the sign and magnitude of the CP-violating phase ξ can be improved by assuming the SM radiative contributions to the asymmetries [42]. We believe that the associated production of the Higgs boson and a single t or t̄ via t-channel W± exchange at the LHC can be an ideal testing ground of the top quark Yukawa coupling, because the amplitudes with the htt Yukawa coupling and those with the hWW coupling interfere strongly. We studied the sensitivity of the process to possible CP violation in the Yukawa coupling. We anticipate that our studies based on the analytic form of the helicity amplitudes will be useful in testing various scenarios of physics beyond the SM.

Appendix A

For mixed states, it is useful to introduce the density matrix of Eq. (A6), where the summation is over all the processes and kinematical configurations that contribute to the top quark which we observe. Because the matrix is Hermitian and has trace 1, we can parametrize it as

ρ = (1 + P · σ)/2 (A7)

by using the σ matrices. We find P_k = Tr(ρ σ_k), which for the pure state (A1) gives (A2). In general, we can parametrize the density matrix (A7), Eq. (A9), as the sum of an unpolarized top quark with probability 1 − |P| and a fully polarized top quark with probability |P|, with its spin polarization oriented along

P = |P| (sin θ cos φ, sin θ sin φ, cos θ). (A10)

We find it convenient to show the general polarization vector P (A10) by using an arrow of length |P| in the polar coordinates defined as

−π < θ ≤ π, −π/2 < φ ≤ π/2. (A11)

When the imaginary part of M₋/M₊ is small, we tend to have small |φ|, and with the above definition we can show φ > 0 and φ < 0 as pointing up and down in the z-x plane [31].

Appendix B

... and ρ′ is obtained from (B9) by exchanging the d̄ and u four-momenta (B5) in the W⁺ rest frame. This simple density matrix distribution reduces to the charged-lepton distribution (B1a) in the P_d̄u = 1 limit. The decay density distribution for t̄ → b̄ℓν̄ is obtained similarly, as is that for the hadronic decay, dρ̄(t̄ → b̄dū) = 6B(t̄ → b̄dū)[…], where the density matrix ρ̄ is obtained from the d-quark momentum (B7b) in the t̄ rest frame, and ρ̄′ is obtained by exchanging the d and ū four-momenta (B6) in the same event. The decay angular distributions of arbitrarily polarized t and t̄ are then obtained simply by taking the 'trace' of Eq. (B12). Note that the decay distributions for t → bs̄c and t̄ → b̄sc̄ are the same as (B12a) and (B12b), respectively, where instead of the d̄ and d momenta we have the s̄ and s momenta, while the identification probability P_s̄c = P_sc̄ may be significantly larger than 0.5, the most pessimistic value assumed in Ref. [36]. Finally, we find it encouraging that the t and t̄ decay angular asymmetries have the same sign when P2(thj) ≈ −P2(t̄hj), as suggested by approximate CPT invariance in section VI and by Figs. 13 and 15 in section V. This tells us that the polarization asymmetry can be measured even if we cannot distinguish t from t̄, which may often be the case for hadronic decays.
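The lepton polarimetry underlying these decay density matrices can be illustrated with a toy Monte Carlo (our own sketch; the polarization value is an arbitrary input, not a measurement): sampling cos θℓ from the distribution (67a) shows that 3⟨cos θℓ⟩ recovers the polarization, and that the cut of Eq. (68) keeps σ = +1 tops over σ = −1 tops at a 3:1 rate.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_costheta(sigma, n):
    """Sample cos(theta_l) from (1 + sigma*cos)/2, sigma = +-1, by inverting the CDF."""
    u = rng.random(n)
    return (np.sqrt(1.0 + sigma * (4.0 * u + sigma - 2.0)) - 1.0) / sigma

P = 0.6                               # assumed net top polarization (illustrative)
n = 200_000
n_plus = int(n * (1 + P) / 2)         # fraction of sigma = +1 (right-handed) tops
c = np.concatenate([sample_costheta(+1.0, n_plus),
                    sample_costheta(-1.0, n - n_plus)])

print(3 * c.mean())                   # ~0.6: recovers the input polarization
# The cut cos(theta_l) > 0 keeps 3/4 of sigma=+1 tops but only 1/4 of sigma=-1 tops,
# which is the enrichment exploited in Eq. (68).
```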
Physics research on the TCV tokamak facility: from conventional to alternative scenarios and beyond

The research program of the TCV tokamak ranges from conventional to advanced-tokamak scenarios and alternative divertor configurations, to exploratory plasmas driven by theoretical insight, exploiting the device's unique shaping capabilities. Disruption avoidance by real-time locked mode prevention or unlocking with electron-cyclotron resonance heating (ECRH) was thoroughly documented, using magnetic and radiation triggers. Runaway generation with high-Z noble-gas injection and runaway dissipation by subsequent Ne or Ar injection were studied for model validation. The new 1 MW neutral beam injector has expanded the parameter range, now encompassing ELMy H-modes in an ITER-like shape and nearly non-inductive H-mode discharges sustained by electron cyclotron and neutral beam current drive. In the H-mode, the pedestal pressure increases modestly with nitrogen seeding while fueling moves the density pedestal outwards, but the plasma stored energy is largely uncorrelated to either seeding or fueling. High fueling at high triangularity is key to accessing the attractive small edge-localized mode (type-II) regime. Turbulence is reduced in the core at negative triangularity, consistent with increased confinement and in accord with global gyrokinetic simulations. The geodesic acoustic mode, possibly coupled with avalanche events, has been linked with particle flow to the wall in diverted plasmas. Detachment, scrape-off layer transport, and turbulence were studied in L- and H-modes in both standard and alternative configurations (snowflake, super-X, and beyond). The detachment process is caused by power 'starvation' reducing the ionization source, with volume recombination playing only a minor role. Partial detachment in the H-mode is obtained with impurity seeding and has shown little dependence on flux expansion in standard single-null geometry. In the attached L-mode phase, increasing the outer connection length reduces the in-out heat-flow asymmetry. A doublet plasma, featuring an internal X-point, was achieved successfully, and a transport barrier was observed in the mantle just outside the internal separatrix. In the near future variable-configuration baffles and possibly divertor pumping will be introduced to investigate the effect of divertor closure on exhaust and performance, and 3.5 MW ECRH and 1 MW neutral beam injection heating will be added.
Keywords: nuclear fusion, tokamak, overview, TCV, MST1, EUROfusion

Introduction

The tokamak à configuration variable (TCV) [1] is a mature European fusion facility, with numerous experiments conducted by international teams organized by the EUROfusion consortium through the medium-size tokamak (MST1) Task Force [2], in parallel with a nearly continuous, self-managed domestic campaign. A versatile device with unparalleled shaping capabilities and flexible heating systems (electron-cyclotron resonance heating (ECRH) and neutral beam heating (NBH)), TCV is employed in a multi-faceted research program ranging from conventional topologies and scenarios in support of ITER, to advanced tokamak scenarios and a broad palette of alternative divertor configurations with an eye to DEMO, to exploratory plasmas driven by theoretical speculation and insight. A strong link with academia and education is enforced organically by TCV's nature as a university facility. As such, generous machine time is provided for training students, who in return provide an essential service as full members of the experimental and operating team. This environment is also naturally conducive to close and productive links with the SPC theory group, which has a strong tradition of analytical and numerical first-principles enquiry, while also managing a panoply of higher-level, interpretation-oriented codes. The main operating parameters of TCV are as follows: major radius R = 0.88 m, minor radius a = 0.25 m, vacuum toroidal field B_T = 1.5 T, plasma current up to I_p = 1 MA. The polarities of both field and current can be chosen at will in any discharge. The primary wall-facing material is graphite. Three piezoelectric valves are used for injection of both the primary discharge fuel and seed impurities; an additional, fast, solenoid-based multi-valve system is available for disruption mitigation through massive gas injection (MGI) (recently upgraded from a previous version for greatly increased gas flow).
The defining shaping versatility of the device is provided by a system of 16 independently-powered shaping poloidal-field (PF) coils, in addition to two coils internal to the vessel for control of axisymmetric instabilities with growth rates up to 5000 s⁻¹. During most of the device's lifetime, its primary auxiliary heating source has been ECRH, in a combination of second- (X2, 82.7 GHz) and third-harmonic (X3, 118 GHz) X-mode components with a maximum aggregate power of 4.1 MW, injected through up to seven independent launchers [3]. The finite life expectancy of the gyrotron sources has led to a gradual reduction of this power to a current total of 1.15 MW. We are currently in the process of procuring four additional gyrotrons, two 0.75 MW units for X2 waves and two 1 MW dual-frequency units for either X2 or X3 [4]. By the end of 2019 we thus expect to have 3.3 MW X2 and 3.1 MW X3 available at the tokamak end (with a maximum simultaneous total power of 4.5 MW), restoring the plant's erstwhile flexibility in both localized and diffuse heating at virtually all plasma locations in virtually all configurations, with a varying mix of heating and current drive. Since 2015, NBH has also been employed on TCV, using a 15-25 keV beam of maximum 1 MW power (at the highest energy), in a tangential geometry affording a double pass through the plasma cross-section [5,6]. A second 1 MW injector, directed in the opposite direction and featuring an energy of 50-60 keV, is currently being planned for the 2020 horizon. The experimental campaigns are assisted by a continuous program of diagnostic upgrades and development. The Thomson scattering diagnostic was upgraded with the addition of 40 new spectrometers and a redesign of the optical layout to guarantee a more complete coverage of the plasma in all configurations, without spatial gaps and with increased energy resolution, particularly for edge measurements [7]. A three-radiator Cherenkov detector was deployed in support of runaway-electron experiments in a collaboration with the National Centre for Nuclear Research in Poland [8]. Runaway studies were also assisted by a runaway electron imaging and spectrometry system detecting infrared and visible synchrotron radiation, on loan from ENEA (Italy) [9]. Tangential, multi-spectral, visible-light camera arrangements have been installed on TCV by groups from MIT (USA) [10] and Eindhoven University of Technology (The Netherlands) [11]. TCV was also equipped recently with a Doppler backscattering apparatus and a highly novel short-pulse time-of-flight reflectometer, used alternately as they share most of the hardware, including a steerable quasi-optical antenna [12,13]. This paper reports on scientific results from the past two-year period, during which TCV was operated regularly without major interruptions. Several of the experiments described in this paper also had counterparts in the other operating MST facility, ASDEX Upgrade (AUG) [14].
Section 2 discusses work on abnormal discharge termination events, including disruptions and runaway-electron beam formation; section 3 reports on discharge scenario development and associated real-time control; section 4 deals with core physics, particularly the related issues of transport and turbulence for both the thermal and non-thermal populations; section 5 is on edge and exhaust physics and detachment, both in conventional and alternative-divertor scenarios; the first successful generation of a doublet configuration is discussed in section 6; conclusions and an outlook, including the description of a significant upcoming divertor upgrade, are provided in section 7.

Disruption physics

This area had not received sustained attention in the past on TCV, and its recent rise to prominence is a particularly good demonstration of successful international collaborations, particularly as catalyzed by the EUROfusion framework.

Disruptions

Issues related to unwanted discharge termination, being at the forefront of reactor designers' concerns [15], are addressed vigorously in the TCV program. A path-oriented approach [16] has been advanced to deal with the changed perspective of the reactor scale, which remains grounded in safety but has to be mindful of economics. While device integrity remains paramount, value is attached to keeping peak performance as well. This yields a prioritized hierarchy of full performance recovery, disruption avoidance, and disruption mitigation, all of which are dependent upon the specific disruption path [16]. Experiments were performed in parallel on AUG and TCV. On TCV the focus was on disruptions in which an abnormal impurity inflow precipitates the locking of a pre-existing n = 1 neoclassical tearing mode (NTM) on the q = 2 surface. The impurity inflow was simulated by a massive, controlled injection of a noble gas such as neon. The event detector was a sharp increase in radiation, specifically soft x-ray emission. The application of ECCD on the q = 2 surface can prevent the locking altogether or, if applied with some delay, unlock and stabilize the mode, still preventing the disruption. Both full performance recovery and soft landing paths were explored and documented [17]. Higher power and better precision are required to unlock the mode than to prevent locking (figure 1). Prevention was enabled by real-time triggers based on maximum entropy and maximum likelihood techniques applied to magnetic signals [18], or on radiation thresholds. Safe discharge termination through controlled current ramp-down (to 50 kA) was also tested successfully. All these techniques were finally combined in a first prototype closed-loop feedback system including tracking of the q = 2 surface through real-time equilibrium reconstruction [17] and ECRH ray tracing [19]. A path-oriented approach is bound to be costly as the paths to disruption form a large and heterogeneous set, but the techniques developed for the specific path described above have some degree of generic applicability and hold promise for generalization. A version of the strategy described above is under development in the general architecture of supervisory real-time control, including event monitoring through plasma-position and rotating- and locked-mode detection [20], and incorporating actuator management tools [21]. In parallel, an algorithm to detect proximity to the density limit based on changes in sawtooth characteristics was also developed successfully.
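As a schematic illustration of this kind of radiation-based event detector (our sketch only, not the actual TCV real-time code; all thresholds, window sizes, and signal shapes are invented placeholders), a minimal sharp-increase trigger on a soft x-ray channel could look as follows:

```python
import numpy as np

def radiation_trigger(sxr, threshold, rel_jump=1.5, window=10):
    """Fire when the soft x-ray signal exceeds an absolute threshold AND
    jumps by rel_jump over its recent baseline (sharp-increase detector).
    All numbers are illustrative placeholders, not TCV settings."""
    for i in range(window, len(sxr)):
        baseline = np.median(sxr[i - window:i])
        if sxr[i] > threshold and sxr[i] > rel_jump * baseline:
            return i          # sample index at which the response would be requested
    return None

sxr = 1.0 + 0.05 * np.random.default_rng(3).normal(size=1000)
sxr[600:] += 2.0              # sudden rise, mimicking an abnormal impurity influx

print(radiation_trigger(sxr, threshold=1.5))   # fires at the onset of the rise
```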
An offline disruption database was also constructed using the DIS_tool package, for statistical analysis of disruption triggers and as a basis for prediction and modelling (to be performed using a machine-learning technique already applied successfully to JET [22]). By processing multiple diagnostics, DIS_tool is able to detect fast transient events characterizing the disruptive process, such as thermal quenches and current spikes, and to automatically compute characteristic times and parameters of interest. The parametrization of the algorithm renders it independent of the characteristics of the individual device [23,24].

Runaway electrons

Another facet of the discharge termination problem is the production of runaway electrons (RE), which is also a central concern of reactor operation. In ITER, RE beams of up to 12 MA can be expected, with the potential to cause deep damage to the metallic structures. Mitigation and control of RE beams are thus a must. Runaway generation both in steady-state and disruptive conditions, at low density and with the aid of high-Z noble gas injection, has been documented in TCV. RE beams are generated on TCV for a broad range of edge safety factor, down to 2.1, and at elongations up to 1.5. Key data on runaway dissipation by subsequent Ne and Ar injection are being used for validation of a high-Z interaction model. The increased throughput of the new MGI valve system has allowed us to quantify the increase in RE dissipation rate with gas injection rate and its dependence on gas species, Ar being more effective than Ne. Initial studies of the effect of ECRH on the RE beam have also been performed. Secondary RE avalanching was identified and quantified for the first time after massive Ne injection; simulations of the primary RE generation and secondary avalanching dynamics in stationary discharges indicate that the RE current fraction created via avalanching could reach 70%-75% of the total plasma current. Relaxation events consistent with RE losses caused by the excitation of kinetic instabilities are also observed [25]. An extensive set of experiments was performed to study the options for controlled ramp-down in the presence of a disruptive event featuring a RE beam. Upon detection of the current quench and plateau onset (via current and hard x-ray observers), a dedicated controller takes over. The 'hybrid fast controller', initially developed for the Frascati Tokamak Upgrade, is empirical, lightweight, easily tunable, and portable [26]. The current ramp-down is controlled through the Ohmic transformer, with assistance from MGI to limit the RE beam's energy, and the beam position is controlled through the PF coils. The ramp-down rate is kept below a threshold to avoid the appearance of deleterious magnetohydrodynamic (MHD) instabilities that can engender a total loss of control. This control scenario appears robust and reproducible (figure 2), and intriguingly, a total conversion of RE current into thermal plasma current has also been observed. This is speculatively attributed to loop-voltage oscillations coupled with system hysteresis, and hints at a possible new termination scenario [26].

Main reactor scenarios

In the most recent campaign a stable ELMy H-mode was obtained in an ITER-like shape, permitting direct scaling comparisons with the corresponding AUG scenario [27].
Work in this area has been hampered by the empirical and unexplained elusiveness of regular edge-localized modes (ELMs) in configurations centered near the vessel midplane, where NBH is located. Of the different plasma shapes attempted, the most resilient has proven to be one with low upper triangularity (δ) and high lower δ, which has been taken to q95 = 3.6 at an elongation of 1.8, with NBH alone as well as with NBH+X3. The application of a more ELM-resilient vertical observer in the future could alleviate some of the difficulties with these scenarios and also allow stable operation at q95 = 3. The I-mode [28] has also been pursued, primarily mimicking the equivalent low-δ shape of AUG. NBH not being sufficient to reach I- or H-mode in this configuration, X3 was added, unsuccessfully at B_T = 1.35 T but with some promising recent developments at B_T = 1.53 T. The goal of the Advanced Tokamak route was to extend to higher β_N, using NBH and X3, the well-known fully non-inductive scenarios with internal transport barriers (ITBs) and high bootstrap current fraction achieved in the past in TCV with X2 ECCD. It has not proven possible thus far to obtain non-inductive ITBs with NBH, its diffuse or central heating and modest current-drive contribution being detrimental to the establishment of negative magnetic shear in the center. Conversely, ITBs with ECCD and NBH were studied for the first time, but could not be sustained non-inductively [29]. Non-inductive discharges in L-mode were sustained at I_p = 130 kA, H-factor H98(y,2) = 0.8, and β_N = 1.4.

Figure 1. The disruption prevention plots collect three discharges with identical deposition location for the sources used to destabilize the mode ('scenario gyrotrons') and varying deposition location for the electron cyclotron current drive (ECCD) source used for stabilization, showing that ECCD is effective at stabilizing when it is applied on or just inside the q = 2 surface. In this case, one 100 kW ECCD source is sufficient to restore discharge performance when mode locking is prevented. The right-hand plots show that, once the discharge enters the disruptive chain, 500 kW for 150 ms or 800 kW for 110 ms is required for recovery (only the power used for stabilization is plotted here); the times of mode unlocking are indicated by vertical lines in the middle plot. Note that these discharges do disrupt eventually during a controlled ramp-down. By contrast, the disruptive reference with no stabilizing action, shown for comparison (in blue), disrupts during the flat top as a result of the locked mode. Reproduced courtesy of IAEA.

A successful attempt at H-mode was made in nearly non-inductive conditions, by targeting a density low enough in H-mode to be compatible with X2 heating. The result is shown in figure 3, with H98(y,2) = 1 and β_N = 1.7 [30]. The neutral-beam deposition and fast-ion dynamics are being modeled with the nubeam-ascot code suite for these scenarios [31]. The additional power expected for the next campaign holds promise for improving performance. Finally, TCV has long established the merits of negative triangularity, which is now being considered as a serious candidate for a test reactor. All this work was performed in limited shapes and, more recently, in diverted shapes with negative upper triangularity.
Finally, stable, negative-triangularity single-null-diverted shapes, fully mirroring conventional diverted discharges, were developed for the first time in the last campaign, but have not yet been successfully established in a NBH-compatible location.

Real-time control

Many achievements in the past campaign were either made possible or at least aided by advancements in plasma control. In this subsection progress on different specific aspects of control will be presented briefly in turn, concluding with the work performed on integration and unification. In the area of MHD control, work on NTMs continues to feature prominently, with increasing refinements in characterization and understanding. It has been determined that NTM destabilization through central co-ECCD only occurs within a given density range [32]. For the first time, the application of a periodic (sinusoidal) deposition-location sweep has been shown to be effective for both pre-emption and stabilization of the (2,1) NTM, the latter requiring more than twice as much power as the former. A simple new analytical model for the time history of the magnetic Δ′ stability index, for NTMs triggered as classical tearing modes, was introduced and shown to provide accurate simulations of the island evolution [33]. Quasi in-line ECE, nearly collinear with the associated ECRH actuator, was tested on TCV for monitoring the island's position, and was demonstrated to be accurate to within less than the EC beam width [34]. Though receiving less attention than NTMs in recent times, the vertical axisymmetric instability also remains a concern; while the (magnetic) stabilization technique is well understood, its economics are strongly affected by the minimum achievable stability margin in any given device. In this perspective, experiments were carried out in TCV using elongation ramps to provide data for model validation [35]. In a unique multi-institutional collaborative effort, TCV functioned as a testbed for an eclectic ensemble of current-profile control strategies, within a unified framework using the tokamak profile simulator raptor for offline testing. Various highly specialized controllers (model-predictive [36], Lyapunov-based [37], and interconnection and damping assignment passivity-based control [38]) were all tested and validated successfully using this paradigm. A parallel activity has seen the development of alternative, exploratory current control methods, based on so-called sliding-mode and super-twisting controllers, specifically for TCV; these are yet to be tested [39]. In a related development, a model-based detector of L-H and H-L transitions and of ELMs was used successfully to actuate a power reduction and consequently a back-transition to L-mode, thus avoiding the disruptions that typically terminate ELM-free H-modes. A shape and position controller, using boundary flux errors and based on a singular-value decomposition approach, was delivered in a complete time-varying version and applied in particular to advanced divertor configurations such as snowflakes [40]. One limitation of this control scheme is its inherent coupling with vertical stability control, making its optimization highly dependent on the particular configuration. To obviate this problem, a new, decoupled set of controllers is currently under development, with promising initial tests already performed on TCV [41].
The raptor code, updated with new time-varying terms [42], was employed in a general effort towards the optimization of the ramp-down phase of tokamak discharges, using appropriate transport models including the L-H and H-L transition dynamics. The optimization is found to include in particular an early H-L transition and a sharp elongation reduction to reduce the internal inductance. Promising initial tests were performed on TCV, pointing the way to possible automation of the optimization procedure [43]. Real-time equilibrium reconstruction, now routinely available on TCV with sub-ms resolution (rtliuqe), is at the crux of modern tokamak control. From this consideration stems the need to improve over simple magnetic reconstruction, by using kinetic constraints available in real time: a kinetic equilibrium reconstruction suite compatible with real-time needs has accordingly been developed for TCV [44]. In parallel, efforts towards a unified European reconstruction code have turned an eye to TCV as a particularly challenging reconstruction problem, and first reconstructions with the equal/equinox codes have been obtained and benchmarked with rtliuqe [45]. Controller integration is steadily progressing: NTM, β_N, safety factor (estimated by raptor), density, and shape were shown to be controlled simultaneously. Key to this is the constant development and addition of new elements as need dictates: NBH power control and readback, and ECRH power and launcher readback, were recently incorporated in the digital control system; the torbeam (real-time ECRH beam-tracing) [46] module was also added, and rabbit (NBH deposition) [47] is currently being integrated. A shift from controller-based to task-based control is underway. The architecture for task-based integrated control separates state estimation and event detection from decisions related to actuators. A supervisory controller coordinates the execution of multiple control tasks by assigning priorities based on the plasma state and on the discharge [21]. Crucially, this entire layer is tokamak-agnostic by construction, providing a level of abstraction to discharge planning [48]. The tokamak-specific interfaces are also standardized to minimize exceptions. New controllers may be tested and integrated continuously using a unified controller test environment comprising raptor and several common algorithms.

Wall cleaning and start-up assist with ECRH in support of JT-60SA operation

TCV has been used to test techniques anticipated for the successful operation of JT-60SA [49], which will also feature a carbon wall. Characterization of wall cleaning with ECRH, as a substitute for glow discharge cleaning (GDC), has continued from the previous campaign [50]. Additionally, experiments were performed on ECRH-assisted start-up at reduced loop voltage (electric field 0.7 V m⁻¹, consistent with JT-60SA) with residual gas and/or impurities, such as would be expected after a disruption or generally with a shortened shot cycle. The question addressed by this work is of equal importance for DEMO. The minimum ECRH power required for breakdown and successful burn-through was determined by controlling the power from a plasma-current observer. Experimental tests included variations in deuterium prefill, reductions in inter-shot pumping speed (down to 25%), and puffing of Ar impurities, always without the GDC customarily performed between TCV discharges.
It was found that 0.4 MW ECRH at the reduced 0.7 V m⁻¹ electric field was sufficient to start the plasma and sustain the plasma current, except with Ar injection, which increased the threshold. The results of this experiment are used to validate the 0D breakdown code bkd0, which is coupled with the beam tracing code gray to model the ECRH propagation [51].

Transport and confinement

The physics mechanisms underlying the different scenarios are explored through systematic parametric studies making use of all available diagnostics. The H-mode pedestal is under particular scrutiny, as it can play a formidable role in determining the global confinement in conditions of stiff core profiles. As both fueling and impurity seeding will likely be necessary in a reactor, the latter for heat-load control, it is especially important to determine their effect on the pedestal and on confinement. Such a systematic study was performed on TCV in the latest campaign, in a type-I ELMy H-mode, with the addition of a shaping variable, i.e. triangularity. Specifically, a deuterium gas fueling scan and two nitrogen puffing scans (one with no fueling and one with constant fueling) were performed at two different values of the triangularity [52]. It is found that D₂ fueling increases the density pedestal height and moves it outwards; interestingly, the shift is in the opposite direction to AUG [53], a metal-wall machine where it is speculated that the high-field-side high-density front observed there could play a role. The pressure pedestal height displays a generally decreasing trend with increased gas injection (figure 4). The sensitivity to fueling and seeding increases with triangularity. However, the total stored energy is largely unaffected by these changes, indicating that the core profiles do not remain strictly stiff during these scans; more accurately, they are not stiff with respect to varying fueling and seeding. An MHD stability analysis indicates that these scenarios are close to the ideal stability limit, where the pedestal is defined by the peeling-ballooning (PB) limit: in the widely-used eped1 model [54], the evolution of the pedestal leads it to reach the kinetic-ballooning-mode (KBM) limit first, which sets the marginally stable pressure gradient, and then to continue increasing in both width and height until the PB limit is attained, which precipitates an ELM crash. This model yields a pedestal width scaling which makes it proportional to the square root of the poloidal β at the pedestal top, with the proportionality factor being generally machine-dependent (see the sketch below). The current dataset is in fact not fitted by this relation with a single constant, but eped1 modeling gives satisfactory results when the experimentally derived factor is used, confirming that the scenarios are likely to be PB-limited [52]. In spite of apparent differences in the phenomenology, a unified picture is in fact found among TCV, AUG, and JET-ILW in the PB-limited pedestal regime. While the underlying cause of the pedestal shift and particularly of its direction is not understood, the pedestal evolution in response to this shift is consistent: an outward shift of the pressure pedestal reduces its stability and lowers the pedestal height [55]. The properties of the pedestal in earlier discharges with negative upper triangularity [56] were also revisited to evaluate the attractiveness of a negative-triangularity reactor.
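As a concrete reading of that width scaling (our illustration; c = 0.076 is the prefactor often quoted for the original EPED1 model, while the TCV analysis above fits its own machine-specific constant):

```python
# Pedestal width scaling used in eped1-type models:
#   Delta_psiN = c * sqrt(beta_pol_ped)
# where c is a machine-dependent proportionality constant; c = 0.076 is the
# value commonly quoted for EPED1, used here purely for illustration.
import math

def pedestal_width(beta_pol_ped, c=0.076):
    """Pedestal width in normalized poloidal flux (illustrative)."""
    return c * math.sqrt(beta_pol_ped)

for beta in (0.1, 0.3, 0.6):
    print(f"beta_pol_ped = {beta:.1f}  ->  width = {pedestal_width(beta):.3f} psi_N")
```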
In these discharges, the shift to negative upper triangularity mitigated the ELMs, increasing their frequency and decreasing the power loss per ELM. The eped1 model [54] was coupled with a suite of codes commonly used for TCV in the so-called eped-ch implementation. It was established that negative upper triangularity restricts the KBM + PB-stable domain by closing the second-stability region for ballooning modes, thus further limiting the pedestal's width and height, with the result of mitigating the power expelled by ELMs [57]. Whether this very attractive feature would come at the expense of reduced core performance remains to be determined, e.g. through transport modelling. Significant progress has been made in establishing a robust small-ELM regime in TCV. Following the lead of the type-II ELMy regime in AUG [58], grassy ELMs were obtained at high triangularity and steady fueling, replacing the type-I ELMs completely at a triangularity δ = 0.54 (figure 5). The auxiliary heating used in this experiment was 1 MW NBH plus 0.75 MW X3 ECRH. This configuration approaches a double-null shape and the role of the secondary X-point is difficult to disentangle from that of triangularity. The pedestal profiles are remarkably similar in the two discharges shown in figure 5 and the stored energy is thus also similar. However, the peak heat flux in the grassy-ELM regime is reduced by a factor of ten with respect to type-I ELMs, approaching in fact the inter-ELM level of the latter case [59]. In a related development, initial tests were performed for a planned revisitation of ELM pacing through vertical kicks, using new features in the TCV control system [60]. The basic physics of the L-H transition also continues to be explored, with current emphasis on transitions induced by a varying divertor leg length in an otherwise stationary plasma. Understanding the generation of intrinsic rotation and the mechanisms governing momentum transport is another crucial goal, as rotation is a central ingredient in all scenarios through its inter-relation with transport of energy and particles and with MHD stability. Techniques were developed in the latest campaign to modulate the intrinsic torque in a controlled way for reliable quantitative estimation of intrinsic versus externally-induced rotation. This involves modulation of both the heating and the diagnostic neutral beam, with a complex phasing relationship, and unraveling the data while accounting for the perturbative nature of heating. A thorough documentation of the dependence of intrinsic rotation on density, edge safety factor, and auxiliary power is also underway. Gyrokinetic simulations have suggested a correlation between the toroidal rotation inversion observed when crossing a density threshold and a transition from an ion-temperature-gradient (ITG) dominated to a mixed ITG-trapped-electron-mode (TEM) turbulence regime [61]. Turbulence-driven residual stress is predicted to depend strongly on the up-down asymmetry of the plasma cross-section, which can be parametrized in terms of elongation, triangularity, and tilt angle [62]: these predictions are also being tested in a broad shaping scan. A detailed study of the evolution of rotation during the sawtooth cycle has shown that a co-current torque occurs in the core at the sawtooth crash, in addition to the expected fast outward transport of momentum [63].
Finally, the first characterization of the changes in impurity flow occurring at the L-H transition was obtained on TCV, revealing the familiar formation of a narrow and deep radial-electric-field well just inside the separatrix [63]. The fundamental physics associated with the high-power heating systems is being investigated through the properties, and particularly the confinement, of suprathermal particles. Alfvén modes were recently observed on Mirnov signals for the first time, in the presence of simultaneous off-axis NBH and off-axis ECRH. These modes are only seen with ECRH. Fast-ion measurements by FIDA and NPA diagnostics are used in conjunction with the evolution of the main plasma parameters to model the dynamics of NBH; a high edge neutral density, consistent with charge-exchange losses of the order of 25%, is required to explain the results, but a FIDA signal deficit remains in the case of NBH + ECRH, possibly suggesting enhanced turbulent transport [64].
Figure 4. Pressure at the top of the pressure pedestal as a function of fueling or impurity seeding gas injection rate for the low-triangularity case. Reproduced from [52]. © IOP Publishing Ltd. All rights reserved.
Turbulence
A set of fluctuation diagnostics including tangential phase-contrast imaging (tPCI), correlation ECE, and more recently Doppler backscattering and short-pulse reflectometry are employed in fundamental studies of plasma turbulence. The long-standing observation of a clear improvement in confinement in plasmas with negative-triangularity shape compared with positive-triangularity ones [65] has led to an extensive set of studies of the dependence of turbulence on triangularity. Comparisons were made between discharges at δ < 0 and δ > 0, in conditions of equal heating (Ohmic and ECRH) as well as with different heating but matched pressure profiles. In each case a clear suppression in both density and temperature fluctuations is observed with δ < 0, more prominent in the outer region of the plasma but extending deep in the core, approximately to mid-radius (figure 6) [66,67]. The correlation length and decorrelation time of the broadband fluctuations also decrease with δ < 0. An additional effect of negative triangularity appears to be an increase in the critical gradient for the core pressure profile [66]. As for the variation in anomalous transport, this difference in turbulence characteristics deep in the core suggests the existence of nonlocal effects, since the local triangularity vanishes there. Global gyrokinetic simulations are broadly in accord with the experimental results [68]. In complementary experiments, the fluctuation amplitude was found to decrease with increasing effective collisionality in the TEM-dominated regime [67], consistent with the stabilizing effect of collisionality on these modes and, again, consistent with an attendant improvement in confinement [65]. A mode with the characteristics of the geodesic acoustic mode (GAM), possibly coupled with avalanche events, is routinely observed on TCV. It sometimes takes the appearance of a continuum mode, with frequency varying with the minor radius according to its linear dependence on the ion sound speed, while in other cases it exhibits a constant frequency over the spatial extent it occupies, typically the outermost third of the plasma cross-section.
The physical quantities governing the GAM type are not yet understood, but recent tPCI measurements have shown for the first time a transition from the former to the latter mode in a single discharge, during a safety-factor scan in an ECRH-heated L-mode plasma [69]. Gyrokinetic simulations, however, suggest that the varying density and temperature during the scan, rather than the safety factor itself, may be the cause of the transition [70]. For the first time, this GAM-like oscillation has been detected by scrape-off-layer (SOL) diagnostics near the strike points of diverted plasmas. This includes photodiodes observing Dα emission, wall-embedded Langmuir probes, and magnetic probes. The mode has a high degree of correlation with the core mode measured by tPCI and suggests that it drives a particle flow to the wall. These observations were documented in conventional single-null and double-null shapes as well as alternative divertor configurations such as snowflake (SF) and super-X plasmas [69]. In addition to causing anomalous transport, turbulence can also have deleterious effects on the propagation of externally launched waves; it is feared, for instance, that strong SOL fluctuations can refract and scatter the mm-wave beams used for ECRH and ECCD and broaden them to the point where the efficacy of, e.g., tearing mode stabilization would be sharply reduced. Dedicated experiments were carried out on TCV to quantify this effect, using a simple setup consisting of the vertically-launching X3 antenna, located at the top of the vessel, coupled with a microwave detector at the bottom [71]. Conditional sampling techniques were used to determine the degree of correlation of oscillations in the transmitted power with fluctuations in the top SOL traversed by the beam (a minimal sketch of this technique is given below). To calculate the beam perturbation, a full-wave code was implemented in comsol and benchmarked against the WKBeam code. The Global Braginskii Solver (gbs) code was employed to compute the SOL fluctuations, with input from experimental profiles. This analysis suite was able to demonstrate a causal relationship between the SOL fluctuations and the power transmission oscillations, which exceed 20% in a simple L-mode plasma. In H-mode, similar perturbations are seen to be caused by ELMs (figure 7), although the exact physical mechanism still remains to be clarified [71].
Edge and exhaust physics
Exhaust physics remains a central concern of the TCV program, which features the broadest range of divertor topologies, from conventional single- and double-null, to all versions of the snowflake concept, to super-X and beyond. This section reports in turn on results related to divertor detachment, on heat load dynamics in attached conditions, and on SOL transport and fluctuations.
L-mode. Detachment is studied primarily through density ramps and impurity seeding. In Ohmic L-mode plasmas, detachment achieved either by fueling or nitrogen seeding results in a reduction of the heat and particle loads at the strike points, as shown by both Langmuir probes (LPs) [72] and infrared thermography [73,74]. Only the outer target detaches in the case of fueling, whereas both the inner and outer targets detach in the case of seeding. Also, the familiar density 'shoulder' in the upstream SOL profile only appears in the former case [74].
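Returning to the conditional-sampling analysis of the transmitted ECRH power described above: the following is a minimal sketch, assuming the SOL fluctuation signal and the transmitted mm-wave power are available as equally sampled numpy arrays; the function name, threshold, and window size are illustrative choices, not the actual TCV analysis chain.

```python
import numpy as np

def conditional_average(reference, target, threshold=2.5, half_window=50):
    """Average `target` around times where `reference` exceeds `threshold`
    standard deviations -- classic conditional sampling."""
    ref = (reference - reference.mean()) / reference.std()
    # Indices of local maxima above the threshold in the reference signal.
    peaks = [i for i in range(1, len(ref) - 1)
             if ref[i] > threshold and ref[i] >= ref[i - 1] and ref[i] >= ref[i + 1]]
    windows = [target[i - half_window:i + half_window]
               for i in peaks
               if i >= half_window and i + half_window <= len(target)]
    return np.mean(windows, axis=0) if windows else None

# Usage (hypothetical signals): n_sol, p_trans = load_signals(...)
# avg_response = conditional_average(n_sol, p_trans)
```

Averaging the target signal around threshold-crossing events of the reference signal isolates the part of the transmitted-power oscillations that is time-locked to the SOL bursts.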
Novel spectroscopic analysis techniques [75] have yielded profiles of divertor ionization and recombination rates and of radiation along the divertor leg, which clearly demonstrate that detachment is caused by power 'starvation', i.e. a reduction in the ionization power source, combined with an increase in the energy required per ionization. Volume recombination plays only a minor role except with deep detachment at the highest densities reached. This is in agreement with analytical predictions as well as solps simulations [76]. Momentum losses of up to 70% develop along with power starvation and the onset of detachment, with charge exchange reactions dominating over ionization.
H-mode. Leveraging the experience accumulated in L-mode detachment studies in previous campaigns, attention has moved primarily to H-mode in the latest run. Contrary to the L-mode case, the forward-field configuration (with the ion ∇B drift directed towards the X-point) was used to facilitate the L-H transition. As the parameter space for stable ELMy H-mode accessible by NBH is relatively limited, a multi-pronged approach was pursued [77]. The clearest indications of partial detachment have been obtained at q95 = 3.9 (Ip = 210 kA); scans in divertor geometry, including X- and super-X configurations, were however conducted at q95 = 4.6 (Ip = 170 kA), where the ELMy H-mode regime is more robust; and to extend the study to low q95, detachment dynamics was also studied in ELM-free H-modes, as this is the regime naturally obtained at q95 = 2.4 (Ip = 340 kA). Line-averaged density in the ELMy plasmas is near the minimum in L-H power threshold as a function of density (5 × 10¹⁹ m⁻³). The power threshold itself is found to be largely insensitive to the divertor geometry (including the snowflake-minus, or SF-, case). Detachment was again sought with both fueling and nitrogen seeding. In the ELMy regime, partial inter-ELM detachment of the outer target was observed only with dominant seeding, with attendant power-load mitigation by a factor of two, a 30% reduction in ion saturation current accompanied by a change in its profile (figure 8), and the familiar upstream migration of the N II and C III radiation fronts towards the X-point [77]. Detachment in low-q ELM-free H-modes was also accompanied by a power-load reduction by a factor of two but no measurable decrease in the total particle flux. This regime is inherently non-stationary and short-lived (~200 ms) as the density increases uncontrollably until a disruptive limit is encountered. Analogously to previous L-mode investigations, scans of flux expansion were performed. The total flux expansion was varied by sweeping the outer leg, varying the major radius of the outer target by 40%. As in L-mode, this has no direct effect on the detachment process, although in the ELMy cases the movement of the impurity emission front (a proxy for divertor cooling) is 20% slower at the largest radius [78]. Simulations with solps are able to reproduce the insensitivity to strike-point radius, attributing it to the competing and counter-varying effects of flux expansion and power losses by ionization and radiation (stronger at small radius). Scans in poloidal-flux expansion were also performed at fixed target radius. In the ELMy case, at large flux expansion the H-mode shows signs of more resilience to nitrogen and a stronger drop in particle flux, and radiation along the outer leg is increased, although the radiation fraction is far lower than in L-mode [77].
ELM-free plasmas, attached or detached, exhibit a drop in particle and heat flux to the outer target with increasing flux expansion. This effect is attributed to a redistribution of the fluxes between the two targets (see section 5.2.1), which could dampen the benefit of flux expansion in a reactor [79].
Diverted plasmas. The issue of heat loads on the first wall, of crucial importance for the safe operation of a reactor, is intimately tied to SOL transport physics. The SOL heat-flux profiles, which are measured by infrared thermography, are almost universally parametrized using a main-SOL upstream-remapped power decay length (λq) and the so-called spreading factor (S), which describes the transport scale length in the divertor SOL [80] (the standard form of this parametrization is recalled below). Experiments were performed in attached, SN, Ohmic, low-density plasmas in TCV, in which the connection length was modified without a concomitant change in poloidal flux, by varying the vertical position of the plasma and thus the divertor leg length [81]. It was found that S is unaffected by this change, whereas λq increased monotonically with the leg length. A modeling effort with the simple Monte Carlo particle tracer (monalisa) as well as with the comprehensive transport code SolEdge2D-eirene assuming diffusive cross-field transport yields good agreement with the experimental heat flux in the short-leg case. As the leg becomes longer, however, the effect of ballooning turbulence at and below the X-point becomes more important. This is revealed by the first-principles turbulence code tokam3x [81], run in isothermal mode and reproducing experimental trends in the target density profile, which, similarly to the heat flux profile, shows an asymmetric broadening with leg length. Simulations with the solps-iter code [82], assuming diffusive, anomalous cross-field transport, were also able to show trends in agreement with the experiments, though further improvement could be achieved by including enhanced transport in the region of unfavorable magnetic curvature. Increasing the connection length by increasing the poloidal flux expansion has a much weaker effect on λq. By contrast, λq is found to decrease for increasing plasma current. Both an increase in divertor-leg length and in flux expansion have the effect of reducing the asymmetry in power load at the inner and outer targets, their ratio increasing to nearly unity at the largest values of flux expansion (figure 9) [83]. This variation is attributed to a decrease in the outer conductance, as indicated by emc3-Eirene simulations. A simple analytical model based on SOL conduction is remarkably successful in reproducing these effects, including the dependence of λq on plasma current [79]. The difference in λq between the inner and the outer divertor, as well as a dependence on the magnetic-field direction, are however not captured by this model. A study of transport and heat loads was conducted on the alternative SF-divertor configuration, using a fast reciprocating probe in addition to infrared thermography [84]. The power sharing between the inner and the outer divertor is modified by the appearance and position of the secondary X-point. A simple analytical model is used to derive a single effective width of the SOL heat-flux profile in the low-poloidal-field region. This width is found to be similar in the SN and HFS SF-configurations, whereas it doubles in the LFS SF-, even though the outer-midplane SOL profiles are similar (figure 10).
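For context, the parametrization behind λq and S referenced above [80] is commonly written as the convolution of an exponential SOL profile of decay length λq with a Gaussian of width S; a sketch of the standard form, with s̄ the upstream-remapped distance from the strike point and q_BG a background level:

$$ q(\bar{s}) = \frac{q_0}{2}\, \exp\!\left[\left(\frac{S}{2\lambda_q}\right)^{2} - \frac{\bar{s}}{\lambda_q}\right] \operatorname{erfc}\!\left(\frac{S}{2\lambda_q} - \frac{\bar{s}}{S}\right) + q_{\mathrm{BG}}. $$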
The increased diffusivity in the latter case cannot be explained by the pressure-driven plasma convection expected near the primary X-point, whereas it is consistent with ballooning interchange turbulence enhanced by the low poloidal field [84]. In the forward ∇B drift direction, the SF- exhibits double-peaked particle- and heat-flux profiles, which previous simulations were unable to reproduce. The simple conductance model described before also fails for this particular case [79]. The 2D edge transport code uedge was used (only on SN discharges thus far) to test the hypothesis that these discrepancies may be due to E × B drifts in addition to turbulent processes. The code was able to reproduce the double peaks whereas a control run with the drifts turned off did not; the variation in density and temperature between the forward- and reversed-field cases is also reproduced successfully, although minor discrepancies persist [85]. The effect of shaping was explored through a scan of the upper triangularity δup from negative to positive, in deuterium and helium plasmas and in both forward and reversed field [86]. The outer-divertor λq was found to be larger in helium (as in AUG) and to increase with δup. The inner-divertor λq, by contrast, is non-monotonic and reaches a maximum at δup = 0. The direction of the field is immaterial. This dependence on δup is not captured by standard scalings but is consistent with one scaling containing a dependence on the edge temperature [87], which is found to decrease with δup [86]. A limited study of heat expulsion by ELMs was conducted, in conjunction with AUG, with the specific aim of determining under which circumstances a second, slower ELM crash follows the first one, increasing the total energy released. The answer is that the second crash is observed only at high density and with intense fueling. The hypothesis that the second ELM crash is related to a threshold in pedestal pressure is disproven by this dataset [88].
Limited plasmas: the narrow SOL feature. It is by now well documented that inside-limited L-mode plasmas frequently exhibit a two-slope SOL parallel-heat-flux profile, which results in an enhanced wall heat load, potentially dangerous to a reactor during the limited ramp-up phase. It was reported earlier that the narrow feature disappears on TCV at low plasma current or high density [89]. A more recent study was specifically conducted using the reciprocating probe on the outboard side. A narrow feature is seen there but is considerably wider than that inferred from thermography measurements on the inboard side. However, the calculated power fraction contained in the feature is found to be equal for the two measurements, indicating that it is indeed the same phenomenon. The width of the feature is determined to scale with the radial correlation length of the turbulence, as is expected on theoretical grounds if it is due to sheared E × B drifts [90]. Nitrogen impurity seeding also has been demonstrated to eliminate this feature when the radiated power fraction exceeds 60%. The attendant 30% increase in effective charge may well be a tolerable price for the mitigated power flux. In addition, a radiative mantle is seen to persist long after the injection, resulting in enhanced core temperature [91].
Wall heat-flux control. A wide-angle visible and infrared viewing system is planned for ITER to protect the plasma-facing components (PFCs) from excessive power deposition in real time [92].
A model-based controller, which accounts for 3D effects in the PFCs, is being developed for this task. The controller is based on real-time equilibrium reconstruction, which is then used to describe the deposited heat flux as a magnetic-flux function with user-specified parameters for the power exhausted into the SOL and the SOL heat-flux width. The heat-flux observer has been validated in limited plasmas in TCV and was found to be in good agreement with the heat flux determined from infrared thermography [93].
SOL turbulence and transport
SOL turbulence studies focus primarily on the larger, field-aligned intermittent structures known as filaments or blobs. Considerable data analysis work has gone in particular into investigating the possible relation between filaments (characterized by the TCV reciprocating probe) and the flattened upstream density-profile feature observed in the SOL in many scenarios and termed the density shoulder. Density ramps with varying outer-target flux expansion were used to determine that (a) the filament size increases with density but is insensitive to the connection length, (b) the density gradient length increases with density in the near SOL but is unaffected in the far SOL, while both are insensitive to the connection length. It is concluded that flux expansion is not a viable tool to affect the density profile. It is believed that the shoulder formation requires high collisionality, but this appears not to be a sufficient condition in TCV [94]. These studies were extended more recently, in conjunction with AUG, through plasma current scans, both at constant toroidal magnetic field and at constant q95. No clear trend is evinced in the latter case, whereas at constant BT the shoulder is formed at lower edge density when the current is lower, consistent with an underlying dependence on the Greenwald fraction. Unlike on AUG [95], no clear correlation is found between the shoulder appearance and either filament size or divertor collisionality. As filaments in both devices originate primarily from resistive ballooning instabilities, the different behavior must be associated with other mechanisms, arising, presumably, from the radically different divertor geometry (closed in AUG, open in TCV) [95]. An extensive reciprocating-probe database was constructed for TCV and mined with novel analysis techniques to study the scalings of the radial velocity of filaments, motivated by analytical theory predictions [96] and with the goal of refining models that are crucial for the understanding of SOL transport. In absolute terms, filament diameters lie typically in the 3–11 mm range and their radial velocities are between 0.5–2 km s⁻¹ (outward); however, significant tails exist in the distribution and, in particular, inward velocities are observed for the first time, only in reversed field (ion ∇B drift pointing away from the X-point) and in conditions of high poloidal-velocity shear. The maximum velocity is a function of filament size and of divertor collisionality as predicted by theory, but the velocity of most filaments is in fact independent of collisionality owing to their resistive-ballooning character, which explains the insensitivity of velocity to density and connection length [97]. A study of flows and fluctuations in the low-poloidal-field region of a LFS SF-plasma was conducted with a fast framing visible-light camera.
As the normalized distance σ between X-points decreases, the flow in the outer SOL is unchanged, whereas it increases in the inner SOL; at the same time, the fluctuations between the X-points become uncorrelated from those above the primary X-point, suggesting the formation of filaments in the low-Bp region. In addition, the dominant motion of these filaments turns from poloidal to radial as σ decreases, consistent with an enhancement of cross-field transport [98].
Doublets
Beyond all current 'alternative' scenarios lies a long-dormant topological concept, the doublet [99,100] (with a figure-of-eight flux-surface featuring an internal X-point), which is believed to afford the benefits of high elongation with increased vertical stability [99] and promises tantalizing new physics associated with its internal X-point. The primary difficulty associated with this configuration is the inherent tendency of the two lobes to collapse into one, owing either to the magnetic attraction between the two current channels or to a thermal instability favoring one lobe over the other. With a uniquely suited coil set, TCV was the natural device on which to revisit this possibility using modern control technology. In preparation for this attempt, extensive work went into tuning the plasma control system to improve and optimize the plasma breakdown and burn-through, which had a non-negligible failure rate. Proper consideration of the large currents circulating in the conducting vessel during this phase was required for this task. With these tools in hand, a double breakdown and ramp-up was attempted, with only partial success in that the top lobe always coalesced rapidly into the lower one. The next step was to apply ECRH power separately to the two lobes, each controlled from its own lobe current observer, in an attempt to equalize the currents. A successful doublet was maintained in this manner for ~30 ms, with a current up to 270 kA and peak electron temperature 1.3 keV in both lobes (figure 11). These initial data suggest the appearance of a transport barrier in the negative-shear mantle just outside the internal separatrix [101]. Power and deposition scans also showed that the scenario was surprisingly robust against coalescence, suggesting that the transport barrier common to both lobes effectively sets the boundary condition for both and results in similar pressure profiles irrespective of the input power apportionment. The reasons for the disruption at ~30 ms are not currently understood, and additional research is planned to be conducted towards achieving steady state.
Conclusions and outlook
TCV is documenting the physics basis for ITER and exploring avenues for solving its most pressing concerns, while also casting a wide net in configuration space to identify viable alternatives for DEMO and an eventual fusion reactor. The experimental campaigns of the past two years have brought significant advances on all these fronts. Looking to the near future, a substantial upgrade program is now in full swing [102]. Additional auxiliary heating sources are being added as discussed in the introduction. Even more momentously, in-vessel baffles will be added to equip TCV with a partially closed divertor for the first time, allowing it to reach reactor-relevant neutral density and impurity compression [103]. We plan to fabricate baffles of different sizes, to be swapped in relatively short interventions, in order to vary the divertor closure and investigate how this affects plasma performance.
The first set to be installed comprises 32 baffles on the HFS and 64 on the LFS [104] (figure 12). Simulations with solps-iter and emc3-Eirene were performed to guide the design [103]. For enhanced control of the plasma and of the divertor region, dedicated pumps (e.g. cryo-pumps) are also under consideration in addition to toroidally distributed fuel and impurity injection valves. A phased program of diagnostic additions and upgrades is also in place. The first new diagnostics to be associated with the vessel upgrade will be ~50 new Langmuir probes, baratron gauges, infrared thermography, bolometry, divertor spectroscopy, divertor Thomson scattering, and additional Mirnov coils [102]. Note that most of TCV's versatility in plasma shaping will be preserved with this upgrade, which is in any case modular and reversible. This upgrade has been designed with the express goal of extending the TCV research program well into the ITER era.
Integrating Web-Based Collaborative Live Editing and Wireframing into a Model-Driven Web Engineering Process
Today's Model-Driven Web Engineering (MDWE) approaches automatically generate Web applications from conceptual, domain-specific models. This enhances productivity by simplifying the design process through a higher degree of abstraction. Due to this raised level of abstraction, the collaboration on conceptual models also opens up new use cases, such as the tighter involvement of non-technical stakeholders into Web development. However, especially in the early design stages of Web applications, common practices for requirement elicitation mostly rely on wireframes instead of MDWE, created usually in analog settings. Additionally, state-of-the-art MDWE should integrate established and emerging Web development features, such as Near Real-Time (NRT) collaborative modeling and shared editing on the generated code. The combination of collaborative modeling, coding and wireframing, all in NRT, bears a lot of potential for improving MDWE practices. The challenge when covering these requirements lies with synchronizing source code, wireframes and models, an essential need to cope with regular changes in the software architecture to provide the flexibility needed for agile MDWE. In this contribution, we present an MDWE approach with live code editing and wireframing capabilities. We present the conceptual considerations of our approach, the realization of it and the integration into an overarching development methodology. Following a design science approach, we present the cyclic iterations of developing and evaluating our artifacts, which show promising results for collaborative Web development tasks that could open the gate towards novel, collaborative and agile MDWE techniques.
Introduction
Current Model-Driven Web Engineering (MDWE) approaches try to increase productivity by enabling the generation of Web applications, based on information usually specified in the form of conceptual models [21]. Corresponding to a certain domain-specific metamodel, the models reflect the structure of Web frontends and abstract the pagination and the navigation of applications. Based on certain templates and incorporated framework-specific best practices, the resulting applications can be specified and instantiated accordingly. By splitting the metamodel into separate views that reflect separate parts of the application, different stakeholders can focus on different parts of application design, according to their background, expertise and interest. If used in a Near Real-Time (NRT) collaborative fashion, this approach bears the potential to better involve non-technical stakeholders in the development process and thereby also serves as a means to improve requirements elicitation. However, modeling alone often cannot depict the complexity of a Web application. Certain parts of an application are very specific, and while a metamodel can enforce the overall architecture of a Web application, often manual code editing is still needed to implement the complete application functionality. To adapt to this, a collaborative MDWE approach has to support development cycles with rapid changes in the model-based architecture and the corresponding source code, both being simultaneously edited. Hence, traditional methods that enable the synchronization between model and code need to be adapted to this collaborative setting.
On the other hand, modeling (and especially manual code editing) still requires rather good and specific development knowledge, in order to be able to model and modify the generated software artifacts. Software prototyping, often also called wireframing, is a popular software engineering method to quickly conceive the most important aspects of a software application at the early stages of software development. It is a collaborative and social process that involves designers, end users, developers and other stakeholders. In contrast to a conceptual model, which consists of rather abstract nodes and edges, a wireframe provides a closer representation of the final Web application's visual design. Consequently, a wireframe is more intuitive and feels more familiar to non-technical stakeholders. Such an application promises a lower learning curve, with less required knowledge about Web development. In order to achieve such a novel collaborative frontend development practice, live synchronization between models and wireframes has to be implemented. To illustrate this concept, we want to sketch a use case that integrates this novel MDWE practice. A professional community of medical doctors uses videos and images as main study and documentation objects in their training practice. We now assume that this community wants to integrate 3D objects (e.g., highly detailed digital representations of anatomical objects) in their training practices. Such features cannot be easily implemented without technical knowledge. On the other hand, they are also hard to explain to developers without deeper domain knowledge. Thus, the community uses a Web-based MDWE approach for requirements elicitation with (possibly external) developers. Doctors and developers can now distribute the work according to their domain-specific knowledge and work on the corresponding views. For example, doctors could produce wireframes to explain their proposed extension of the current system to the developers. Directly transforming these wireframes into models, developers start working on the corresponding models and source code, all directly in the browser and in NRT. At all times, the Web application is automatically generated and deployed on the Web, thus the community can follow along and provide direct feedback on the current state of the prototype. In this contribution, we present a Web-based MDWE approach that integrates both live code editing and wireframing, all in a NRT collaborative setting. This work provides a first complete view on the approach, including the interplay between the different, previously independently published parts [8-12]. We present additional evaluations, the embedding into an overarching development methodology, and a description of the underlying research methodology and its application in this research project. We start by presenting the background and related work of our contribution in Sect. 2. Our research is based on a design science methodology [17], which is presented in Sect. 3.
We present a conceptual overview of our approach in Sect. 4, which includes the presentation of the different views and representations of a Web application within our framework (Sect. 4.1), as well as the general application metamodel (Sect. 4.2). The approach is embedded into an overarching development methodology for the creation and deployment of peer-to-peer (p2p)-based microservices. We describe this integration in Sect. 4.3, which also includes the connection to a Web-based requirements analysis platform and the possibility to directly define monitoring capabilities of services within the MDWE environment. We apply related work on traceability [29] and synchronization [16] to realize the live code editing functionality (Sect. 5.1). We adapt related conceptual mappings of MDWE [23] and wireframes [35], and present a conceptual mapping for the co-evolution of models and wireframes in NRT (Sect. 5.2). Our approach is realized as an MDWE framework named Community Application Editor (CAE), which is described in Sect. 6. Here, we describe the user interface of the CAE (Sect. 6.1), its general architecture (Sect. 6.2) and the technical integration of both live code editing (Sect. 6.3) and wireframing (Sect. 6.4). We continue by giving a detailed overview on the evaluations we conducted during our research in Sect. 7, before we conclude this contribution in Sect. 8.
Background and Related Work
In this section, we describe the work related to our research. We start with an overview on MDWE in Sect. 2.1, before we present the background on transformation algorithms that we use for the realization of the live coding features (Sect. 2.2). We conclude this section with related work on wireframing, which includes works that define a structural user interface model that we took as a basis to realize the collaborative wireframing features (Sect. 2.3).
Model-Driven (Web) Engineering
Most MDWE approaches follow the philosophy of separation of concerns [18]. Based on a comprehensive metamodel, certain views are defined to reflect specific aspects of a Web application. One of the first MDWE approaches that obeyed the separation of concerns idea in MDWE was OOHDM [36], with the goal of dealing with the increasing complexity of Web applications. It described a methodology for systematic guidance to design large-scale, dynamic Web applications. The main activities of the OOHDM methodology comprised a conceptual, navigational and abstract user interface design and proposed how they are implemented in the final Web application. A slightly more recent, as well as ongoing, MDWE methodology is the UML-based Web Engineering (UWE) [20], which was conceived as a conservative extension of the UML. Thus, already existing concepts of the UML are not modified; the new extensions are just related to existing concepts. The first extensions are UML stereotypes, which are used to define new semantics for model elements, e.g., a navigation link. The Object Constraint Language (OCL) is used to define constraints and invariants for classes. UWE follows the separation of concerns principle to split up the modeling process into the conceptual, navigational, and presentation modeling part. WebML is another MDWE approach, developed in 2000. It does not propose another language for data modeling, but rather extends the UML and is compatible with classical notations such as ER diagrams [5]. WebML as well emphasizes the concept of separation of concerns. Therefore, the development process is divided into four distinct modeling phases. The structural model represents the content of the site expressed as UML class or ER diagram, and the hypertext model consists of a composition and navigation model. The former one describes which entities of the structural model are composed by a certain page and the latter one specifies the links between pages. The third one is the presentation model, which expresses the layout and graphical appearance of pages. Finally, the personalization model defines user and/or user group specific content. In 2013, WebML evolved into the Interaction Flow Modeling Language (IFML) [4] and was adopted as a standard by the Object Management Group (OMG). A rather recent implementation of the IFML specification in a Web-based editor is the Direwolf Model Academy [22], which also features NRT collaborative editing of IFML models. ArchiMate is an enterprise architecture modeling language [23]. Although not a direct MDWE approach, it is relevant related work for our approach because of its interpretation of the separation of concerns paradigm. It separates the content and visualization of the view. The main advantage of this is the usage of different visualizations on the same modeling approach and vice versa. The content of a view is derived from the base model and expressed in the same modeling concept. The visualization, on the other hand, can be completely different from the actual representation of the model. ArchiMate allows to define a set of modeling actions that alter the content of the model. These modeling actions are mapped to operations on a specific visualization of the view. This additional abstraction level allows defining any sort of visualization, like videos or dynamic charts.
In this contribution, we use the concept of view separation to map certain operations on the wireframing editor to operations on the modeling canvas, which alter the current state of the wireframing, respectively the modeling view. To our knowledge, there exists no implementation of an MDWE framework that allows for a complete cycle of collaborative modeling, coding and deployment of an application on the Web in NRT.
Transformation Algorithms
In the scope of MDWE, Model to Text (M2T) transformations are a special form of Model to Model (M2M) approaches, in which the target model consists of textual artifacts [25], in this case the source code of the generated Web application. The target model is generated based on transformation rules, defined with respect to a model's metamodel [24]. Template-based approaches are (together with visitor-based approaches) the most prominent solution for M2T transformations [6] (a minimal sketch is given at the end of this section). Here, text fragments consisting of static and dynamic sections are used for code generation. While dynamic sections are replaced by code depending on the parsed model, static sections represent code fragments not being altered by the content of the parsed model [28]. An important aspect of M2T transformations is model synchronization. It deals with the problem that upon regeneration, changes to the source model have to be integrated into the already generated (and possibly manually modified) source code. To achieve this, traces are used to identify manual source code changes during an M2T (re)transformation. In MDWE, managing traceability has evolved to one of the key challenges [2]. Another challenge is the decision on the appropriate granularity of traces, as the more detailed the links are, the more error-prone they become [15,37]. Formal definitions of model synchronization for M2M transformations have been proposed in [14,16].
Wireframing
In Web engineering, a wireframe is an agile prototyping technique to sketch the skeletal structure of a Web application [3]. There exists a plethora of wireframing and mockup tools on the Web. As examples, we introduce Balsamiq (one of the most used ones) and Mockingbird (which features NRT collaboration and is Web-based). The idea behind Balsamiq is not to build large and fully interactive prototypes, which take hundreds of hours to develop and may lead to costly refinements if something cannot be realized as intended. Instead, Balsamiq follows a more rapid development philosophy. This has the advantage that developers gain experience and evaluate components of the wireframe directly on a very early version of the Web application, which can also involve end user feedback. This feedback is used to tweak the wireframes, and the implementation process starts again. Therefore, Balsamiq offers only limited interactivity features on a wireframe. Mockingbird is a Web-based wireframing application that offers NRT collaborative editing. The graphical editor offers the most common UI elements of today's Web applications, which can be rearranged and resized freely on a page. Similar to Balsamiq, it is possible to link pages and preview them to demonstrate the Web application's interactivity flow.
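Returning to the template-based M2T transformations described earlier in this section: the following is a minimal sketch of the idea under simple assumptions, where dynamic sections are written as $variable$ placeholders. The template syntax and names are our own illustration, not the CAE's actual template format.

```python
import re

TEMPLATE = """\
// protected segment: generated from the model -- do not edit manually
function $functionName$() {
    // unprotected segment: manual refinements survive regeneration
    $functionBody$
}
"""

def generate(template: str, model: dict) -> str:
    """Replace dynamic sections ($variable$) with content from the parsed
    model; static sections are copied through unchanged."""
    return re.sub(r"\$(\w+)\$", lambda m: model[m.group(1)], template)

print(generate(TEMPLATE, {"functionName": "sendChatMessage",
                          "functionBody": "// TODO: implement"}))
```

Static sections (here the function skeleton) pass through unchanged, while dynamic sections are filled from the parsed model; trace-based synchronization then has to remember where the generated parts end and the manual refinements begin.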
Mockup Driven Development (MockupDD) is a hybrid, model-based and agile Web engineering approach [35]. The main goal of MockupDD is to extract and combine the advantages of MDWE methodologies and the rapid collaborative design process of wireframing, to add agility to existing MDWE approaches. MockupDD describes a transformation approach from a mockup to a comprehensive model that is further transformed to the specific models of an arbitrary MDWE approach. In most related approaches, wireframes are not considered as models, and their impact declines in later development stages. MockupDD tackles this with a generic approach to integrate mockups directly into the whole MDWE development process. An additional computational instance builds the bridge from the output of an arbitrary wireframing tool to an arbitrary MDWE approach. The MockupDD methodology begins by creating UI mockups with an arbitrary tool, e.g., Balsamiq or Mockingbird. The resulting mockup file is then parsed, validated and analyzed with regard to a Structural UI (SUI) metamodel, which denotes each UI control element, their compositions and hierarchical structures. The goal is to obtain a "sufficient enough" structural model of the UI. Based on this SUI model, another transformation approach to the specific model of the used MDWE methodology is required. To further enrich the representational strength of a SUI model, MockupDD includes a tagging mechanism. A tag is a simple specification that is applied over a concrete node of the SUI model and consists of a name and an arbitrary number of attributes. The main purpose of a tag is to define functional or behavioral aspects of a certain UI element. It allows the designer to construct more complex wireframe specifications. A UI element may have an arbitrary number of tags assigned to it. A SUI model enriched with tags is also called a SUIT model. The concept of MockupDD has been adapted to various modeling languages and domains (WebML [35], UWE [33], IFML [34], and specifically focusing on mockups of touch user interfaces [1]). We base our wireframing integration partly on the conceptual findings of MockupDD, and take the approach one step further by co-evolving the wireframe and MDWE artifacts throughout the whole Web application development process. To our knowledge, there exists no approach that enables Web-based wireframing while simultaneously providing a way to transfer the wireframes to conceptual models (or code) and vice versa. As such, our approach combines the conceptual findings of research in the domain of wireframe-to-model transformations with the applicability of existing Web-based wireframing solutions.
Methodology
Our methodology follows a design science approach as proposed by Hevner [17], and applies the guidelines proposed by Peffers [30].
Figure 1 depicts this process, which consists of seven iterations. We started with the initial question of how to integrate end users more into development, to close the gap in requirement elicitation. This led to the development of the initial CAE prototype, which we used to redesign an existing Web application that showcased its usability. These results were communicated in a demo paper [8]. The first usage of the CAE clearly pointed out that a more defined development process was needed. Thus, we started to create the agile and cyclic development process that the CAE approach currently implements. We first evaluated this approach in multiple evaluation sessions with teams of mixed professions, and we also observed its usage over a longer timespan in a university software development lab course. Results were communicated in [9,10]. These evaluations showed a lack of expressiveness of the modeling language for certain aspects of a Web application, which we tackled by developing the Live Code Editor. The results of the evaluation with student developers were published in [12]. As both the Live Code Editor and the collaborative modeling are still rather abstract, especially in frontend development, our next step was the integration of the collaborative Wireframing Editor, which we published in [11]. We continued measuring the impact of the Wireframing Editor by evaluating the time spent in different views of the CAE, and finally we embedded the CAE even further into its overarching methodology by implementing service success measurement support. The combined results are published in this contribution.
Conceptual Overview
In this section, we provide an overview on our approach as a whole (Sect. 4.1), its underlying metamodel (Sect. 4.2), and its integration into an overarching development methodology (Sect. 4.3).
View-Based Model-Driven Web Engineering
Our approach follows the separation of concerns principle [18] and defines four orthogonal views [26] for the modeling of Web applications, based on a comprehensive metamodel. To illustrate this concept, Fig. 3 depicts four representations of the same frontend component; Figure 3a shows the conceptual model, and Figures 3b-d show the remaining representations.
A Web Application Metamodel
Although our general approach could be used for arbitrary MDWE frameworks and Web applications, in the scope of this work we consider Web applications composed of HTML5 and JavaScript frontends, and RESTful microservice backends. Figure 4 depicts this Web application metamodel. The central entity of a microservice is a RESTful Resource. It contains HTTP Methods, which form the interface for communication, either via a RESTful approach or via an Internal Service Call from one HTTP method to another, possibly between different microservices. To enable service monitoring, each HTTP method can be enhanced with multiple Monitoring Messages. According to the idea of polyglot persistence, each microservice can have access to its own Database instance.
The central entity of a frontend component is a Widget. This widget consists of Functions and HTML Elements. HTML elements can either be static, meaning that they are not modified by any other element or functionality of the component, or dynamic, meaning that they are either created or updated by one of the frontend component's elements. Both static and dynamic HTML elements can trigger events, which can, for example, be a mouse click, and cause function calls. The second option to trigger a function call is via an Inter Widget Communication (IWC) Response object that waits for an IWC Call to be triggered. These calls are again part of a function, which initiates them. A function is able to update or create a dynamic HTML element. The last part of the frontend component view is the communication and collaboration functionality, which includes the already mentioned IWC call response mechanism, as well as microservice calls that are triggered by a function. HTML elements can also be instrumentalized with collaborative support, making it possible for elements to share the same state among collaborating users (a data-class sketch of this frontend metamodel is given below).
Embedding in Open Source P2P-Based Community Service Development Methodology
We integrated our MDWE framework into an agile methodology for distributing community microservices in an open source, decentralized p2p infrastructure [7]. The central part of the methodology is the p2p microservice framework itself, called las2peer [19], which consists of microservice hosting nodes. Additionally, a monitoring and evaluation suite [32] collects monitoring information in the network in a decentralized manner and can provide it centrally at a specified monitoring node. Our MDWE framework integrates itself as a means for speeding up development and enforcing best practices, with the possibility for continuous deployment of the developed services directly in the infrastructure. To connect the framework more tightly with the overarching las2peer methodology and to further support requirement elicitation, we recently integrated the Requirements Bazaar [31], a Web application that is especially targeted at including end users in requirement gathering and discussion. A modeling space can be connected with a corresponding project in the Requirements Bazaar, and requirements can directly be discussed during the modeling process in a special component of the editor. Additionally, we also created modeling elements for different types of Monitoring Messages. These can be used to measure certain success features of a service, according to las2peer's monitoring and evaluation suite. Specifically, Monitoring Messages can be connected to an HTTP Method, which then logs its processing time, to an HTTP Payload, which logs the payload content, or to an HTTP Response, to log the response content. These measures are available to be extended in the Live Code Editor or to be directly used in the service success modeling [19].
Conceptual Integration of Live Code Editing and Wireframing Support
In this section, we describe the formal conceptual integration of both the live code editing (Sect. 5.1) and the wireframing support (Sect. 5.2).
Model Synchronization Strategies for Live Code Editing
We unify the architecture of applications developed with our approach through the usage of protected segments that enforce a certain base architecture, facilitating future service and frontend orchestration, maintenance and training efforts for new developers.
Fig. 4 The Web application metamodel used in this contribution.
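A minimal sketch of the frontend-component part of the metamodel described above, rendered as plain Python data classes; the field names mirror the entities of the text, but the representation itself is an illustrative assumption, not the CAE's actual implementation.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class HTMLElement:
    id: str
    type: str                     # e.g. "button", "video", "canvas"
    static: bool = True           # False: created/updated by a function
    collaborative: bool = False   # True: element state is shared in NRT

@dataclass
class Function:
    name: str
    updates: List[str] = field(default_factory=list)             # ids of dynamic HTML elements
    microservice_calls: List[str] = field(default_factory=list)  # invoked HTTP methods
    iwc_calls: List[str] = field(default_factory=list)           # emitted IWC calls

@dataclass
class Event:
    source_element: str   # id of the triggering HTML element
    kind: str             # e.g. "click"
    calls: str            # name of the function being called

@dataclass
class Widget:
    name: str
    elements: List[HTMLElement] = field(default_factory=list)
    functions: List[Function] = field(default_factory=list)
    events: List[Event] = field(default_factory=list)
```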
Protected segments in the source code describe a functionality that is reflected by a modeling element. In order to encourage the reuse of software components, we allow changes which modify the architecture only in modeling phases. Since our approach offers a cyclic development process, this can be done instantly by switching to modeling, changing the corresponding element and returning to a new coding phase. To further enforce this methodology, before source code changes are persisted, a model violation detection is performed. This informs the user about source code violating its corresponding model, e.g., architecture elements manually added to the source code instead of being modeled. Concerning the synchronization between the code and the model, our collaborative MDWE process uses a trace-based approach. Changes in the code produce traces, which are used in the model-to-code (re)generation in order to keep the corresponding code synchronized to the model elements. This way, the process can be reflected without the need to implement a full RTE approach. Our general synchronization concept (depicted in Fig. 5) is divided into two separate synchronizations: a synchronization between the source code and its trace model and a second synchronization between the source model and the source code. In the following, we explain our synchronization concept by using a simple formalization. We denote the source models by S_i, source code models by T_i and trace models by tr_i. The source and source code metamodels are denoted by M_S and M_T. We use the definition of synchronization expressed in [16] as follows: two models A and B with corresponding metamodels M_A and M_B are synchronized if

trans(A) = strip(B, trans)    (1)

holds for the transformation trans: M_A → M_B and a function strip: M × (M_A → M_B) → M that reduces a model of M, with either M = M_A or M = M_B, to only its elements relevant for the transformation. This definition uses the trans and strip functions [16]. Intuitively, the trans function expresses that applying a transformation to the source model yields the target model. The function strip is used to remove any additional elements and map models to only the relevant source/target model.
Synchronization of Source Code and Trace Model
Based on a first model S_1, an initial generation of the source code T_1 and its trace model tr_1 is performed. As depicted in Fig. 5, the trace model tr_i is updated once the source code changes.
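The protected-segment and model-violation idea can be sketched as follows, under the simple assumption that the trace model stores, per segment, its length, whether it is protected, and the content last produced by generation; all names are illustrative.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Segment:
    length: int
    protected: bool   # True: reflects a modeling element, read-only in the code view
    generated: str = ""  # content last produced by model-to-code generation

def find_violations(source: str, segments: List[Segment]) -> List[int]:
    """Return indices of protected segments whose code no longer matches
    the generated content (i.e. architecture edited by hand)."""
    violations, pos = [], 0
    for i, seg in enumerate(segments):
        chunk = source[pos:pos + seg.length]
        if seg.protected and chunk != seg.generated:
            violations.append(i)
        pos += seg.length
    return violations
```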
Source code changes are denoted by +c,n and −c,n: while the former inserts a character c ∈ C at position n ∈ ℕ, the latter deletes the character c at position n in the source code. Applying such source code changes to T_2i−1 yields the updated source code T_2i, and the condition

trans(T_2i) = strip(tr_i, trans)    (2)

must hold for the synchronization between the updated source code T_2i and the trace model tr_i. For the synchronization between source code and trace model, we only need to update the lengths of the segments of the trace model. Therefore, we assume strip(tr_i, trans) = len(tr_i), where len(tr_i) = (l_1, ..., l_m) is a tuple containing the segments' lengths. This leads to the following equation that must hold after the source code was updated:

trans(T_2i) = len(tr_i).

To satisfy this condition, each source code change needs to update the length of the segment that is affected by the deletion or insertion. Therefore, each change ±c,n on M_T is transformed to an update of the trace model tr_i:

(l_1, ..., l_j, ..., l_m) ↦ (l_1, ..., l_j ± 1, ..., l_m),

where l_i ∈ ℕ for i, m ∈ ℕ, 1 ≤ i ≤ m is the length of the ith segment and l_j, j ∈ ℕ, 1 ≤ j ≤ m is the length of the segment that is affected by an insertion or deletion in the source code at position n.
Synchronization of Model and Source Code
In the model synchronization process, the last synchronized model S_i, the updated model S_i+1, the current trace model and the last synchronized source code T_2i are involved. By using the trace model of S_i, the applied model changes ΔS_i can be merged into the last synchronized source code T_2i without overwriting already implemented code refinements. As a result of the merge, the new source code T_2i+1 is obtained. In general, model changes can be defined as functions of the form δ: M_S → M_S. More specifically, the model changes can be denoted by the following five functions, adapted from [16]: +_t and −_t, creating/deleting an element of type t; +_{e,s1,s2} and −_{e,s1,s2}, adding/deleting an edge from element s1 to s2; and attr_{a,s1,v}, setting attribute a of element s1 to value v. As such, applying ΔS_i to S_i can be defined as a sequence of these changes:

S_i+1 = (δ_n ∘ ... ∘ δ_1)(S_i).

According to Eq. 1, the following equation must hold for the synchronization between model and source code:

trans(S_i+1) = strip(T_2i+1, trans).    (11)

Furthermore, as all parts of the source code that directly correspond to model elements are contained in protected segments, we assume strip(T_2i+1, trans) = prot(T_2i+1), where prot(T_2i+1) represents the source code reduced to the content of its protected segments. Finally, this leads to the following equation that must hold after the synchronization process:

trans(S_i+1) = prot(T_2i+1).

To satisfy this equation, each individual model change δ_i, i ∈ ℕ, 1 ≤ i ≤ n is transformed to its corresponding source code changes. Next, we first introduce formulas that are needed for the later transformations. Attribute value: the value of the attribute labeled name of a model element elm is denoted by attr_name(elm). Position and length of an element: the position of the first character of a model element elm within a file is defined by pos_seg(elm); the length of elm is defined by len_seg(elm). Position and length of an attribute: the position of the first character of an attribute a of a model element elm is defined by pos_attr(a, elm); the length of a is defined by len_attr(a, elm).
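The first synchronization then reduces to simple bookkeeping, directly mirroring the segment-length update above: an insertion or deletion at position n changes only the length of the segment containing n. A minimal sketch, representing len(tr_i) as a plain list of integers:

```python
def affected_segment(lengths, n):
    """Index j of the trace segment containing source-code position n."""
    pos = 0
    for j, length in enumerate(lengths):
        if pos <= n < pos + length:
            return j
        pos += length
    return len(lengths) - 1

def apply_code_change(lengths, n, inserted=True):
    """Trace-model update for +c,n / -c,n: only the affected segment's
    length changes, so trans(T_2i) = len(tr_i) keeps holding."""
    j = affected_segment(lengths, n)
    lengths[j] += 1 if inserted else -1
    return lengths

# Example: three segments of lengths (12, 30, 8); insert a character at position 17.
print(apply_code_change([12, 30, 8], 17))   # -> [12, 31, 8]
```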
Template: a template for an element elm of type t is a sequence of characters (c_1, …, c_k) with c_i ∈ C for k, i ∈ ℕ, 1 ≤ i ≤ k. The attributes are used for the instantiation of the variables occurring in the template. We further define two functions that ease the formulas for deleting and inserting multiple characters: while the former inserts a tuple of characters starting from position n into a file, the latter deletes the characters c_n, …, c_{n+k} at the positions n, …, n + k from a file. The transformation of model to source code changes is highly dependent on the type of the updated model elements. An elaborate example of such a transformation can be found in [12].

Mapping Between SUIT Wireframing and Frontend Component Model

Inspired by the concepts of MockupDD and following the view separation of ArchiMate (both presented in Sect. 2.3), we developed a SUIT model for the wireframing integration and defined the transformations of this model to the frontend component metamodel, depicted in Fig. 6. The SUIT model of the wireframing editor comprises the most common HTML elements of the current HTML5 standard. It offers simple structural elements like buttons, text boxes and containers. Furthermore, media elements like the HTML5 video and audio player and custom Polymer elements are supported. Also, compositions of elements are defined, like a checkbox with a label. Each UI control element of the SUIT model has its own set of attributes defined, according to the HTML5 standard. We also introduced a so-called Shared-Tag that can be assigned to any UI control element to add NRT collaborative behavior to it.

Conceptually, an instance of the SUIT model is a labeled tree. We formally define such a tree as a connected, acyclic and labeled graph. An arbitrary element v ∈ V always has the signature v = (l, t, A), where l ∈ Σ is the label, with Σ being a finite alphabet of vertex and edge labels. t ∈ T is the type of the node, where T is either a UI control element or a tag defined in the SUIT metamodel, as depicted in Fig. 6. For example, T might consist of the following elements: UI = {Text, Button, Video, Canvas, ...} and Tag = {SharedTag, DynamicTag, ...} with T = UI ∪ Tag. A is a finite set of properties related to a UI control element or tag, and each a ∈ A is a key-value pair (k, v) with k, v ∈ Σ. The tree always contains a distinguished vertex r, which is also called the root. The root is always of type Widget. The parent(v) function is a helper function that yields the parent vertex for a vertex of the SUIT tree. If the vertex v is the root, the root will be returned.

Definition 1 A SUIT model is a labeled tree with SUIT = (V, E). V is a finite, non-empty set of vertices. V is always initialized with the root r. E is a set of unordered pairs of distinct vertices (v1, v2) with v1 ≠ v2, which constitutes the edges of the tree.

For the integration into our MDWE approach, a SUIT model is mapped to an instance of the frontend component view. Let VP = (V, E) be an acyclic, directed graph that represents an arbitrary view. An edge e ∈ E of such a graph has the signature (l, t, v1, v2, A), where l ∈ Σ, t is the type of the edge, v1, v2 ∈ V and A is a set of key-value pairs that constitute the attributes of the edge.
Definition 2 An instance M of a view of VP is an acyclic, directed graph with M = (V′, E′). For each v ∈ V′, type(v) ∈ label(V) holds, with type and label being helper functions. Analogously, these functions are defined for an edge e ∈ E′ of a view.

Now let VP_wireframe be the acyclic directed graph representing an arbitrary instance of the wireframe view and W_SUIT a SUIT model representing a concrete wireframe. An instance of the SUIT model is mapped to an instance of the wireframe view with a node mapping function, defined case-wise as follows:

(l, t, A) ↦ (l, HTML Element, A ∪ {(type, t)}), for t ∈ UI
(l, SharedTag, ∅) ↦ (l′, HTML Element, shared(A′)), for l′ = parent(l)
(l, DynamicTag, ∅) ↦ (l′, HTML Element, dynamic(A′)), for l′ = parent(l)
(l, Widget, A) ↦ (l, Widget, A), otherwise

where shared and dynamic are functions that are applied to every attribute in A of the referenced 'HTML Element' node. These helper functions change the value of the 'collaborative', respectively 'static', attribute for the referenced 'HTML Element' node; all other attributes are left untouched. The relationships between the nodes in the wireframe view are generated with a second, edge mapping function, explained below. With these two functions, we only map the UI elements of the SUIT model to the wireframe view.

An 'HTML Element' node of the frontend component, respectively wireframe view, consists of the four properties id, type, static and collaborative. The id of the HTML element is automatically generated by the mapping approach. The value of the type attribute is an element from UI. The static and collaborative attributes are the only attributes represented as tags in the SUIT model. Furthermore, they are simple Boolean attributes and therefore have no own attributes defined. Additionally, the tags are unique and thus they only appear once for a certain UI element.

A node of the SUIT tree is mapped to a certain 'HTML Element' or 'Widget' node. An arbitrary UI element of the SUIT model is always mapped to an instance of the 'HTML Element' node class, where the label of the UI element is the label of the node. The type of the UI element is mapped to the type-attribute of the node. By default, the 'static' attribute is true and the 'collaborative' attribute is false. To change the values of these attributes, a DynamicTag, respectively SharedTag, element is mapped to the corresponding attribute in the 'HTML Element' node. For each tag, a function is required which alters a certain aspect of the signature of an 'HTML Element' node (e.g. type or attribute). For the definition of the current mapping approach, the two helper functions shared and dynamic are defined, which change the Boolean value of the associated attribute. The root element of the SUIT tree is always mapped to the 'Widget' node, where the label of the root is also the label of the 'Widget' node. The same holds for the attributes. With the edge mapping function, the relationships between nodes are generated. The function comprises two cases. If the UI element is a direct child of the root, a single 'Widget To HTML Element' edge is created (abbreviated in the function with 'Wid. To El.', due to space restrictions). For the second case, we assume that v1 is a parent of v2 and v1 and v2 are not the root. Then, the 'hasChild' relationship is generated.
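To make the mapping concrete, the following Python sketch mimics the two functions: SUIT vertices become 'Widget'/'HTML Element' nodes, Shared/Dynamic tags flip the parent's 'collaborative'/'static' attributes, and edges are generated according to the two cases above. All class, function and attribute-key names here are illustrative assumptions, not identifiers from the CAE code base.

```python
UI = {"Text", "Button", "Video", "Canvas"}

def map_vertex(label, vtype, attrs, parent_label, out_nodes):
    """Case-wise node mapping of a SUIT vertex (label, type, attrs)."""
    if vtype == "Widget":                 # root maps to the 'Widget' node
        out_nodes[label] = {"type": "Widget", **attrs}
    elif vtype in UI:                     # UI element -> 'HTML Element' node
        out_nodes[label] = {"type": vtype, "static": True,
                            "collaborative": False}
    elif vtype == "SharedTag":            # shared(A'): collaborative := true
        out_nodes[parent_label]["collaborative"] = True
    elif vtype == "DynamicTag":           # dynamic(A'): static := false
        out_nodes[parent_label]["static"] = False

def map_edges(tree_edges, root, types, out_edges):
    """Edge mapping: every UI node links to the root; non-root parents
    additionally yield a 'hasChild' edge. Tags produce no edges."""
    for parent, child in tree_edges:
        if types[child] not in UI:
            continue
        out_edges.append((root, child, "Widget To HTML Element"))
        if parent != root:
            out_edges.append((parent, child, "hasChild"))

nodes, edges = {}, []
map_vertex("root", "Widget", {"name": "MyWidget"}, None, nodes)
map_vertex("btn1", "Button", {}, "root", nodes)
map_vertex("tag1", "SharedTag", {}, "btn1", nodes)
map_edges([("root", "btn1"), ("btn1", "tag1")], "root",
          {"btn1": "Button", "tag1": "SharedTag"}, edges)
print(nodes["btn1"])  # {'type': 'Button', 'static': True, 'collaborative': True}
print(edges)          # [('root', 'btn1', 'Widget To HTML Element')]
```

Note how the SharedTag never appears in the output graph; it only toggles an attribute of its parent element, which is exactly the behavior the case rules above prescribe.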
Realization

In this section, we present the user interface of the CAE (Sect. 6.1), its general architecture (Sect. 6.2) and the integration of the live code editing (Sect. 6.3) and wireframing (Sect. 6.4). The CAE supports common utility functions, like copy&paste, deletion of an arbitrary number of selected elements and an undo&redo functionality for the Modeling Canvas, the Live Code Editor and the Wireframing Editor. It uses an automatic save functionality: each change in the editor saves the current state of all models to the shared editing framework in the Yjs shared data space. Nevertheless, since this is not a persistent storage, the editor offers a persistence functionality, such that all models also get stored in a relational database. The editor provides awareness features to support the collaboration. The activity widget shows all collaborators currently working in one of the different views. If a remote user selects one or more elements, each element is highlighted with a surrounding frame and marked with the image of the OpenID Connect profile they used to log into the CAE.

Architectural Overview

Figure 8 provides an overview of the complete architecture. Our frontend is composed of HTML5 Web components and (apart from the wireframe model, which uses an XML representation) uses JSON representations of the models. We use a lightweight meta-modeling framework, called SyncMeta [13], to realize the collaborative modeling functionalities of the CAE. It supports NRT collaborative modeling by using Yjs [27], a Conflict-free Replicated Data Type (CRDT) framework. For communication with the backend, we use a RESTful interface.

The backend, realized as a las2peer network itself, is composed of two services. The Model Persistence service manages the persistence of the microservice and frontend component models (together with their enriching wireframe SUIT models, if existing; see Sect. 6.4) in a relational database. The Code Generation service implements the synchronization with the trace models (see Sect. 6.3) and is responsible for generating the resulting source code from the models and trace models. The source code is directly pushed to a GitHub repository, using commit messages to create a history of the modeling process for later reference. We use a Jenkins-Docker continuous deployment pipeline to deploy the resulting services in a las2peer network, directly from the modeling environment.
Trace Generation and Synchronization

A template engine forms the main component for trace generation and model synchronization. It is used both for the initial code generation and for further model synchronization processes. Figure 9 depicts our trace model, adapted from the metamodel of traces presented in [29]. For each FileTraceModel, and thus for each file, we instantiate a template engine class, which can hold several template objects. A template object is a composition of Segments, generated by parsing a template file. A template file contains the basic structure of an element of the Web application's metamodel. In such a file, variables are defined to be used as placeholders, which are later replaced with their final values from the model. Additionally, the template file contains information about which part of the generated source code is protected or unprotected. Based on the template syntax for variables and unprotected parts, a template file is parsed and transformed into a composition of segments of the trace model. For each variable a protected segment is added, and for each unprotected part an unprotected segment is added to the composition. The parts of a template file that are neither variables nor unprotected parts are also added to the composition as protected segments. According to our previous definition of model synchronization for M2T transformations, Eq. 1 must hold for the model synchronization. Thus, we need to update the content of each variable for all templates of all model elements. However, maintaining a trace and a model element reference for all of these variables is not feasible due to the large size of such a file trace model. Instead, traces are only explicitly maintained for the composition of segments of a template. Linking a segment of a variable to its model element is done implicitly by using the element's id as a prefix for its segment id. When templates are appended to a variable, the type of its linked segment is changed to a composition.

We implemented an initial generation strategy and a synchronization strategy, which are used by our template engine. Each synchronization strategy instance holds a reference to the file trace model of the last synchronized source code to detect new model elements as well as to find source code artifacts of updated model elements.
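As an illustration of the parsing step, the sketch below turns a template file into the segment composition just described: variables become protected segments whose ids are prefixed with the model element's id, unprotected parts become unprotected segments, and all remaining literal text is protected. The ${...} and $unprotected$...$end$ syntax is an assumption made for this example; the paper does not spell out the concrete template syntax.

```python
import re

# Hypothetical template syntax: ${var} marks a variable,
# $unprotected$...$end$ marks a user-editable region.
TOKEN = re.compile(r"\$\{(\w+)\}|\$unprotected\$(.*?)\$end\$", re.S)

def parse_template(text, element_id):
    """Return a flat list of (kind, content, segment_id) tuples."""
    segments, pos = [], 0
    for m in TOKEN.finditer(text):
        if m.start() > pos:   # literal template text -> protected segment
            segments.append(("protected", text[pos:m.start()], None))
        if m.group(1):        # variable -> protected, id prefixed by element id
            segments.append(("protected", "", f"{element_id}:{m.group(1)}"))
        else:                 # unprotected part, preserved across regeneration
            segments.append(("unprotected", m.group(2), None))
        pos = m.end()
    if pos < len(text):
        segments.append(("protected", text[pos:], None))
    return segments

tpl = "class ${name} {\n$unprotected$ // your code $end$\n}"
for kind, content, seg_id in parse_template(tpl, "elm42"):
    print(kind, repr(content), seg_id)
```

The element-id prefix in the variable's segment id mirrors the implicit linking described above, so no explicit trace entry per variable is needed.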
Fig. 8 Architecture of the CAE

As source code artifacts of model elements can in some cases be located in different files, a synchronization strategy can also hold multiple file trace models in order to find code artifacts across files. After a template engine and its template strategy have been properly initialized, the template engine is passed as an argument to the code generators. These create template instances for the model elements based on the template engine. The engine checks if a segment of the model element is contained in the trace model's file by recursively traversing its segments. If a corresponding segment for the model element was found, a template reusing this segment is returned. Otherwise, a new composition of segments, obtained by parsing the template file of the model element, is used for the returned template. For new model elements, new source code artifacts are generated. For updated elements, their corresponding artifacts are reused and updated. As templates can contain other templates in their variables, these nested templates need to be synchronized as well. In the generated final files, source code artifacts of model elements that were deleted in the updated model must be removed from the source code. Therefore, the nested templates, more specifically their segment compositions, are replaced with special segments by the synchronization strategy. These special segments are used as proxies for the original compositions and ensure that templates of deleted model elements are removed from the final source code.
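A minimal sketch of the per-edit bookkeeping from the synchronization formalization earlier in this section: an insertion or deletion at file position n changes the length of the affected segment by ±1 and shifts all later segments so positions stay consistent. The flat Segment class is a hypothetical stand-in for the composition structure of the actual trace model.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Segment:
    id: str
    start: int    # position of the first character in the file
    length: int

def affected_segment(segments: List[Segment], n: int) -> Segment:
    """Return the segment containing file position n."""
    for seg in segments:
        if seg.start <= n < seg.start + seg.length:
            return seg
    raise ValueError(f"no segment covers position {n}")

def apply_edit(segments: List[Segment], n: int, delta: int) -> None:
    """delta = +1 for an inserted character, -1 for a deleted one.
    Updates len(tr_i): the affected segment grows or shrinks and all
    segments that start later are shifted accordingly."""
    seg = affected_segment(segments, n)
    seg.length += delta
    for other in segments:
        if other.start > seg.start:
            other.start += delta

# Usage: one protected and one unprotected segment; insert a char at pos 12.
trace = [Segment("widget:head", 0, 10), Segment("unprotected1", 10, 5)]
apply_edit(trace, 12, +1)
assert trace[1].length == 6   # l_j + 1 for the affected segment
```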
Wireframe and Frontend Component Model Transformations

Model to Wireframe Transformation

The input for this transformation is a JSON representation of the frontend component model and an instance of the wireframe editor. The latter is required to map the 'type'-attribute of an 'HTML Element' node to the correct UI control element. Since the wireframe only represents the HTML elements of the frontend component model, we only have to consider the 'Widget' node (for the size of the whole frontend component) and those 'HTML Element' nodes that are connected to the 'Widget' node and marked as static. All other node and edge types of the frontend component model can be ignored for this transformation. As already described for the previous transformation algorithm, certain UI layout information (for example the size and position of elements) is not present in the frontend component model. Thus, we initialize these attributes with default values defined in the wireframe model. Finally, the transformation algorithm assigns the 'shared'-tag to every 'HTML Element' node which has the 'collaborative'-attribute set to true. The result of the transformation approach is an XML document that represents the wireframe model. The resulting model is then stored in the shared data space alongside the frontend component model.

Live Mapper

The live mapper listens to events of the Modeling Canvas of the frontend component modeling view and to the Wireframing Editor. In contrast to the two previously described transformations, the live mapper directly applies changes to the wireframe and frontend component model and visualizes the results in NRT. Additionally, the live mapper provides awareness features for the selection of entities on both the Modeling Canvas and the Wireframing Editor. To give an example of the live mapping, the creation of a button element in the Wireframing Editor leads to five to six operations on the Modeling Canvas. First, the node is created on the Modeling Canvas, then the 'type'-, 'id'- and 'static'-attributes are set and the new node is connected to the 'Widget' node. If the button is placed in a container, an additional edge is created between the 'HTML Element' node representing the container and the new node that represents the button. Furthermore, it is possible to edit the wireframe model through the frontend component model view. For example, one can create any UI control element in the Wireframing Editor through the Modeling Canvas by creating an 'HTML Element' node, connecting it to the 'Widget' node and setting the 'static'-attribute to true. After each action on the Wireframing Editor, an auto-layout algorithm for directed graphs is applied to the Modeling Canvas, only manipulating those elements that were updated.

Evaluation

We evaluated our approach in several iterations, which we describe in this section. For evaluation questionnaires that were filled out at the end of the user evaluation sessions, we used a five-point Likert scale (1-5). For a more extensive coverage of the evaluations of Sects. 7.1-7.5, the corresponding publications mentioned in Sect. 3 can be considered.

Initial Evaluation

We successfully used the CAE to redesign an existing collaborative Web application used for graph-based storytelling [8]. While this evaluation was only conducted internally, we used it as a first proof-of-concept usage scenario for the CAE.
It provided initial feedback on the usability and acted as a first and ongoing test case that led to several necessary improvements of the framework before we were able to apply it in the following user evaluations.

Evaluation with Heterogeneous Teams

After we successfully defined the agile and cyclic development approach that builds the basis for development with the CAE, we conducted our first user evaluation [9].

Participants and Procedure

We considered groups of two to three people with various technical backgrounds. We carried out 13 sessions, with a total of 36 participants. The groups consisted of at least one experienced Web developer and at least one member without any technical experience in Web development, who received a description of the application to be designed. During the evaluation session, the non-technical members had to communicate the requirements to the developer team and collaboratively implement the application using the CAE. Each session lasted about 45 minutes. The goal of this study was to assess the role of NRT collaboration for the development process, and whether our approach improves the integration of non-technical community members into the design and development of Web applications.

Analysis and Outcomes

In general, we received high ratings from non-technical members in terms of methodology ("Understanding of separation of concerns" 3.91, "Understanding how application was built" 4.36), and developers felt they were able to implement the requirements formulated by the non-technical members (4.64). Most non-technical members felt well integrated into the NRT development process (4.27), and the oral interviews revealed that they could follow the development process well. Although the question of whether non-technical members took an active role in the development process received the lowest score, the result is still rather high (3.82). From the developer survey, we received the highest ratings for questions regarding the concept of the CAE and its usability ("Understanding functionality" 4.79, "Understanding separation of concerns" 4.71). Collaborative aspects were also rated rather high by both groups. The oral interviews revealed that most developers felt both the need for requirement analysis improvements regarding the inclusion of non-technical stakeholders and that the CAE can be used for this purpose.

The evaluation showed the usefulness of the CAE to better integrate non-technical members into the development process. Developers saw the benefit of CAE's MDWE approach to contribute to a unified community application landscape. A particularly often requested feature was the introduction of a second abstraction tier for the frontend component view, which could hide overly technical aspects from non-technical members, concentrating more on the "visible" elements and putting the functionality into a second component view, which would then be used by the developers only. Another issue mentioned by the developers was the need to adjust the generated source code to fully reflect the requirements, and then having no possibility to return to the modeling environment, since the modified code would be overwritten when the code was regenerated by the framework. We used this feedback to start developing the Live Code Editor, as well as the Wireframing Editor, which both tackle this problem from different directions, but both use the same idea of providing different views on the same model.
Evaluation in a Lab Course

While we were developing the aforementioned live coding and wireframing extensions, we in parallel started to validate our MDWE process, with its modeling and development phases, over a longer period of time by studying the impact it has on Web developers [9,10]. Therefore, it was necessary to extend the CAE with automated deployment features, such that the created applications could be used in practice to validate their functionality.

Participants and Procedure

We evaluated our approach in a lab course of 15 undergraduate computer science students. The students had basic programming knowledge, in particular in Java (4.6) from their first programming lectures, but our pre-survey also indicated that none of them were really familiar with Web development (1.67) or microservice architectures (1.73). During a two-week period, the students were asked to model and deploy the basic framework of their lab course prototype.

Analysis and Outcomes

In this evaluation, we were especially interested in how the CAE can help developers that are not yet familiar with the present development environment. Our questionnaire thus focused on the learning effects the CAE has on developers that have to integrate into a new development process. Our results indicate a high learning effect in terms of understanding the underlying Web development concepts of microservices (4.43 vs 1.73) and frontend components (4.5 vs 2.6). We received rather high ratings in terms of MDWE easing the learning of new concepts and techniques (3.86) and MDWE improving the understanding of the generated application (3.71).

Problems occurring during this evaluation were mainly due to the use of an experimental prototype which was never tested in an environment with more than a handful of people using it at the same time. Boundary conditions and network latency problems led to a cycle of fixes, version incompatibilities and newly introduced problems. Even though this might have clouded the participants' impression of CAE use, it finally led to major technical improvements of our framework. These first results of a more realistic usage setting showed promising applications of the CAE as a tool to teach developers of different domains the development of Web applications with a p2p microservice architecture.

Live Code Editor Evaluation

Due to the feedback received in the previous two evaluations, we developed the Live Code Editor. On this, we performed a usability study with student developers to assess how it integrates into our collaborative MDWE methodology and how well it performs and is received in practice [12].

Participants and Procedure

We carried out eight user evaluation sessions, each consisting of two participants. After receiving a short introduction and filling out a pre-survey to assess their experience in Web development, the participants were seated in the same room and asked to extend an existing application, which consisted of two frontend components and two corresponding microservices. As expected, the pre-survey rating of the familiarity with Web technologies (4.00) was rather high. However, only a minority of our participants were familiar with MDWE (2.67) or had used collaborative coding for creating Web applications before (2.40). Each evaluation session took about 30 minutes of development time.
Analysis and Outcomes

The participants rated the connections between our two collaborative phases, namely the access to the code editor from the model (4.67) and the reverse process with the synchronization enabled (4.40), very high. Even though the chosen application was, due to the time constraints of a live evaluation setting, quite simple, the evaluation participants mostly saw cyclic development in general as relevant (4.13) and also rated the benefits of a cyclic MDWE process high (4.00). Moreover, all participants identified the advantages of code and model synchronization (4.33). We considered the results of this evaluation a proof of concept that the Live Code Editor technically fulfills its purpose as a way to integrate live coding into the cyclic development, mitigating the need to manually change the generated source code and thereby break the MDWE cycle.

Wireframing User Evaluation

After we successfully developed and integrated the Live Code Editor, we tackled the feedback gained in our evaluation with heterogeneous teams (see Sect. 7.2) and developed the Wireframing Editor. We then evaluated it to gain user feedback on how well it integrates into the process and how it changes the way applications are developed with the CAE [11].

Participants and Procedure

We recruited eight student developers as participants, who were split up into groups of two, resulting in four evaluation sessions that each lasted about 60 minutes. As in the Live Code Editor evaluation, the pre-survey revealed a high familiarity with Web development (4.50). Also, a rather high familiarity with MDWE concepts was observed (3.13), but wireframing editors were not very familiar to the participants (2.75). The participants were asked to develop a frontend for an already existing microservice backend. A specification of both the existing RESTful API and the desired Web frontend was handed out to the participants at the beginning of the session.

Analysis and Outcomes

With an average of 4.00, most participants found the wireframing editor easy to use, and with averages of 4.13 and 4.25, the participants both found their application reflected in the live preview widget as they had designed it and were aware of what their collaborator did in the wireframing editor. Also, participants mostly agreed that the modeling canvas and the wireframing view reflected the same model (4.25). With an average rating of 4.25, the majority of the participants considered the wireframing editor a useful extension for MDWE frontend development, and the integration of the wireframe into the process was understood quite well (4.38).

Activity Evaluation

To gain a deeper understanding of the collaboration process and working behavior of the participants, we monitored the participants' activities in the CAE.

Participants and Procedure

We monitored five sessions with two participants each. Each time a participant switched the widget, the time spent in the widget was logged. Furthermore, if a participant altered the wireframe, model or code, an additional event was logged. For each participant, the time spent in each editor was aggregated and the total number of altering activities a participant issued was counted.
Analysis and Outcomes

Table 1 depicts, for each participant, the relative time spent in a certain widget, as well as the number of absolute activities in this widget. A particular activity can be a create, move, resize, delete or attribute-change event of an element in the Wireframing Editor or Modeling Canvas, or a value-change activity in the Live Code Editor.

The results clearly indicate that the participants spent the most time in the Modeling Canvas. One explanation for this might be that the participants had to get familiar with the modeling language first, which corresponds with our observations that during the first few minutes participants did not use it productively, but experimented with different modeling elements until they were familiar with them. With an average of AVG_model = 188 activities, the modeling part was also the most demanding task. With an average of AVG_code = 13 activities, the coding task was less work intensive. One reason for that might have been that all participants had some development experience. The average number of activities to complete the wireframing task was quite low with AVG_wireframe = 10, and also the time spent in the Wireframing Editor was quite short compared to the time spent in the Modeling Canvas. The participants of the fifth session were not able to generate the code, which is why there is almost no activity and usage time recorded in the Live Code Editor.

These results indicate that the Wireframing Editor was easier to use and required less time to get familiar with, which was a major goal and a key requirement of the editor. Nevertheless, it has to be mentioned that the evaluation of time spent in each editor and the activity monitoring are tightly coupled with the evaluation task, and thus influenced by it.

Service Success Measurement Evaluation

Our final evaluation concerned the integration of the CAE in the overarching las2peer methodology (see also Sect. 4.3), with a special focus on the Requirements Bazaar integration and the usage of service success measurement modeling elements, which were then used for visualization in las2peer's monitoring and evaluation suite.

Participants and Procedure

We recruited thirteen participants from a university's computer science department and conducted thirteen evaluation sessions. Participants were given an existing Web service for image uploading, which had a certain, yet obvious, flaw: it contained an (artificial) delay in the upload process. The service itself was already developed with the CAE, and a corresponding Requirements Bazaar project existed that already documented the flaw. The workflow of the evaluation consisted of first reading the documentation of the issue in the Requirements Bazaar, and then reacting by first measuring the image uploading time via a modeled monitoring message extension and finally correcting it in the live code editor. The resulting improvement could then also be visually confirmed in las2peer's monitoring and evaluation suite.
Analysis and Outcomes

While the evaluation was concerned with several aspects of the monitoring and evaluation suite, three questions were posed to specifically justify the Requirements Bazaar and monitoring integration into the CAE. With an average score of 4.32, most participants found the Requirements Bazaar well integrated into the CAE. Our observation was that participants had no problems browsing the requirements connected to the modeling project directly from the CAE's interface. Another question to verify this was whether participants were able to distinguish the responsibilities of the CAE, the Requirements Bazaar and las2peer's evaluation suite. This was answered with an average score of 3.83, which, taking into account the short time the participants had to get familiar with the concept, is a clear sign that the responsibilities of the individual components were understood. Finally, we asked whether the combination of the three components was perceived as a way to make monitoring features more transparent to non-technical stakeholders, which was answered with an average score of 3.71. Taking these results together, we perceive the CAE as well integrated into the overarching methodology.

Figure 2 gives an overview of the different views, as well as their connections with each other and the code refinements/deployment. It can be split up into three main phases, namely the Modeling, the Coding and the Wireframing. Figure 3 depicts the wireframe visualization of the frontend component model; both the wireframe model and the conceptual model are used as input for the code generation to generate the code artifacts depicted in the Live Code Editor of Fig. 3c, and Fig. 3d shows a live preview.

Fig. 1 Design science methodology we followed in conducting this research
Fig. 2 Overview of the MDWE approach
Fig. 6 Mapping of the SUIT to the MDWE metamodel

Figure 7 shows a screenshot of a frontend component modeling space of the CAE. The Wireframing View is depicted in the upper right, the Modeling Canvas in the lower left and the Live Code Editor can be found in the lower right. The upper right depicts the Live Preview, the selected modeling element's Property Window, the Persistence Functionality and, on the far right, the Modeling Palette, Activity Widget and Requirements Bazaar Integration.
Fig. 7 Screenshot of the CAE

Wireframe and frontend component models are persisted next to each other, as the SUIT model enriches the HTML elements of the frontend component model with additional metadata and type-specific attributes. Thus, for code generation, a frontend component model is always required, while the SUIT model is optional. A Wireframe to Model Transformation and a Model to Wireframe Transformation were developed to transform the SUIT wireframe model to a frontend component model and vice versa. The two transformations are only needed if one of the two frontend component representations does not yet exist. After that, the two model states are kept synchronized by the Live Mapper.

Wireframe to Model Transformation

The wireframe to model transformation takes as input an instance of a wireframe model and the frontend component metamodel. The output of the transformation is a JSON object of the frontend component model. The implementation uses templates of a node, edge and attribute representation in JSON of the frontend component model. First, the transformation algorithm generates the 'Widget' node, which represents the root element of the frontend component model. Then, it recursively traverses the wireframe model and creates a corresponding 'HTML Element' node for each UI control element. The 'type'-attribute of the node is set to the value of the 'HTML Element' node name of the corresponding UI control element. Furthermore, the 'HTML Element' node is marked as static and the 'id'-attribute of the node is automatically generated. The value of the id is composed of the 'type'-attribute value and a unique identifier. The identifier of the UI control element is reused for the resulting node, which allows tracing an 'HTML Element' node back to the UI control element. This is necessary for the awareness features and the live mapper. For each node, a 'Widget to HTML Element' edge is generated, because each node has a connection to the 'Widget' root node. If the parent of the UI control element is not the root, additionally a 'hasChild' edge is added to the set of edges. This edge type denotes the hierarchical structure of the wireframe: it connects the parent UI control element to one of its child elements. If a UI control element has the 'shared'-tag assigned to it, the 'collaborative'-attribute of the corresponding 'HTML Element' node is set to true as well. Since the frontend component metamodel allows every HTML Element to be collaborative, the wireframing editor allows this as well. The result of this transformation is a valid instance of the frontend component metamodel. However, the HTML attributes specified for a certain UI control element are lost, because the frontend component metamodel does not offer a way to represent them. Additionally, the width, height and position of the UI control element in the wireframing editor are not related in any way to the position and dimension of the corresponding 'HTML Element' node. Therefore, it is necessary to apply an auto-layout for directed graphs to the model, so that it is displayed correctly in the modeling canvas.

Fig. 9 Trace model used for the live code editing
Load Balancing Approach in Cloud Computing

Cloud computing is a utility to deliver services and resources to the users through high-speed internet. It has a number of types, and the hybrid cloud is one of them. Delivering services in a hybrid cloud is an uphill task. One of the challenges associated with this paradigm is the even distribution of load among the resources of a hybrid cloud, often referred to as load balancing. Through load balancing, resource utilization and job response time can be improved. This leads to better performance results. Energy consumption and carbon emission can also be reduced if the load is evenly distributed. Hence, in this paper we have conducted a survey of load balancing algorithms in order to compare the pros and cons of the most widely used ones.

Cloud computing

Cloud computing is a utility to deliver services and resources to the users through high-speed internet [1]. It has gained immense popularity in recent years. These cloud computing services can be used at the individual or corporate level [2,3]. Cloud computing can be summarized as a model that gives access to a pool of resources with minimal management effort [4].

Types of clouds

Clouds can be classified as private, public and hybrid [6,7,8] on the basis of their architecture. Cloud computing provides three types of services:
1. Infrastructure as a Service (IaaS), which provides the infrastructure a user demands, like routers.
2. Software as a Service (SaaS), which delivers software services like Google Apps.
3. Platform as a Service (PaaS), which, as the name suggests, provides platforms for program development, for example Google's App Engine [5].

Private cloud: A cloud used only within an enterprise is referred to as a private cloud [6]. It can also be addressed as an internal cloud [8]. Private clouds are managed by the organization itself.

Public cloud: A cloud that is made available to users around the globe through Internet access is called a public cloud [6]. Organizations providing such cloud services include Google Docs [9], Microsoft's Windows Azure Platform [10], Amazon's Elastic Compute Cloud and Simple Storage Services [11], and IBM's Smart Business Services [12].

Hybrid cloud: A union of private and public clouds forms another type of cloud, referred to as a hybrid cloud. As one part of it is private, it is considered to be more secure, but designing a hybrid cloud is a challenging job because of the complexities involved in the design phase [8]. The major issues linked with hybrid clouds are those of interoperability and standardization [13]. They are costly compared to the aforementioned types, but they have their best features combined [14].

Benefits of hybrid clouds

The hybrid cloud model provides a seamless integration of public and private infrastructure, which allows the use of public resources when local resources run out. The term normally used to refer to this state is "cloud bursting". An elastic environment is created this way. Benefits of hybrid clouds listed in [15,16] include optimal resource utilization, risk transfer, availability, reduction in hardware cost and better QoS.

Load balancing has always been given prime importance in the cloud environment. Lately, researchers have started expanding this idea to hybrid clouds as well, in order to balance load at peak times while meeting the promised QoS and SLAs.

Load balancing strategies for clouds

Load balancing algorithms can be broadly categorized into static and dynamic load balancing algorithms.

Static load balancing algorithms: Gulati et al.
[24] claimed that in the cloud environment a lot of work has been done on load balancing with homogeneous resources, while research on load balancing in heterogeneous environments is also coming under the spotlight. They studied the effect of the round robin technique with a dynamic approach by varying host bandwidth, cloudlet length, VM image size and VM bandwidth. Load is optimized by varying these parameters. CloudSim is used for this implementation.

Dynamic load balancing algorithms: A hybrid load balancing policy was presented by Shu-Ching et al. [25]. This policy comprises two stages: 1) a static load balancing stage and 2) a dynamic load balancing stage. It selects a suitable node set in the static load balancing stage and keeps a balance of tasks and resources in the dynamic load balancing stage. When a request arrives, a dispatcher sends out an agent that gathers node information like remaining CPU capacity and memory. Hence the duty of the dispatcher is not only to monitor and select effective nodes but also to assign tasks to the nodes accordingly. Their results showed that this policy can provide better results in comparison with min-min and minimum completion time (MCT), in terms of overall performance.

Another algorithm for load balancing in the cloud environment is ant colony optimization (ACO) [26]. This work proposed a modified version of ACO. Ants move in forward and backward directions in order to keep track of overloaded and underloaded nodes. While doing so, ants update the pheromone, which keeps the nodes' resource information. The two types of pheromone updates are 1) the foraging pheromone, which is looked up when an underloaded node is encountered in order to find the path to an overloaded node, and 2) the trailing pheromone, which is used to find the path towards an underloaded node when an overloaded node is encountered. In the original algorithm, ants maintained their own result sets which were combined at a later stage, but in this version these result sets are continuously updated. This modification helps the algorithm perform better.

The genetic algorithm [27] is also a nature-inspired algorithm. It is modified by Pop et al. [28] to make it a reputation-guided algorithm. They evaluated their solution by taking load balancing as a way to calculate the optimization offered to providers and makespan as a performance metric for the user.

Another such algorithm is the bees life algorithm (BLA) [29], which is inspired by bees' food searching and reproduction. This concept is further extended to specifically address the issue of load balancing in [30]. The honey bee behavior inspired load balancing (HBB-LB) algorithm manages the load across different virtual machines for increasing throughput. Tasks are prioritized so that the waiting time is reduced when they are aligned in queues. The honey bee foraging behavior and some of its variants are listed in [31].

Interoperability and portability: these concern moving an application from one cloud to another, which becomes a possibility only if the dependencies are removed.

Cost: In these environments, on the one hand, private infrastructure is to be managed, while on the other hand, you are charged on a pay-per-use basis for using the public resources. This makes predicting the overall cost an uphill task.
Security: For using public cloud resources, certain SLAs are settled first and a lot of trust is placed in the public cloud. Additional security measures are to be taken along with the company's firewalls. That is why security is one of the primary concerns in a hybrid cloud environment. To ensure secure computation, some security issues are given prime importance. The list includes identification and authentication, authorization, and confidentiality, etc. [18].

Reliability: As the communication between a private and public cloud occurs through a network connection, the availability of the connection is again an issue, as connections often break. Whether these connections are secure, and whether the migration of tasks to the public cloud would actually help reduce the response time, are questions that need to be addressed. So ensuring reliability is another challenge.

Monitoring: Organizations monitor cloud services to ensure the performance is not compromised in any situation. In hybrid clouds, along with monitoring the private cloud, public clouds also need to be monitored.

Denial of service: Another challenge inspected by researchers is denial of service (DoS) in cloud computing environments. As in normal clouds and even in hybrid environments resources are allocated dynamically, how these clouds would respond to a DoS attack is a question given a lot of importance in recent years. In hybrid clouds, if resources are not available to the executing tasks, those tasks are forwarded to the public clouds, but under a DoS attack the strategy discussed would not be a feasible solution. Finding a solution to this problem is a burning challenge for researchers.

Load balancing: Load balancing is also one of the main challenges faced in hybrid cloud computing, as there is a need for an even and dynamic distribution of load between the nodes in private and public clouds.

In distributed systems, load balancing is defined as the process of distributing load among various nodes to improve the overall resource utilization and job response time. While doing so, it is made sure that nodes are not loaded heavily, left idle or assigned fewer tasks than their capacity allows [19]. It is ensured that all the nodes are assigned almost the same amount of load [20].

If resources are utilized optimally, the performance of the system will automatically increase. Not only this, the energy consumption and carbon emission will also reduce tremendously. Load balancing also reduces the possibility of bottlenecks which occur due to load imbalance. Furthermore, it facilitates efficient and fair distribution of resources and helps in the greening of these environments [21,22].

Load balancing algorithms are classified into categories for ease of understanding. That helps in identifying a suitable algorithm in the time of need. A detailed view of the classification is presented below [23].
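To make the static/dynamic distinction of this classification concrete, here is a toy contrast in Python: a static round-robin dispatcher ignores node state, while a dynamic least-loaded dispatcher, loosely in the spirit of the agent-based policy of Shu-Ching et al. discussed above, consults the nodes' remaining capacity first. This is purely illustrative and not taken from any of the surveyed papers.

```python
import itertools

class Node:
    def __init__(self, name, capacity):
        self.name, self.capacity, self.load = name, capacity, 0

def round_robin(nodes):
    """Static: fixed cyclic order, no runtime information used."""
    cycle = itertools.cycle(nodes)
    def dispatch(task_cost):
        node = next(cycle)
        node.load += task_cost
        return node.name
    return dispatch

def least_loaded(nodes):
    """Dynamic: pick the node with the most remaining capacity."""
    def dispatch(task_cost):
        node = max(nodes, key=lambda n: n.capacity - n.load)
        node.load += task_cost
        return node.name
    return dispatch

nodes = [Node("n1", 10), Node("n2", 5)]
rr = round_robin(nodes)
print([rr(1) for _ in range(3)])   # -> ['n1', 'n2', 'n1'], fixed order
for n in nodes:
    n.load = 0                     # reset loads for the second policy
ll = least_loaded(nodes)
print([ll(3) for _ in range(3)])   # -> ['n1', 'n1', 'n2'], heeds capacity
```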
Related Work

With the emergence of hybrid clouds, the idea of balancing the load between the public and private clouds has gained immense popularity. That is why a lot of research is now being conducted to facilitate this.

A comparative study of different load balancing algorithms is presented in [32]. Load balancing is not only required for meeting users' satisfaction, but it also helps in proper utilization of the available resources. The metrics that are used for evaluating different load balancing technologies are: throughput, associated overhead, fault tolerance, migration time, response time, resource utilization, scalability, and performance. According to this study, in the honeybee foraging algorithm, throughput does not increase with the increase in system size. Biased random sampling and active clustering do not work well as the system diversity increases. OLB + LBMM shows better results than the algorithms listed so far in terms of efficient resource utilization. The Join-Idle-Queue algorithm can show optimal performance when hosted for web services, but there are some scalability and reliability issues that make its use difficult in today's dynamic-content web services. They further added that the min-min algorithm can lead to starvation. They concluded that one can pick any algorithm according to one's needs. There is still room for improvement in all of these algorithms to make them work more efficiently in heterogeneous environments while keeping the cost to a minimum. A somewhat similar analysis of load balancing algorithms is presented by Daryapurkar et al. [33] and Rajguru and Apte [34] as well. A comparison of different scheduling algorithms for hybrid clouds by Bittencourt et al. [35] highlights that the makespan of these algorithms widely depends on the bandwidth provided between the private and public clouds. The channels are usually part of the internet backbone and their bandwidth fluctuates immensely. This makes the designing of communication-aware algorithms quite challenging.

Load Balancing Strategies in Hybrid Clouds

Zhang et al. [36] proposed a design for hybrid clouds. It allows intelligent workload factoring by dividing the workload into base load and trespassing load. When a system goes into panic mode, the excess load is passed on to the trespassing zone. A fast frequent data item detection algorithm is used for this purpose. The design makes use of the least-connections balancing algorithm as well as the round-robin balancing algorithm. Their results show that there is a decrease in annual bills when hybrid clouds are used. Buyya et al. [37] proposed a concept of a federated cloud environment to maintain the promised QoS even when the load shows a sudden variation. It supports dynamic allocation of VMs, databases, services and storage. That allows an application to run on clouds from different vendors. In social networks like Facebook, load varies significantly from time to time; for such systems this facility can help scale the load dynamically. No cloud infrastructure provider can have data centers all around the globe. That is why, to meet QoS, any cloud application service provider has to make use of multiple cloud providers. For implementation purposes they used the CloudSim toolkit. They made a comparison between federated and non-federated cloud environments. Their results showed a considerable gain in performance in terms of response time and cost in the case of the former: the turnaround time is reduced by 50% and the makespan improves by 20%. Although the overall cost increases with the increase in public cloud utilization, one has to consider that such peak loads are faced only occasionally, which makes it acceptable.
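A minimal sketch of the workload-factoring idea attributed to Zhang et al. above: base load stays on the private cloud and, once a panic threshold is crossed, the excess ("trespassing") load bursts to the public cloud. The threshold value and the interfaces are assumptions made for illustration.

```python
PANIC_THRESHOLD = 0.8   # fraction of private capacity (illustrative value)

def factor_workload(tasks, private_capacity):
    """Split incoming task costs into (private, public) batches."""
    used, private, public = 0.0, [], []
    for cost in tasks:
        if (used + cost) / private_capacity <= PANIC_THRESHOLD:
            used += cost
            private.append(cost)   # base load stays on the private cloud
        else:
            public.append(cost)    # panic mode: burst to the public cloud
    return private, public

base, burst = factor_workload([2, 3, 2, 4, 1], private_capacity=10)
print(base, burst)   # [2, 3, 2, 1] stay private, [4] bursts to public
```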
Task scheduling plays a vital role in solving the optimization problem in hybrid clouds. A graph-based task scheduling algorithm is proposed by Jiang et al. for this purpose [38]. In order to reduce the cost to a minimum value, like other algorithms, it makes use of public resources along with the private infrastructure. The key stages of this algorithm are: 1) resource discovery and filtering, for the collection of the status information of the resources that are discovered; 2) resource selection, which is this algorithm's main focus, as this is the decision-making stage where resources are picked keeping in view the demands of the tasks to be performed; and 3) task submission, where, once the resources are selected, the tasks are assigned accordingly. A bipartite graph G = (U, V, E) is used to help elaborate this concept, where U is used for private or public virtual machines, V is for the tasks, and E denotes the edges in between. Cloud Report and CloudSim 3.0 are used for evaluating this algorithm. Their results showed a 30% decrease in cost as compared to a non-hybrid environment. For improving these figures even more, disk storage and network bandwidth need to be considered as well.

Another algorithm for the hybrid cloud, adaptive-scheduling-with-QoS-satisfaction (AsQ) [39], is proposed that reduces the response time and helps increase resource utilization. To fulfill this goal, several fast scheduling strategies and runtime estimations are used, and resources are then allocated accordingly. If resources are used optimally in the private cloud, the need for transferring tasks to the public cloud decreases and deadlines are fulfilled efficiently; but if a task is transferred to the public cloud, a minimum-cost strategy is used so that the cost of using the public cloud can be reduced. The size of the workload is specially considered in this regard. Their results show that AsQ performs better than recent algorithms of a similar nature in terms of task waiting, execution and finish time. Hence it provides better QoS.

Picking the best resources from the public cloud is a serious concern in hybrid clouds. The Hybrid Cloud Optimized Cost (HCOC) algorithm [40] is one such scheduling algorithm. It helps in executing a workflow within the desired execution time. Their results have shown that it reduces the cost while meeting the desired goals, and it gives better results in comparison with other greedy approaches. There is another approach [41] which also deals with directed acyclic graphs (DAGs), as in the study by Bittencourt and Madeira [40]. It uses an integer linear program (ILP) for workflow scheduling in SaaS/PaaS clouds with two levels of SLA, one with the customer and one with the provider. This work can be extended by taking multiple workflows and fault tolerance into consideration.

Gupta et al. [42] contributed that there are a number of load balancing algorithms that help in avoiding situations where a single node is loaded heavily and the rest are either idle or have fewer tasks than they can in reality afford to deal with. But what is overlooked in most of these algorithms is the trust and reliability of the datacenter. A suitable trust model and a load balancing algorithm are proposed. They used VMMs (Virtual Machine Monitors) to generate trust values; on the basis of these values, nodes are selected and the load is balanced.
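The cost-aware selection step that Jiang et al., AsQ and HCOC share can be sketched as a greedy rule: for each task, lease the cheapest resource that still meets its (sub-)deadline, preferring free private capacity. This simplification and the resource catalogue below are our assumptions; none of the cited papers publish exactly this rule.

```python
RESOURCES = [  # (name, speed in work-units/s, $ per second, private?)
    ("priv-vm", 1.0, 0.00, True),
    ("pub-small", 1.0, 0.02, False),
    ("pub-large", 4.0, 0.10, False),
]

def pick_resource(work, sub_deadline, private_busy):
    """Cheapest resource whose runtime fits the sub-deadline."""
    candidates = []
    for name, speed, price, private in RESOURCES:
        if private and private_busy:
            continue                      # private capacity already taken
        runtime = work / speed
        if runtime <= sub_deadline:
            candidates.append((price * runtime, runtime, name))
    if not candidates:
        raise RuntimeError("sub-deadline cannot be met")
    return min(candidates)                # minimize monetary cost

print(pick_resource(work=8, sub_deadline=10, private_busy=False))
# (0.0, 8.0, 'priv-vm'): the free private VM meets the sub-deadline
print(pick_resource(work=8, sub_deadline=3, private_busy=False))
# (0.2, 2.0, 'pub-large'): only the fast public VM meets it
```

The same rule applied level by level with per-level sub-deadlines mirrors the level-based workflow scheduling discussed next.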
A virtual infrastructure management tool is offered by Hoecke et al. [43], which helps to set up and manage hybrid clouds in an efficient way. This tool automatically balances load between the private and public clouds. It works at the virtual machine level. The tool has two parts: 1) a proxy, where different load balancing algorithms, like weighted round robin, are implemented and requests are forwarded to appropriate VMs; and 2) a management interface that visualizes the hybrid environment and also manages it; for example, it can start and stop VMs, form clusters of VMs, and manage the proxy remotely. It can be improved further by using a more efficient algorithm on the proxy for balancing load in a more convenient way.

In workflow applications [44], the cost of execution is kept to a minimum level by allocating the workflow to a private cloud, but in case of peak loads, resources from the public cloud need to be considered as well, since meeting the deadlines is a primary concern in workflow applications. By using cost optimization, this algorithm decides which resources should be leased from the public cloud for executing the task within the deadline. In this algorithm, the workflow is divided into levels and scheduling is performed on each level. It uses the concept of sub-deadlines as well. That helps in finding the best resources in the public cloud in terms of cost while keeping in view that the workflows are executed within the deadlines. Although the makespan of the level-based approach is 1.55 times higher than that of the non-level-based approach, its cost is three times lower. In comparison with min-min, its makespan is double but it costs three times less. This makes the proposed level-based approach better, as it costs less and meets the deadlines too; although its makespan is higher, it finishes the assigned tasks within the deadline.

Conclusion

Cloud computing is a utility to deliver services and resources to the users through high-speed internet. It has a number of types, and the hybrid cloud is one of them. As one part of it is private, it is considered to be more secure, but designing a hybrid cloud is a challenging job because of the complexities involved. Some benefits of hybrid clouds are optimal resource utilization, risk transfer, availability, reduction in hardware cost and better QoS. However, many challenges are also associated with hybrid clouds, as elaborated. Some of them are interoperability and portability, cost, security, reliability, monitoring, denial of service, and load balancing.
Load balancing algorithms can be broadly categorized into static and dynamic load balancing algorithms. A comparative study of different load balancing algorithms is presented. Load balancing is not only required for meeting users' satisfaction, but it also helps in proper utilization of the available resources. The metrics that are used for evaluating different load balancing technologies are: throughput, associated overhead, fault tolerance, migration time, response time, resource utilization, scalability, and performance. According to this study, in the honeybee foraging algorithm, throughput does not increase with the increase in system size. Different load balancing strategies in the hybrid cloud are also discussed. A concept of a federated cloud environment is proposed to maintain the promised QoS even when the load shows a sudden variation. Task scheduling plays a vital role in solving the optimization problem in hybrid clouds. Another algorithm, adaptive-scheduling-with-QoS-satisfaction (AsQ), for the hybrid cloud is proposed that reduces the response time and helps increase resource utilization.

Figure 2: Comparison of the existing load balancing techniques.
On-Chip Glass Microspherical Shell Whispering Gallery Mode Resonators

Arrays of on-chip spherical glass shells of hundreds of micrometers in diameter with ultra-smooth surfaces and sub-micrometer wall thicknesses have been fabricated and have been shown to sustain optical resonance modes with high Q-factors of greater than 50 million. The resonators exhibit a temperature sensitivity of −1.8 GHz K⁻¹ and can be configured as ultra-high-sensitivity thermal sensors for a broad range of applications. By virtue of the geometry's strong light-matter interaction, the inner surface provides an excellent on-chip sensing platform that truly opens up the possibility for reproducible, chip-scale, ultra-high-sensitivity microfluidic sensor arrays. As a proof of concept, we demonstrate the sensitivity of the resonance frequency as water is filled inside the microspherical shell and is allowed to evaporate. By COMSOL modeling, the dependence of this interaction on glass shell thickness is elucidated and the experimentally measured sensitivities for two different shell thicknesses are explained.

Whispering gallery mode (WGM) resonances in optical cavities have been studied for more than a century, since the interaction of electromagnetic waves with dielectric spheres was first observed in the early 1900s [1,2]. Since the first experimental observation in the 1960s [3], WGM optical resonances have been demonstrated within several structures with an axis of rotational symmetry, such as microdroplets [4,5], microtubes [6-8], microbottles [9,10], microspheres [11-13], microrings [14,15], microdiscs [16,17], microbubbles [18,19], and microtoroids [20]. A WGM relies upon total internal reflection at the cavity interface. To induce a resonance mode, an adiabatically tapered fiber is placed in close proximity to the resonator structure to evanescently couple the light. A large refractive index contrast between the cavity and the surrounding medium strongly confines the WGMs, resulting in resonances with very high Q-factors of 10^7-10^9 [21,22]. Conversely, a low refractive index contrast facilitates extension of the modal profile beyond the confines of the resonator medium, allowing the optical radiation to interact with the surrounding medium and thus enabling sensor designs with exceptionally high sensitivity, albeit at the expense of the Q-factor. In general, changes in either the cavity geometry or the refractive index contrast between the cavity and surrounding medium perturb the resonance characteristics of the confined optical modes and can be used for sensing applications.

The extreme level of sensitivity afforded by WGM resonators has elicited intense research in realizing sensors based on these structures [23]. To date, two kinds of WGM optical resonator configurations have been explored: (i) microsphere, microbottle, and microbubble structures formed by individually melting or machining and polishing fibers and capillaries of suitable dielectric materials to form highly smooth and axisymmetric structures; and (ii) on-chip microfabricated microring, microdisk, and microtoroid structures from suitable dielectric materials. Unlike solid structures such as spheres and discs, hollow structures such as cylindrical and spherical shells have two surfaces and offer the advantage of coupling the light through the outer surface, whereas the inner surface can be engineered to induce perturbations for sensing.
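For orientation, a back-of-the-envelope Python sketch of the quantities involved: the ray-optics resonance condition 2πr·n_eff ≈ mλ gives the mode wavelengths, and the Q-factor relates resonance frequency to linewidth. The numbers (r = 300 µm, n_eff = 1.47, the 1550 nm band, Q = 5×10⁷) are illustrative assumptions, not values from this paper's measurements.

```python
import math

C = 299_792_458.0   # speed of light, m/s

def wgm_wavelength(radius_m, n_eff, m):
    """Wavelength of azimuthal mode number m (ray-optics approximation)."""
    return 2 * math.pi * radius_m * n_eff / m

r, n_eff = 300e-6, 1.47
m = round(2 * math.pi * r * n_eff / 1550e-9)   # mode number nearest 1550 nm
lam = wgm_wavelength(r, n_eff, m)
fsr = C / (2 * math.pi * r * n_eff)            # free spectral range, Hz
linewidth = (C / lam) / 50e6                   # linewidth for Q = 5e7
print(f"m={m}, lambda={lam*1e9:.2f} nm, FSR={fsr/1e9:.1f} GHz, "
      f"linewidth={linewidth/1e6:.2f} MHz")
```

The MHz-scale linewidth this yields for Q ≈ 5×10⁷ illustrates why GHz-per-Kelvin frequency shifts, as reported in the abstract, translate into very fine temperature resolution.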
Microtube- and microbottle-based sensors have been reported in a configuration commonly known as optofluidic ring-resonator (OFRR) sensors 8,24,25, where the analyte fluid interacts with the optical resonance through the inner surface of the shell. However, until now all OFRR sensors have been fabricated by glass-blowing techniques from individual capillaries, where the physical characteristics of these structures are not easily controlled or reproducible. On the other hand, on-chip microring-, microdisc-, and microtoroid-based sensors are able to leverage the reproducibility afforded by microfabrication techniques and the economy of wafer-scale manufacturing methods. However, in these resonators it is much harder to achieve a clean interface with the fluidic analyte medium, since in most typical configurations both the resonators and the tapered fiber are exposed to these fluids. Recently, chip-scale glass-blowing techniques have been demonstrated to create hemispherical and toroidal structures from borosilicate glass and fused silica 26,27. These structures consist of glass microspherical shells with radii ranging from 0.1 mm to >1 mm and are proposed here for use as WGM resonator structures. What is really significant is that these structures are highly reproducible and can be integrated with on-chip microfluidics to achieve high-performance WGM OFRR structures for sensing applications. Furthermore, the thickness of the on-chip microspherical shell structures can be precisely tailored to achieve optimal interaction with the fluid within, while maintaining a very high Q-factor for the optical resonance. These glass microspherical shell structures are also ideally suited for cavity optomechanical applications, for sensitive detection of mechanical motion, and for quantum optomechanical experiments 28, which can be further enhanced by operating these systems at exceptional points 29,30. Hence, the microspherical shell structures can be utilized for a multitude of optical-resonance-based sensing applications, including motion, temperature, pressure, and (bio)chemicals. In this paper, we demonstrate the first chip-scale borosilicate glass microspherical shell optical resonators with high Q-factors fabricated by glass-blowing techniques and demonstrate the potential of these structures for on-chip sensing applications. We present a model for microspherical shells with diameters ranging from 230 µm to 1.2 mm and shell thicknesses of 300 nm to 10 μm. Figure 1 shows an array of the fabricated on-chip glass microspherical shells with their equatorial planes above the substrate. The on-chip integration of highly symmetric, smooth-surfaced, closed spherical shell structures can allow the realization of WGM-based in-line microfluidic (bio)chemical sensors where the analyte fluid interacts with the optical resonance through the inner surface of the shell. Here we demonstrate and model the thermal sensing capability of the glass microspherical shell resonator. Furthermore, we show a proof-of-concept liquid-core sensor by sensing the index of refraction change from water to air and confirm the phenomenon with a model.
Methods
The glass microbubbles were fabricated on 500 µm thick silicon substrates. First, circular features were patterned using positive photoresist, and the silicon was etched to a depth of h_eSi = 250 µm using a deep silicon etching process to realize cylindrical cavities, as schematically shown in Fig. 2(a).
Second, a Corning® 7740 borosilicate glass wafer was optionally patterned with circles smaller than those on the silicon using positive photoresist, and 4 µm of nickel was electroplated as an etch mask. After removal of the photoresist in acetone, the borosilicate wafer was etched to a depth of h_eG using a modified ICP-RIE high-aspect-ratio glass etch process 31. Thereafter, the nickel, chrome, and gold layers were stripped from the borosilicate wafer using wet etchants, resulting in the cross-sectional profile shown in Fig. 2(b). The etched silicon and the optionally etched borosilicate glass wafers were aligned to give concentric circles and anodically bonded at a pressure of 1.35 atmospheres (1026 Torr) at 400 °C to form the bonded cavity shown in Fig. 2(c). The bonded wafer was diced into chips, and the borosilicate layer was thinned down to a total thickness of t µm from the un-etched side in 49% hydrofluoric acid, as shown in Fig. 2(d). The bonded chip was thereafter heated on a silicon nitride ceramic heater to a temperature of 775 °C in a vacuum oven maintained at 0.13 atmosphere (100 Torr) for 45 seconds and was rapidly cooled down to ambient temperature. At 775 °C, the borosilicate glass softens and begins to expand outwards into a spherical shell in response to the high pressure created within the sealed cavity at this elevated temperature and the external vacuum pressure 26. The blown glass microspherical shell is schematically illustrated in Fig. 2(e). While the dicing step can be performed after the glass blowing step and the entire process can be done at wafer level, in this work we fabricated the glass microbubbles at chip scale due to the small heater used. However, if the bubbles are blown at wafer scale, extreme care must be exercised during the dicing step to prevent contamination of the microspherical shell surface with particulate or other dicing-related debris and residues.
Results and Discussion
Chip-Scale Microspherical Shells. Referring to Fig. 2, the final height h_g that the sphere develops is a function of the heater temperature T_f (in kelvin), the pressure in the vacuum oven P_f, the pressure P_s and temperature T_s (in kelvin) at which the cavity is sealed, the etched depths h_eSi and h_eG, and the radii r_0Si and r_0G of the etched cavities in the silicon and glass wafers, respectively, and is given by eq. (1) of ref. 26. The radius of the glass microsphere r_g can then be calculated from eq. (2). The sphericity Ψ of the blown glass microspherical shells is defined 32 as Ψ = π^(1/3)·(6V_g′)^(2/3)/A_g, where V_g′ and A_g are the effective volume and surface area, respectively, of the glass microspherical shell region above the top surface of the glass substrate and are expressed in terms of (h_g|exp − t), r_g|exp, and t in eqs (5) and (6). Table 1 lists the calculated height h_g and radius r_g and the experimentally measured values h_g|exp and r_g|exp of the various glass microspherical shells. A fairly good agreement between the calculated and experimental values of the heights and radii of the blown glass microspherical shells is found, with a maximum error of ~20%. A numerical sketch of the sphericity estimate is given below.
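Since eqs (2), (5), and (6) are not reproduced above, the following Python sketch shows one plausible reading of the sphericity calculation, under two assumptions of ours: the shell region above the substrate is treated as a spherical cap of height h_g|exp − t on a sphere of radius r_g|exp, and the cap's surface area includes its flat base disk (so that a complete sphere gives Ψ = 1). The numerical example at the end is hypothetical, not a shell from Table 1.

    import math

    def cap_volume(r, h):
        """Volume of a spherical cap of height h cut from a sphere of radius r."""
        return math.pi * h**2 * (3.0 * r - h) / 3.0

    def cap_area(r, h):
        """Surface area of the cap: curved part plus the flat base disk.
        Including the base disk is our assumption standing in for eq. (6)."""
        a2 = h * (2.0 * r - h)          # squared radius of the base circle
        return 2.0 * math.pi * r * h + math.pi * a2

    def sphericity(r_g, h_g, t):
        """Wadell sphericity of the shell region above the substrate,
        assuming a spherical cap of height (h_g - t) on a sphere of radius r_g."""
        h = h_g - t
        V = cap_volume(r_g, h)
        A = cap_area(r_g, h)
        return math.pi ** (1.0 / 3.0) * (6.0 * V) ** (2.0 / 3.0) / A

    # Sanity check: a complete sphere (h = 2r) must give psi = 1.
    assert abs(sphericity(r_g=1.0, h_g=2.0, t=0.0) - 1.0) < 1e-12

    # Hypothetical shell: radius 300 um, height 550 um above substrate, t = 50 um.
    print(f"psi = {sphericity(300e-6, 550e-6, 50e-6):.3f}")   # ~0.97

Under these assumptions, near-complete shells naturally give Ψ slightly below 1, consistent with the 0.985-0.996 range reported below for the most spherical shells.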
The calculated sizes of the microspherical shells are a sensitive function of the temperature and pressure at which these structures are sealed and blown. The experimental pressure and temperature are measured as global parameters at the system level of the wafer bonder, heater, and vacuum oven. Uncertainties in the actual temperatures and pressures at the individual microspherical shell level, relative to the global parameter values used in the calculations, are considered to be the main reason for the observed discrepancy between the calculated and observed dimensions of the microbubbles. However, it must be emphasized that the process is highly reproducible across a single chip. For example, on a 1 cm × 1 cm chip on which more than 20 microspherical shells were blown simultaneously, the measured diameters of eight microscope-viewable shells were 1001.84 ± 5.74 μm, i.e., a dimensional variation of ~0.5%. The position of the equatorial plane of the glass microspherical shell is critical to obtaining WGM optical resonance. The optical modes are localized at the equatorial plane and are sustained only when the equatorial plane is above the substrate, with minimal coupling loss to the substrate. In our initial experiments, the bonded silicon-glass substrates with sealed cavities were heated at ambient atmospheric pressure to blow the glass bubbles, resulting in hemispherically shaped shells. In these devices no optical resonance was obtained, due to significant loss into the substrate. This situation was remedied by performing the glass blowing step in a vacuum ambient rather than at atmospheric pressure. The vacuum ambient during the glass blowing step raises the pressure difference relative to the sealed cavity pressure and enhances the expansion of the microspherical shell volume, so that near-spherical structures develop with the equatorial plane located above the substrate for all shell sizes, as shown in the last column of Table 1. The sphericity of the blown glass microspherical shells quantifies the relative height of the equatorial plane with respect to the glass substrate, regardless of shell size. Sphericities in the range of 0.985-0.996 were measured for shells #4-#7, which indicates that near-spherical glass shells were achieved in this work. Smaller sphericities were observed for glass microspherical shells #1-#3, blown out of thicker glass substrates. The excess material in these thicker glass substrates was observed to result in a lateral expansion at the shell base during the glass reflow process. This visible lateral expansion at the microspherical shell base could be eliminated by reducing the thickness of the bonded glass layer, which resulted in near-spherical bubbles. Following the optical resonance measurements, microspherical shells were cleaved at the equatorial plane, and the sidewall thicknesses were measured using a scanning electron microscope (SEM). For glass microspherical shells blown from a 100 µm thick glass layer (#1-#3), the shell wall thickness ranged from 6.7 µm to 8.6 µm, whereas reducing the glass substrate thickness to 50 µm (shells #4-#6) resulted in wall thicknesses of 1.1 µm to 2.2 µm. Plasma etching of the glass substrate for microspherical shells #7-#9, followed by subsequent thinning of the glass substrates to realize even thinner glass regions of 30 µm, resulted in either spherical or vertically elongated shells, depending upon the radius and the enclosed cavity volume. Based on the volumetric redistribution of the glass covering the cavity opening into the spherical shell, the shell wall thickness can be estimated and agrees well with the measured thicknesses for all microspherical shells. For the etched glass substrates with a glass thickness of 30 µm, shells with wall thicknesses as small as 300 nm were obtained.
Furthermore, if a microspherical shell was overblown and split open at the top, e.g. the broken shell seen in the background of the image of shell #7, optical resonance could still be sustained, so long as the remaining structure presented a spherical profile at the equatorial plane. Thus, through accurate control of (i) the etched cavity geometries and dimensions, (ii) the glass substrate thickness via micromachining, and (iii) the sealing and blowing conditions, the wafer-level glass blowing process can be customized to achieve glass microspherical shells of various sizes, sphericities, and wall thicknesses. The ultra-smooth surfaces obtained through the glass reflow process are ideally suited for sustaining ultrahigh-Q optical resonances.
Characterization of Optical Resonance in Microspherical Shells. The experimental set-up used for characterizing optical resonance in the glass microspherical shells is shown in Fig. 3(a). The excitation source consists of a tunable 760 nm laser (Thorlabs, TLK-L780M). The laser tuning was driven by a triangle wave at 10 Hz, corresponding to a 15 GHz (Δλ = 28.87 pm) scan about the center wavelength of 760 nm. The light was evanescently coupled to the resonator via a tapered optical fiber. The fiber was fabricated by placing a hydrogen torch at the middle of the fiber while pulling at a constant rate from both ends. The polarization of the incident laser was adjusted using a fiber polarization controller to optimize the coupling efficiency. After passing the fiber taper adjacent to the resonator, the transmitted light was monitored using a photodiode (Thorlabs DET36A). Excitation of the resonance modes sustained in the equatorial plane of the glass microspherical shells manifests as dips in the transmission spectrum. The full width at half maximum (FWHM) of the transmission dips indicates the quality factor of the resonance. Figure 3(b) shows an optical micrograph of microspherical shell #9 with a mode in which the light is confined to the equatorial plane of the microspherical shell. Optical resonance modes in WGM resonators occur when the coupled light can constructively interfere with itself by completing an integral number of cycles for each revolution around the shell's equatorial circle. Assuming that the mode is tightly confined within the resonator medium, for a laser wavelength λ, the condition for WGM resonance in a dielectric annulus of radius r can be expressed as 2πn_eff·r = mλ, where n_eff is the mode index and m is the azimuthal mode number, corresponding to the integral number of orbital wavelengths 33. Table 2 lists the physical dimensions of the various microspherical shells studied in this work, the corresponding experimentally measured highest Q-factor, and the various resonance parameters calculated through COMSOL® finite element simulation of the optical resonance characteristics. The azimuthal mode number m is calculated from 2πn_eff·r = mλ with λ = 760 nm, using n_r = 1.467 for borosilicate glass to estimate n_eff. Eigenfrequencies f_nml were simulated using the azimuthal mode number m, the refractive index of bulk borosilicate glass n_r, and the shell geometry. The effective refractive index can then be expressed as n_eff = mc/(2πr·f_nml), where f_nml is the simulated resonance frequency of the fundamental TE mode and c is the speed of light in vacuum.
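As a quick numerical cross-check of these two relations (our illustration, not the authors' code), the values quoted in the text for shell #5 can be substituted directly:

    import math

    c = 299_792_458.0              # speed of light in vacuum, m/s

    # Shell #5: radius 277 um, probed at 760 nm, bulk borosilicate index 1.467.
    r, lam, n_bulk = 277e-6, 760e-9, 1.467

    # Azimuthal mode number from the resonance condition 2*pi*n*r = m*lambda.
    m = 2.0 * math.pi * n_bulk * r / lam
    print(f"m ~ {m:.1f}")          # ~3359.5; the paper quotes m = 3359

    # Effective index from the simulated eigenfrequency f_nml = 400.437508 THz.
    f_nml = 400.437508e12
    n_eff = 3359 * c / (2.0 * math.pi * r * f_nml)
    print(f"n_eff ~ {n_eff:.3f}")  # ~1.445, slightly below the bulk index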
The effective refractive index indicates how well the mode is confined within the glass shell; the thinner the shell, the lower its value, as can be seen in Table 2. The small mode volume of microspherical shell #6 (diameter 345 μm, thin sidewall of 1.1 μm) results in fewer than 10 observed resonance modes in the transmission spectrum within the 15 GHz frequency span, as shown in Fig. 4(a). Asymmetry in the transmission spectrum was observed upon scanning the laser frequency up and down, as shown in Fig. 4(b); it arises from the thermally induced linewidth broadening/compression effect in optical micro-resonators 34,35. The inset of Fig. 4(a) shows a resonance mode with a very high Q-factor of 5.19 × 10^7, deduced by fitting a Lorentzian curve to the transmission spectrum. For this resonance mode, the calculated finesse was 2.45 × 10^4. For microspherical shell resonator #5, the resonance spectrum shows equally spaced resonance frequencies in the transmission spectrum, as shown in Fig. 4(b). Since the free spectral range for this microbubble was 117 GHz, these peaks, with a frequency spacing of 0.76 GHz, must arise from azimuthal mode splitting. Azimuthal mode splitting typically arises from the removal of the degeneracy of the polar quantum number l in the solution of the spherical harmonic mode function 36, due to eccentricity of the microbubbles. The analytical expression for azimuthal mode splitting, derived using a perturbation method, is given in eq. (7) of ref. 37, where ε is the eccentricity of the microspherical shell and f_nml is the resonance frequency of the mode with radial mode number n, azimuthal mode number m, and polar mode number l. For a shell with polar radius r_p and equatorial radius r_e, the eccentricity is defined as ε = (r_p − r_e)/r_e. Hence, the splitting between successive azimuthal mode numbers within the free spectral range follows as eq. (8) 38. The resonance frequency of microspherical shell #5 was calculated by COMSOL® simulation to be f_nml = 400.437508 THz under the assumption of an ideal spherical shell of uniform wall thickness and a radius of 277 μm. Using the resonance condition with λ = 760 nm, the azimuthal mode number of microbubble #5 is calculated to be m = 3359. Under the assumption that the observed peaks in Fig. 4(b) are due to splitting of the fundamental TE azimuthal mode (m ≈ l ≈ 3359), the frequency spacing of 0.76 GHz leads to a corresponding eccentricity of ε ≈ 0.65%. The dimensional data of microspherical shell #5 from Table 1 can also be used to calculate the eccentricity, giving a value of 2.5%. This is ~4 times larger than the eccentricity estimated using eq. (8). The large uncertainty of ~5 μm in determining the microspherical shell diameter and height from optical images can easily account for this discrepancy, and the two eccentricities can therefore be considered to agree within the errors of the measurements.
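Equations (7)-(8) are not reproduced in this excerpt. For modes near the equator (m ≈ l), the splitting between adjacent azimuthal modes reduces approximately to Δf ≈ f_nml·ε/l; this simplification is our approximation, not the paper's exact expression, but it reproduces the quoted numbers:

    def eccentricity_from_splitting(delta_f, f_nml, l):
        """Shell eccentricity from the spacing of adjacent azimuthal modes,
        using the near-equatorial (m ~ l) simplification delta_f ~ f_nml * eps / l."""
        return delta_f * l / f_nml

    # Shell #5: 0.76 GHz spacing, f_nml = 400.437508 THz, l ~ m = 3359.
    eps = eccentricity_from_splitting(0.76e9, 400.437508e12, 3359)
    print(f"eccentricity ~ {100.0 * eps:.2f} %")   # ~0.64 %, vs 0.65 % in the text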
Temperature Sensitivity of Optical Resonance in Microspherical Shells. From the WGM resonance condition, it is clear that the resonance frequencies depend on both the size and the refractive index of the resonator. A small change in either can cause a significant resonance frequency shift. Since both the refractive index and the size of the microspherical shells depend upon temperature, through the thermo-optic and thermal expansion effects, a WGM resonator can be configured as a sensitive thermometer. Assuming a linear dependence of the thermal expansion and refractive index for small temperature variations, these can be expressed as dr/r = α·dT and dn_r = β·dT, where α and β are the temperature coefficient of expansion (TCE) and the thermo-optic coefficient of borosilicate glass, respectively. Taking the variation of the resonance condition, the fractional change in frequency can then be expressed as df/f = −(α + β/n_r)·dT (eq. (9)). The frequency shift per unit change in the temperature of the microspherical shell can be estimated from eq. (9) using the borosilicate material properties at λ = 760 nm, i.e., thermo-optic coefficient β = 3.41 × 10^-6 K^-1 39,40, temperature coefficient of expansion α = 3.25 × 10^-6 K^-1 41, and n_r = 1.467 42. This gives a theoretical fractional frequency shift of 5.574 ppm K^-1. The sensitivity of the microspherical shells to temperature changes was measured experimentally by placing the device on the hot side of a calibrated Peltier cooler. A WGM mode of microspherical shell #7, with a Q-factor of ~10^7, was monitored as a function of temperature. As seen in Fig. 5(a), the resonance frequency decreases with increasing temperature. The induced frequency shift as a function of temperature, Fig. 5(b), shows a linear dependence with an outstanding thermal sensitivity of −1.81 GHz K^-1 (equal to a wavelength shift of +3.48 pm K^-1), corresponding to a fractional frequency temperature sensitivity of −4.23 × 10^-6 K^-1. Assuming the frequency resolution of the measurement system to be 100 kHz at a Q-factor of 10^7, the temperature resolution of the microspherical shell can be determined to be 55 µK. For microspherical shell #7, the resonance frequency f_nml was first calculated using COMSOL® modeling at 20 °C. Thereafter, using the temperature coefficient of expansion and the thermo-optic coefficient, the microspherical shell dimensions and refractive index were changed to the corresponding values at the increased temperature, and the new f_nml was modeled. Through this method, the expected frequency change was modeled through the range of experimental temperature values and resulted in a modeled slope of −2.23 GHz K^-1. Clearly, the ideal model overestimates the magnitude of the slope in comparison with the experimental value of −1.81 GHz K^-1. The values measured in this work agree very well with those reported for solid silica microspheres, where a temperature sensitivity of 1.808 GHz/K at a wavelength of 1530.335 nm was reported for a 430 μm bead 43. It must be noted that the ultimate change in the microspherical shell equatorial radius is not only a function of the TCE of the glass bubble but is also affected by the TCE mismatch between the borosilicate glass and the bonded silicon substrate at the base. To account for these issues, we parametrized the effective TCE of the glass and modeled the frequency shift to match the experimental data. As shown in Fig. 5(b), a near-ideal fit was obtained using an effective TCE of borosilicate glass of α|eff = 2.19 × 10^-6 K^-1. Figure 5(c) shows the measured thermal sensitivity of microspherical shells #8 and #9, obtained with a much finer temperature scan. The experimentally obtained linear slope for these shells, 1.78 GHz K^-1, is very similar to that obtained for microspherical shell #7. It must also be noted that microspherical shells #8 and #9 are located on the same chip and, although of significantly different dimensions, show a near-identical thermal dependence of the resonance frequency shift.
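The quoted sensitivity figures can be reproduced from eq. (9) with a few lines of Python (our cross-check, using only the constants given above):

    c = 299_792_458.0                       # speed of light in vacuum, m/s

    # Borosilicate properties at 760 nm, as quoted in the text.
    alpha = 3.25e-6                         # thermal expansion coefficient, 1/K
    beta  = 3.41e-6                         # thermo-optic coefficient, 1/K
    n_r   = 1.467                           # refractive index

    # Fractional frequency shift per kelvin: |df/f| = alpha + beta/n_r.
    frac = alpha + beta / n_r
    print(f"{1e6 * frac:.3f} ppm/K")        # 5.574 ppm/K, as in the text

    f0 = c / 760e-9                         # optical frequency at 760 nm, ~394.5 THz
    print(f"{1e-9 * frac * f0:.2f} GHz/K")  # ~2.2 GHz/K; at f_nml ~ 400 THz this
                                            # reproduces the -2.23 GHz/K modeled slope

    # Temperature resolution for a 100 kHz frequency resolution at the
    # measured sensitivity of 1.81 GHz/K:
    print(f"{1e5 / 1.81e9 * 1e6:.0f} uK")   # ~55 uK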
The near-identical thermal response of shells #8 and #9 can be considered further evidence that the effective TCE of the microspherical shells depends sensitively on the stresses induced between the glass and silicon substrates during bonding, as well as on the temperature at which the glass shells are blown.
Dependence of Optical Resonance in Microspherical Shells on Refractive Index Changes. A major advantage of WGM resonators consisting of hollow shell structures is that fluidic analyte samples can be introduced and made to interact with the optical resonance mode through the inner surface of these structures 8. As we have already demonstrated, the thickness and the diameters of the microspherical shells can be precisely controlled and reproduced through microfabrication processes. The sensitivity to the fluid contained in the inner volume of the optofluidic microspherical resonator was experimentally examined as a function of the shell wall thickness. For these experiments, on-chip glass microspherical shells were coated and protected with crystal bond, and water (refractive index 1.33 at 20 °C and 760 nm) was filled into the shells in a vacuum chamber. The water filled into the microspherical shell was held inside the cavity at atmospheric pressure by surface tension at the small opening. However, the water gradually evaporated and eventually dried out of the microspherical cavity. When the water in the microbubble evaporates away, the resonance frequency blue-shifts due to the decrease of the effective refractive index probed by the optical resonance at the inner surface. The water-filled microspherical shells #10, with a wall thickness of 4.7 µm, and #11, with a wall thickness of 6.4 µm, were coupled to the fiber taper, and the resonance modes were monitored and tracked in real time as the water dried out of the shells. The transmission spectrum of microspherical shell #10, in Fig. 6(a), shows a blue-shift due to the decrease in the effective refractive index inside the microbubble cavity as the water dries out and is replaced by air. The inset in Fig. 6(a) shows a zoomed-in view of a shifted resonance mode with and without water in the microspherical shell.
Figure 6: (a) A blue-shift of the resonant modes is observed as the water-filled microspherical shell core dries out; the inset shows a 0.51 GHz frequency shift observed in a 2.5 × 10^6 Q-factor mode. (b) COMSOL-simulated frequency shifts between water-core and air-core microspherical shells with diameters of 600 µm as a function of shell thickness, ranging from 300 nm to 10 µm; experimental data for two microspherical shells of thicknesses 4.7 μm and 6.4 μm are also shown. (c,d) FEM-solved fundamental TE mode showing the spatial distribution of the electric field intensity in a 0.6 µm thick shell with water and air core, respectively. (e,f) Electric field intensity plotted on a logarithmic scale for water and air cores in the 0.6 µm thick shell, clearly exhibiting penetration of the electric field into the water core in (c). (g,h) FEM-solved fundamental TE mode in an 8 µm thick microspherical shell with water and air core, respectively. (i,j) Electric field intensity plotted on a logarithmic scale for the two cores of the 8 µm thick shell. The simulations clearly show that the TE-mode electric field interacts much more strongly with the fluid in the core of thinner-walled microspherical shells than of thicker-walled ones, explaining the larger frequency shift obtained for thinner-walled shells.
A frequency shift of 0.51 GHz and an increase in the Q-factor from 2.51 × 10^6 to 2.69 × 10^6 were observed as the core changed from water to air. A resonance frequency shift was barely observable in the transmission spectrum of microspherical shell #11. The shift in the resonance frequency of the first radial order fundamental TE mode between a water core and an air core of a 600 µm diameter microspherical shell resonator was simulated as a function of the shell wall thickness using COMSOL® and is shown in Fig. 6(b). The experimentally measured frequency shifts of microspherical shells #10 and #11 are in good agreement with the COMSOL® simulations. The very small frequency shift observed for microspherical shell #11 is due to its much larger shell wall thickness of 6.4 µm. The electric field confinement in a 0.6 µm thick shell at the fundamental eigenfrequency is shown in Fig. 6(c) for a water-filled and in Fig. 6(d) for an air-filled shell core. Figure 6(e) and (f) plot the electric field intensity on a log scale for the 0.6 µm thick shell and show that the field clearly penetrates into the water core within the microspherical shell. On the other hand, Fig. 6(g) and (h) show that the electric field is confined almost entirely within the 8 µm thick glass shell, with minimal interaction with the fluid contained in the cavity of the microspherical shell. Figure 6(i) and (j) show the electric field intensity for the 8 µm shell on a log scale. Thus, thicker-walled shells are expected to show little sensitivity to changes in, or interactions with, a fluidic core.
Conclusions
In this work, we have demonstrated optical resonance in microspherical glass shells fabricated on a silicon substrate using microfabrication methods. The chip-scale glass blowing technique allows for the fabrication of microspherical shells with radii ranging from 0.2 mm to 1 mm and thicknesses ranging from 0.3 μm to 10 μm. The paper explains the effect of the various processing parameters on the final dimensions of the glass microspherical shells. The fabrication method described is highly reproducible for manufacturing the resonators with very high dimensional control and tolerance. The microspherical glass shells have been shown to be excellent optical resonators with Q-factors in excess of 50 million. These optical resonators can be used as ultrahigh-sensitivity temperature sensors with possible resolutions of 55 μK. The small thermal mass and integration with on-chip microfluidic channels can allow these to be configured into sensitive lab-on-a-chip devices. The modulation of the effective refractive index in thin-walled (≤1.1 μm) microspherical shells, through the introduction of fluids with varying refractive index into the inner core or by selective adsorption of various molecules on the inner walls, is likely to provide a very sensitive and reproducible platform for bioanalyte detection and lab-on-a-chip applications. The microspherical shells are also sensitive to forces on the bubble structure, and based on this principle sensitive magnetometers have recently been investigated 44. In summary, the presented chip-scale, borosilicate glass, microspherical shell resonators provide a very sensitive platform for various applications including temperature, vapor pressure, motion, bioanalyte, and optomechanical sensing.
Editorial: Optical Approaches to Capture Plant Dynamics in Time, Space, and Across Scales
The quest to decipher phenotype-to-genotype relationships involves quantifying plant adaptation to the environment at various scales to solve some of the world's most pressing problems. The role of phenotype-to-genotype relationships within initiatives to increase crop yields for food, fiber, and fuel and to improve prediction of future environmental conditions (IPCC, 2014) is central to the function and well-being of societies worldwide. Quantifying plant morphology over time captures the dynamic structure and function relationships of how plants interact with and respond to environmental stimuli (Balduzzi et al., 2017). A deeper understanding of plant adaptation can be achieved if technologies to monitor plants across spatial and biological scales are further developed. Spanning biological scales from the community over the organismal down to the molecular scale is inherently coupled to spatial scales. A wide variety of technological concepts are facilitating the revival of the science of plant morphology and anatomy (Ledford, 2018). Over the last decades, optical imaging and remote sensing have developed into the fundamental working tools to monitor and quantify our environment, and plants in particular. Satellite imaging has increased its spatial, temporal, and spectral resolution to levels where individual trees in forests can partly be identified. However, it remains a challenge to develop pipelines that quantify the traits of plants and their processes from the level of plant populations down to the detail obtained with microscopy. The reason for this pipeline challenge is the heterogeneity of the obtained data, ranging from unorganized point clouds over surface models to rasterized imaging and hyperspectral data. For example, airborne methods have increased in importance in ecology by measuring aggregate tree traits such as crown width or stem diameter to study community composition (Wieser et al., 2017). However, combining airborne and high-resolution terrestrial data is non-trivial despite improved automation and higher resolutions of measured plant traits. Unmanned aerial vehicles (UAVs) equipped with cameras and multispectral sensors allow the fusion of temporal, spatial, and spectral data to record plant dynamics at the ecosystem level at resolutions that also allow quantification of individual plant morphology. In particular, advances in algorithms to handle heterogeneous data from various sensors have revealed details of plant adaptation, helping to discover the genes controlling adaptation mechanisms. The wealth of options presents a new challenge in testing and selecting an appropriate approach that scales well with the research question.
Making modern technology applicable to the hypothesis-driven science process requires more than learning a few techniques. In the light of a particular hypothesis, it also requires resources to acquire, build, and refactor equipment and software. Here, a wider view across research disciplines and how they utilize their state-of-the-art techniques can be most beneficial in demonstrating potential use cases. Ideally, this will lead to joint use of best practices in technology innovation and discovery in the plant sciences. Temporal dynamics were quantified by Zlinszky et al., who measured variation in nocturnal branch movements between different species with short-interval LiDAR. Similarly, Herrero-Huerta et al. studied the movement of leaves in Calathea roseopicta under different lighting conditions with a LiDAR system. The outcome of these investigations challenges our view of plants as passive, static organisms and shows that they are capable of short-term changes in shape at the whole-plant level. While not completely non-invasive, Doan et al. published a first low-cost imaging system to detect the formation of free amino acid groups on maize roots to elucidate exudate responses. In doing so, a special ninhydrin-injected paper was monitored over a 3-week growth period. Jiang et al. reported an application using low-cost consumer RGB-D cameras to monitor and quantify the growth of cotton canopies over a 3-month period. In addition, two manuscripts harnessed spectral information to study dynamic responses in plant leaves outside the visible region. Cen et al. detected photosynthetic fingerprints of citrus Huanglongbing (so-called "greening") disease with fluorescence spectroscopy, combining imaging of specific fluorescence properties with advanced image analysis. Junttila et al. estimated leaf equivalent water thickness using combinations of different LiDAR wavelengths. Their results showed significant correlation with standard laboratory reference measurements. These findings demonstrate the significant potential of expanding multi-wavelength laser scanning measurements to whole forests to accurately monitor ecophysiological parameters. Our Research Topic proves that optical technologies have reached a level of precision, reliability, and detail sufficient to study minute physiological processes in plants. In addition to documenting plant status at various levels, imaging and scanning methods allow following processes of adaptation, movement, or disease in real time as they happen. In addition to serving as early detection systems (Cen et al.; Junttila et al.) for management purposes, these methods allow new insights into plant function. One particular example of this is the observed short-term periodicity in plant movement and trunk diameter, which suggests a new approach to water transport in trees (Zlinszky and Barfod, 2018). The Research Topic covered a plethora of methodological approaches as suggestions for best practices in the light of a particular research question. As such, it collected papers that demonstrate how technology development and scientific discovery in the plant sciences can accelerate each other toward deciphering the adaptation of plants as part of the quest to reveal the phenotype-to-genotype map. The future challenge will be to extend the multi-scale measurement of static traits to the multi-scale measurement of chemical processes within and between plants.
For example, how do diseases or ripening processes affect community structure, and how does the microclimate fluctuation of single plants affect overall edible yield? How do shape changes in canopies relate to hydraulic properties at the tissue level? We believe that the optical methods in this Research Topic will become openly available, such that researchers across the plant sciences will be capable of performing research at this exciting frontier.
The role of endospore appendages in spore-spore contacts in pathogenic bacilli
Species within the spore-forming Bacillus cereus sensu lato group are recognized for their role in food spoilage and food poisoning. B. cereus spores are decorated with numerous pilus-like appendages, called S-ENAs and L-ENAs. These appendages are believed to play crucial roles in self-aggregation, adhesion, and biofilm formation. By using both bulk and single-cell approaches, we investigate the roles of S- and L-ENAs, as well as the impact of different environmental factors, in spore-to-spore contacts and in the interaction between spores and vegetative cells. Our findings reveal that ENAs, and particularly their tip fibrilla, play an essential role in spore self-aggregation but not in the adhesion of spores to vegetative cells. The absence of L-BclA, which builds the L-ENA tip fibrillum, reduced both S- and L-ENA mediated spore aggregation, emphasizing the interconnected roles of S- and L-ENAs. Increased salt concentrations in the liquid environment significantly reduced spore aggregation, implying a charge dependency of spore-spore interactions. By elucidating these complex interactions, our study provides valuable insights into spore dynamics. This knowledge can guide future studies of spore behavior in environmental settings and aid in developing strategies to manage bacterial aggregation for beneficial purposes, such as controlling biofilms in food production equipment.

Introduction
Members of the Bacillus cereus sensu lato (s.l.) group (B. cereus spp.) are widespread in various natural environments, including air, water, vegetation, and soil. These facultatively anaerobic, Gram-positive bacteria are frequent contaminants of raw food materials (Rahnama et al., 2023) and food production equipment, leading to both emetic and diarrheal types of food poisoning (Shaheen et al., 2010a; Yang et al., 2023). An additional concern is the release of various enzymes that degrade food components, resulting in decreased food quality and shortened shelf life. A major challenge for food producers is the ability of B. cereus spp. to form highly resilient endospores (spores), which makes the decontamination of food products and production equipment a difficult and costly task. These spores can withstand harsh conditions such as wet heat, irradiation, chemicals, and desiccation, conditions that would normally eliminate the vegetative cells. The endospores' resilience is attributed to several factors, including a dehydrated core enriched with calcium dipicolinic acid (CaDPA) and protective small acid-soluble proteins, alongside several semi-permeable layers, each providing unique protection (Setlow, 2006; Sunde et al., 2009; Swick et al., 2016).

It has been shown that B. cereus spores are more adhesive than their vegetative counterparts and bind more readily to materials found in food production facilities, such as stainless steel, plastic, and glass surfaces (Peng et al., 2001; Rönner et al., 1990). This high adhesiveness complicates their removal (Faille et al., 2016) and provides increased resistance to environmental challenges (Simmonds et al., 2003). Furthermore, adhered spores may serve as a foundation for the development of biofilms on these surfaces (Faille et al., 2014).
Biofilms attach to surfaces and establish a dynamic microenvironment that promotes collective survival and adaptation (Kragh et al., 2016). Consequently, the presence of biofilms in food production equipment poses a persistent contamination risk, as biofilms exhibit strong resistance to removal and cause substantial cleaning and sanitation costs in the food industry. To improve the efficiency of spore removal methods, additional insights into the interactions between spores and their surroundings are needed.

Bacteria's ability to interact and autoaggregate (adhere to one another) also contributes to their resilience (Burel et al., 2021). Autoaggregation is common among environmental and clinical bacterial strains and is associated with protection against environmental stresses and immune responses (Demirdjian et al., 2019). Bacterial autoaggregation is typically mediated by specific adhesion molecules on the bacterial surface, such as proteins, polysaccharides, and longer surface structures (Dogsa et al., 2023). Several studies have highlighted the role of structures like pili, including chaperone-usher pili (Faris et al., 1983), type IV pili (Bieber et al., 1998), and curli (Boyer et al., 2007), in facilitating this self-adhesion in bacteria such as Yersinia enterocolitica and Escherichia coli. Whereas the molecular determinants of autoaggregation of vegetative cells are well described, little is known about the aggregation process seen in spores.

Recent studies have revealed that most B. cereus spp. carry genes for pilus-like structures known as spore-specific endospore appendages (ENAs) (Pradhan et al., 2021). The architecture of two types of ENA, denoted S-ENA and L-ENA, expressed on the surface of a foodborne outbreak isolate of Bacillus paranthracis, NVH 0075/95, was recently solved (Pradhan et al., 2021; Sleutel et al., 2023). The S-ENAs are described as "staggered", micrometer-long proteinaceous fibers that are tens of nm wide. They end in "ruffles" featuring four to five thin tip fibrils that, in general appearance, resemble the tip fibrils found on adhesive pili such as the type 1 pili and P-pili of uropathogenic E. coli (Pradhan et al., 2021). The S-ENAs, which are the most abundant type of appendage on NVH 0075/95 spores, are believed to be anchored beneath the exosporium layer, directly to the spore coat. In contrast, the "ladder-like" L-ENAs are thinner and shorter than the S-ENAs and are linked to the exosporium layer via the ExsL protein (formerly denoted CotZ) (Sleutel et al., 2023). Each L-ENA fiber is decorated with a single tip fibrillum composed of the collagen-like L-BclA protein (Sleutel et al., 2023).

Notably, deleting the l-bclA gene leads to L-ENAs lacking the tip fibrillum, while it does not affect the presence of tip fibrilla on S-ENAs. This distinction in tip fibrils, together with the structural differences between the two types of ENAs, suggests specific roles in B. cereus spp. spores, as highlighted in recent research (Sleutel et al., 2023).

Until recently, the lack of structural and genetic data on ENAs has limited our understanding of their function in B. cereus spp. As reviewed in Zegeye et al., these fibers may play a role in adherence to surfaces and biofilm formation (Zegeye et al., 2021). In this context, it is important to note that a mono-species biofilm of B.
cereus spp. can comprise as much as 90% spores (Wijman et al., 2007), suggesting that the ENAs may constitute a substantial part of the biofilm matrix. Further research into the S-ENAs revealed that they are rigid yet flexible fibers that maintain their integrity under extension, which supports their proposed role in facilitating spore-to-spore binding and potentially enhancing autoaggregation (Jonsmoen et al., 2023). The limited understanding of autoaggregation in B. cereus spp. spores prompts the need for a focused investigation, driven by the potential implications of this phenomenon for biofilm formation, environmental persistence, and, ultimately, food safety.

In this work, we examine the roles of S-ENA and L-ENA, as well as the tip fibrillum of the L-ENAs, in the self-aggregation behavior of B. cereus spore populations, using both wild-type (WT) and ENA-deficient mutant spores (Fig 1A). We investigate their roles in spore-spore and spore-vegetative cell interactions, focusing on how these interactions are influenced by factors such as ionic strength and surfactants in the surrounding liquid. Our findings provide insights into the mechanisms that potentially enhance spore resistance to environmental stress and aid in biofilm formation. These insights could contribute to the development of more efficient strategies for removing B. cereus spores from food production environments and equipment. Accordingly, this has the potential not only to save costs and reduce food waste but also to promote more sustainable dairy production practices.

Strains and spore preparations
The food outbreak isolate B. paranthracis NVH 0075/95 (Lund & Granum, 1996) and the isogenic mutants used in this paper are listed in Table 1, and their phenotypes are illustrated in Fig 1. For the preparation of spores, the bacteria were streaked onto LB agar plates and incubated for no less than two weeks at 37 °C. When approximately 98% of the bacteria had sporulated, as determined by phase-contrast microscopy (Olympus BX51, Olympus Corporation, Japan), the spores were harvested as described in Jonsmoen et al. (2023). Pure spore suspensions were stored at 4 °C until the start of the experiment.

Sedimentation assays
To study the sedimentation behavior, three batches of spores suspended in sterile distilled water were pooled, and the OD600 of the spore suspensions was adjusted to 10 before thorough mixing by vortexing for about 15 s. The suspensions were placed in three borosilicate sample tubes (14 mm × 130 mm, DWK Life Sciences) and left to settle. A SONY ILCE-5100 camera with a SELP1650 lens was used to capture 24.3-megapixel images of the tubes at 0, 3, 5, and 10 hours. Subsequently, images were taken every 24 hours until the samples had fully settled.

To verify that the spores had not germinated during the experiment, all samples were homogenized after the experiment had ended, and the spore status was inspected using phase-contrast microscopy (Olympus BX51, Olympus Corporation, Japan).
Optical tweezer setup
For the analysis of binding between single spores, or between a spore and a single vegetative cell, we used an optical tweezers (OT) setup built around an inverted microscope (Olympus IX71, Olympus Corporation, Japan) equipped with a water immersion objective (UPlanSApo60XWIR, 60×/1.2 N.A., Olympus) (Stangner et al., 2018). The samples were recorded in bright-field mode using a 1920 × 1440 pixel complementary metal oxide semiconductor (CMOS) camera (C11440-10C, Hamamatsu Photonics, Japan). The OT system has been optimized to reduce fluctuations using Allan variance analysis, as described in (Andersson et al., 2011). Samples were placed on the sample holder, which is temperature-controlled and maintained at 25 °C. A 1064 nm DPSS laser (Rumba, 05-01 Series, Cobolt AB, Sweden) was used for trapping spores and bacterial cells. We made sure that the irradiation dose was below the threshold for disrupting the spore bodies, about 50 J, when trapping the spores (Malyshev et al., 2022).

OT catch-and-release test
Three types of catch-and-release tests were conducted to observe the role of ENAs in adherence. The first analysis was done on isogenic spores, where 1 µl of concentrated spore suspension was added to the middle of a sample slide (24 × 60 mm, no. 1, Paul Marienfeld, Lauda-Königshofen, Germany) with double-sided sticky tape (product no. 34-8509-3289-7, 3M) on both sides. Then, 10 µl of the suspension medium, either Milli-Q water, PBS (137 mM NaCl, 2.7 mM KCl, 10 mM Na2HPO4, 1.8 mM KH2PO4) at different dilutions (1:1, 1:2, 1:10), 0.05% Tween-20 (BP337-100, Fisher BioReagents™), or 1% BSA (A6003, Sigma-Aldrich), was added on top of the concentrated spore suspension. The sample was then enclosed with a coverslip (20 × 20 mm, no. 1, Paul Marienfeld), and the edges were sealed. Two spores were captured in the optical tweezers trap, brought to the spore-free zone closer to the edges of the sample, and held together for approximately 5 seconds in the optical trap at a holding power of 1 W from the 1064 nm laser. After the spores were released from the trap, we followed them for at least 1 minute to see whether they stayed attached to each other or drifted apart by Brownian motion. The interaction process was recorded, and the outcome was noted. A total of 30 interactions were initiated for each experiment, except for the WT strain (S+L+), for which 60 interactions were initiated.

The second catch-and-release test was done on vegetative cells or on spores of different genotypes. A channel was made by adding double-sided sticky tape to both sides of the sample, closed off by a coverslip. A volume of 2 µl of each spore suspension to be tested was applied at either end of the channel so that a spore-free zone was created in the middle. Bald spores were deposited on one side of the channel, while another spore type was deposited on the opposite side (illustrated in Fig. 6). The channel was then filled with Milli-Q water. One spore from each side of the channel was then brought to the spore-free middle zone using the OT. As in the prior experiment, the spores were held together and released, and the resulting outcome was noted.
For catch-and-release analysis of the interaction between a single spore and a vegetative cell, strain NVH 0075/95 was grown overnight in LB at 30 °C under shaking at 200 rpm. The following day, the overnight culture was diluted 1:100 in fresh medium and left to grow for 3 hours. The bacteria were pelleted by centrifugation at 2000 rpm for 5 minutes. The supernatant was discarded, and the pellet was resuspended in Milli-Q water. The catch-and-release test was performed following the same procedure used for testing binding between individual spores, except that in this case spores were added on one side and vegetative bacteria on the other.

Electron microscopy sample preparation
To obtain images of vegetative cells, 100 µL of an LB overnight culture of strain NVH 0075/95 was transferred to fresh medium and left to grow for 4 hours at 37 °C under shaking at 200 rpm. The cells were then centrifuged for 10 minutes at low speed (2500 rpm), and the pellet was washed once in PBS before fixation (2% paraformaldehyde). A copper grid (400 mesh) covered with FCF400-CU Formvar carbon film was placed on top of a droplet of the fixed cell suspension for 1 minute. The excess suspension was removed by capillary force using dry filter paper. The grid was transferred to a droplet of 4% uranyl acetate stain for another minute before the dried grid was ready for analysis using a JEM-2100Plus electron microscope (JEOL Ltd., Japan).

S-ruffle measurements
For the S-ENA ruffle measurements, we first collected raw negative-stain micrographs. For this, a suspension of spores was applied onto formvar/carbon-coated copper grids (Electron Microscopy Sciences) with a 400-hole mesh. The grids were glow-discharged (ELMO; Agar Scientific) with a 4 mA plasma current for 45 seconds. 3 μl of a bacterial spore suspension was applied onto the glow-discharged grids and left to adsorb for 1 minute. The solution was dry-blotted, followed by three washes with 15 μl Milli-Q. Next, 15 μl drops of 2% uranyl acetate were applied three times for 10 seconds, 2 seconds, and 1 minute, respectively, with a blotting step in between each application. The excess uranyl acetate was then dry-blotted with Whatman type 1 paper. All grids were screened with a 120 kV JEOL 1400 microscope equipped with a LaB6 filament and a TVIPS F416 CCD camera. Next, micrographs were loaded into ImageJ 1.54f and scaled using a pixel resolution of 1.94 Å/pix. Ruffle lengths were measured using the segment tool by drawing a segmented line starting from the tip of the S-ENA pilus to the terminus of the tip fibrillum, following the curvature of the ruffle.

Data analysis and statistics
Images from the sedimentation assays were analyzed using ImageJ 1.54f (Rueden et al., 2017). The size of the pellet after 6 days was determined by measuring it against an internal standard on the images. Additionally, the pixel intensity of the spore suspensions was assessed by generating a profile plot, normalizing it to the highest value, and then balancing it into 200 data points using a randomized algorithm in RStudio (Version 2023.12.1). The intensity profiles were plotted against time, and data points crossing 50% light intensity were extracted and plotted as a function of time. ImageJ was also used to estimate the center-to-center distances (µm) between adhered spores, or between spores adhered to vegetative cells, using the line tool. Each interaction was measured three times during the recording, and the average was noted.
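Both the ruffle-length and the distance measurements above amount to converting traced pixel coordinates into physical lengths. As an illustration (our sketch, not the authors' script), a segmented-line trace can be converted using the 1.94 Å/pixel scale quoted for the micrographs; the coordinates below are hypothetical:

    import math

    PIXEL_SIZE_NM = 0.194      # 1.94 A per pixel, the scale quoted for the micrographs

    def polyline_length_nm(points):
        """Length (nm) of a segmented line traced along a ruffle, given a list of
        (x, y) pixel coordinates from the S-ENA pilus tip to the fibrillum terminus."""
        return PIXEL_SIZE_NM * sum(math.dist(p, q) for p, q in zip(points, points[1:]))

    # Hypothetical traced points (pixels) along one ruffle:
    trace = [(102, 340), (150, 371), (198, 410), (251, 438)]
    print(f"{polyline_length_nm(trace):.1f} nm")   # ~35 nm for this example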
Using linear regression analysis on the linear phase of the settling process, the spores' sedimentation rate was estimated. Further, Tukey's test in RStudio was applied to determine significant differences between the WT and mutant strains. Graphs were plotted in Origin 2024 (OriginLabs, Version 10.1.0.178). We used the non-linear curve-fitting tool in Origin to fit the sedimentation data to Boltzmann-type curves to aid visual interpretation.

S- and L-ENAs influence spore sedimentation
The interaction among bacterial cells or spores within a population is often assessed by sedimentation assays.

Appendages facilitate close interactions among spores
To further test the role of ENAs in spore-spore interactions, we performed a catch-and-release analysis of spore-to-spore binding using an optical tweezers (OT) system. This allowed us to capture two spores and keep them in close proximity for a set duration (Fig 3a-c). Upon release of the trapping force, the spores might either remain adhered or separate due to Brownian motion (Fig 3d-f). This method offers a unique opportunity to study the interactions between different types of spores and between spores and vegetative cells. For each strain, a spore was trapped and paired with another spore of the same genotype 30 times, and the frequency of attachment, defined as instances where the two spores remained together instead of drifting apart upon release of the optical trap, was recorded (Fig 4A).

WT spores remained attached 26.7% (16/60) of the time, a frequency surpassed only by S-L+ spores, which showed a 51.6% (32/62) adherence frequency. The lowest attachment frequency was observed for the S-L+ ΔL-bclA spores, which displayed even less attachment than the bald spores. Consistent with the sedimentation assay results, spores lacking L-BclA (S-L+ ΔL-bclA) demonstrated significantly reduced binding compared to those with intact L-BclA. This trend toward further reduced binding in ΔL-bclA spores, especially when also lacking S-ENAs, is evident from the inability of S+L+ ΔL-bclA spores to remain attached after the trapping force was removed. Altogether, the results from the catch-and-release assay correspond well with the observations from the sedimentation assay: the strains that settled faster to the bottom of the glass tubes were also the ones most likely to stick together when released from the optical trap (S3; error bars are not included, as this is an observational study).

Trapping two spores of the S++L+ strain together was notably difficult, as they consistently repelled each other, preventing the successful confinement of both within the trap. Therefore, the data represent 30 attempts to trap spores together rather than 30 actual interactions. Out of these attempts, trapping was successful in only six cases, and among these, binding was observed in only three instances. Interestingly, the binding that occurred was more distant than the close contact seen with WT or S+L- spores; typically, when one spore was trapped, the other was either pushed away or drawn closer, as seen in Figs 5A and B. The average center-to-center distance between the S++L+ spores was significantly larger at 5.8 ± 0.8 µm, nearly three times the distance of aggregated WT spores (1.9 ± 0.3 µm). In the other 24 trapping attempts, only one spore remained in the trap at any given time.
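The attachment frequencies above are raw binomial proportions reported without error bars. As a hedged illustration of how uncertainty could be attached to such counts (our addition, not part of the authors' analysis), Wilson score intervals can be computed from the numbers quoted in the text:

    import math

    def wilson_ci(k, n, z=1.96):
        """95% Wilson score interval for a binomial proportion k/n."""
        p = k / n
        denom = 1.0 + z**2 / n
        center = (p + z**2 / (2 * n)) / denom
        half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
        return center - half, center + half

    # Attachment counts quoted in the text: WT 16/60, S-L+ 32/62.
    for label, k, n in [("WT (S+L+)", 16, 60), ("S-L+", 32, 62)]:
        lo, hi = wilson_ci(k, n)
        print(f"{label}: {k}/{n} = {k/n:.1%}  (95% CI {lo:.1%}-{hi:.1%})")

With only 20-60 trials per condition, the intervals are wide, which is consistent with the authors' choice to treat these counts as observational rather than to attach error bars.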
Next, to test whether binding requires the presence of ENAs on both interacting spores, we again performed catch-and-release experiments. In this test, we assessed the binding capabilities of WT S+L+, S-L+, and S-L+ ΔL-bclA spores when paired with S-L- (bald) spores (Fig 4B). We observed that having S+L+ ENAs on only one of the spores resulted in a slightly reduced binding frequency. However, we noted that the S-L+ spores exhibited a marginally higher binding frequency than WT spores (5/20 and 2/20, respectively). In contrast to the lack of binding seen in the experiment testing interactions between genetically identical spores, successful binding was observed between S-L+ ΔL-bclA and bald spores in three out of 20 captures.

Spore-to-vegetative cell aggregation is not dependent on ENAs
Since B. cereus spp. communities comprise both spores and vegetative cells, we next tested whether spores can attach to their vegetative counterparts and what role ENAs play in such interactions. Using the catch-and-release assay, we detected frequent binding between vegetative cells of B. paranthracis NVH 0075/95 and WT spores (23.3%), spores expressing longer S-ENAs (S++L+, 46.7%), and bald spores (43.3%) (Fig 5E). Unlike spore-spore binding, in most cases the interaction between spores and vegetative cells occurred in much closer proximity, and the spores were observed to adhere mainly to the poles of the vegetative cells (Fig 5A). We detected two instances of more distant connections between a spore and a vegetative cell for the S++L+ mutant. In these cases, when the spore was manipulated with the OT, the vegetative cell followed along (as illustrated in Fig 5B). The interaction distance (center-to-center) between the vegetative cell and the spore was 5.6 ± 2.5 µm and 6.4 ± 0.8 µm for the two instances where this occurred. Finally, we also tested interactions between vegetative cells, but no aggregation was observed.
Figure 5 (caption fragment): ... between spores and between spores and vegetative cells (F). The asterisk symbolizes a significant difference (p < 0.05). Average distances are given in Table S4.

PBS disrupts the binding of isogenic spores
To investigate how the aggregation behavior is affected by environmental conditions, we repeated the sedimentation experiments with WT (S+L+) and bald (S-L-) spores suspended in different dilutions of PBS (Fig 6). We observed that PBS in the suspension drastically reduced the sedimentation rate of the WT strain, by more than an order of magnitude, from 2.59 ± 0.01 µm/s to 0.10 ± 0.01 µm/s. Similarly, a significant reduction was observed in the sedimentation rate of the bald spores, with a reduction of 35%, from 0.23 ± 0.01 µm/s to 0.15 ± 0.01 µm/s. We observed no difference in the size of the pellets between WT and bald spores after six days of settling with PBS in the suspension (Figure S5). However, the pellet of the WT strain suspended in water was significantly larger than when PBS was added to the suspension. This difference in pellet size between water and PBS was not observed in the sedimentation of bald spores (Figure S6).
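The sedimentation rates above come from fitting the linear phase of the settling front, as described in the data-analysis section. A minimal sketch of that fit (our illustration; the front positions below are hypothetical, not the measured data):

    import numpy as np

    # Hypothetical settling-front positions: time (hours) vs height (mm) at which
    # the normalized intensity profile crosses 50%, mimicking the described analysis.
    t_h = np.array([0.0, 3.0, 5.0, 10.0])              # hours (the imaging times used)
    front_mm = np.array([120.0, 93.0, 75.0, 29.0])     # mm above the tube bottom

    # Fit the linear phase of front position vs time and convert the slope to um/s.
    slope_mm_per_h, intercept = np.polyfit(t_h, front_mm, 1)
    rate_um_per_s = abs(slope_mm_per_h) * 1e3 / 3600.0
    print(f"sedimentation rate ~ {rate_um_per_s:.2f} um/s")   # ~2.5 um/s here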
To test whether we could detect similar interference in aggregation at the single-spore level using the catch-and-release assay, we examined the behavior of WT and S-L+ spores, which were most prone to aggregation in water. As shown in Fig 7, almost no interaction was observed when the spores were suspended in 1x PBS. Already at a ten-fold dilution of PBS (total ionic strength of 15.05 mM), a strong reduction in spore-spore interactions was observed. However, the addition of 0.05% of the non-ionic surfactant Tween-20 or 1% bovine serum albumin (BSA) did not result in any significant alteration of the interaction frequency between isogenic spores (Figure S7).

Discussion

The present work builds upon the research of Jonsmoen et al. (2023), where single-cell analyses demonstrated the role of S-ENAs in maintaining spores in a gel-like state by mediating distant spore-to-spore interactions. Here, by utilizing both bulk and single-cell approaches, we investigated the role of L- and S-ENAs, as well as the tip fibrillum of the L-ENAs, in the aggregative behavior of B. paranthracis spores, and how different factors in the surrounding liquid influence contacts between spores. We also tested whether spores adhere to vegetative cells and the role of ENAs in such interactions.

Interestingly, the S+L+ ΔL-bclA mutant also demonstrated a significantly reduced sedimentation rate compared to the S+L- spores, in which the aggregation behavior is supported only by S-ENA. This suggests that the absence of L-BclA also influences spore aggregation mediated by S-ENA.

Notably, the tip fibrilla of S-ENA closely resemble those of L-ENA, both having a diameter of 2 nm and terminating in a globular domain (Sleutel et al., 2023). Consistent with the finding that depletion of L-BclA also affects S-ENA-mediated interactions between spores, TEM images comparing the S-ENA tip fibrilla of WT spores with those of S+L+ ΔL-bclA spores revealed the absence, on S+L+ ΔL-bclA spores, of fibers corresponding to the length of the L-BclA on L-ENA. This suggests that L-BclA may also be present on the surface of S-ENA. The loss of S-ENA-mediated spore-spore adhesion resulting from the depletion of L-BclA was confirmed with the catch-and-release assay.

Despite the depletion of L-BclA, the consistent number of tip fibrilla still present on S-ENAs of the S+L+ ΔL-bclA strain suggests the presence of BclA homologs, which likely fill the void left by its absence (Sleutel et al., 2023). Having identified L-BclA as a functional component of the S-ENA fibers, it is intriguing that the genetic identity of the other tip fibrilla on S-ENAs remains unknown. In the NVH 0075/95 genome, there are 12 collagen-like genes which lack the sequence encoding the exosporium localization domain (Files S8-10). It is plausible that one or several of these homologs serve as "alternative" S-ENA tip fibrilla. Furthermore, the ena3 gene cluster is only present in about 10% of the B. cereus s.l. group. Assuming that all these ENA fibers exhibit tip fibrilla, it is likely that these structures are encoded by BclA homologues other than L-BclA (Sleutel et al., 2023). However, whether such homologs also facilitate clustering of spores or serve other functions requires further investigation.

In addition to the reduced sedimentation rate of the L-BclA-depleted strain (S+L+ ΔL-bclA), we also observed that the S++L+ strain sediments significantly more slowly than the WT strain.
They also showed a distinct sedimentation behavior, settling more collectively in contrast to the more diffuse sedimentation pattern observed for spores of the other strains (Fig 1B). The pellet formed by S++L+ spores at the bottom of the tube was also less dense than those of the other spore suspensions. This resistance to dense packing suggests that the S-ENAs present a steric hindrance preventing close encounters between spores. The S++L+ spore pellet was gradually compressed over the days the experiment continued, indicating that the steric hindrance was reduced over time. In Jonsmoen et al. (2023), we demonstrated that spores expressing S-ENA exhibit a gel-like state when suspended in water, i.e., a viscosity that indicates relaxed interconnection between spores. The gel-like behavior was not observed for ENA-depleted bald spores, which moved independently of each other when suspended in water. The concept that S-ENAs act as a physical barrier hindering close encounters between spores is supported by the difficulty of capturing two S++L+ spores simultaneously in the optical tweezers trap.

We also demonstrate that the composition of the surrounding medium influences the binding between spores. During sedimentation, we observed a significant reduction in the sedimentation rate of both WT (S+L+) and bald (S-L-) spores when PBS was added to the surrounding medium. A similar reduction of binding was observed in the catch-and-release assay, where we tested WT (S+L+) spores and spores of the strain only expressing L-ENAs (S-L+), both of which showed the highest tendency to bind in water. When PBS is added to the spore suspension, the density and ionic strength of the suspension medium increase, affecting its resistance to sedimentation and the level of free ions that may influence protein interactions. In biological research, PBS is commonly used to preserve biological function by being isotonic and keeping a physiological pH (7.4). This is, however, not necessarily a relevant condition for all bacteria, as salinity differs between environmental habitats. Therefore, we tested how lower salt concentrations influence spore binding in the catch-and-release assay. Already at a salt concentration of 15 mM in the surrounding liquid medium (0.1x PBS), we noted a decrease in spore aggregation at the single-spore level, while at higher concentrations the spores failed to aggregate. The decreased binding observed at increased ionic strength suggests that the interactions observed are specific and charge-dependent.

Examining the frequencies of spore aggregation in different liquid suspensions provides valuable insights into the mechanisms driving spore-spore adhesion. It is, however, important to acknowledge the role played by the non-specific physicochemical surface properties of bacteria in autoaggregation (Trunk et al., 2018). When the non-ionic surfactant Tween-20 was added to the suspension, we observed no decrease in binding affinity, indicating that non-specific hydrophobic interactions were not the driving force behind ENA-mediated spore-spore interactions. Similarly, the presence of the protein-blocking agent BSA did not inhibit the spores from binding.
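As a hedged aside on the physics underlying these observations: for a small particle settling in creeping flow, Stokes' law, v = 2(rho_p - rho_f) g r^2 / (9 mu), makes explicit how both the fluid density and the effective hydrodynamic radius set the settling velocity. The sketch below uses generic, assumed values (spore density, radius, water viscosity), not measurements from this study.

# Stokes-law sketch of how medium density and effective radius change the
# terminal settling velocity of a small particle. All parameter values are
# illustrative assumptions, not measurements from this study.
def stokes_velocity(radius_m, rho_particle, rho_fluid, viscosity_pa_s, g=9.81):
    """Terminal velocity (m/s) of a sphere in creeping flow."""
    return 2.0 * (rho_particle - rho_fluid) * g * radius_m**2 / (9.0 * viscosity_pa_s)

um = 1e-6
# Single spore (~0.5 um radius) vs. a loose aggregate with a larger
# effective hydrodynamic radius; water at roughly 20 C.
for label, r in [("single spore", 0.5 * um), ("aggregate", 2.0 * um)]:
    v = stokes_velocity(r, rho_particle=1200.0, rho_fluid=998.0,
                        viscosity_pa_s=1.0e-3)
    print(f"{label}: {v / um:.3f} um/s")

Under these assumed values, an aggregate with four times the effective radius settles roughly sixteen times faster than a single spore, consistent with the faster settling of aggregation-prone strains and the slower settling in the denser PBS suspension.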
Investigations into spore interactions under different conditions are important for increasing our understanding of their collective behavior and their interactions with the environment. We speculate that the aggregation properties of the spores may contribute to their persistence in the upper layers of the soil, where nutrient and organic matter concentrations are higher than in deeper layers. In these surface layers, spores are also more accessible to grazing.

The ability of bacteria to form aggregates is associated with pathogenesis. The effect is passive, but beneficial, as it protects the bacteria from microbicidal agents, harsh environmental conditions, and host defenses (Trunk et al., 2018). The prolonged persistence enhances the chance of survival and of successful colonization of the host. In the case of B. cereus infection, where a high infectious dose of 10^5 CFU/mL is required (Granum & Lund, 2006), the aggregation of spores might play a pivotal role. By sticking together, the spores enhance their likelihood of reaching the host gut, where they germinate and initiate infection.

While the germination behavior of spores in aggregates has not been investigated, it is known that the release of CaDPA triggers germination of surrounding spores (Setlow, 2014), suggesting a well-coordinated germination behavior in a spore population. Yet, in a population of spores, a small fraction is considered to be hyper-dormant (Ghosh & Setlow, 2009), a bet-hedging strategy in which some spores resist cues that typically trigger germination, potentially providing an additional evolutionary advantage for aggregating spores.

Conclusion

This study aimed to explore the role of ENAs in spore-to-spore and spore-to-vegetative-cell aggregation. Our findings, obtained through ensemble sedimentation studies and single-cell techniques, reveal the involvement of both S- and L-ENAs in the aggregation of B. paranthracis NVH 0075/95 spores and highlight the role of the L-BclA tip fibrilla as a crucial functional element of both ENA types (Fig 8). While we observe that the ENAs are not important for binding of spores to vegetative cells, we recognize the potential benefit of their tensile stiffness in enhancing the rigidity and functionality of a biofilm matrix. Furthermore, our results demonstrate that the aggregation mechanism is sensitive to the surrounding environment of the spores, being disrupted by higher ionic strength in the suspension while remaining unaffected by surfactants. This suggests that spore interactions are charge-dependent and affected by environmental changes.

Fig 1. Sedimentation dynamics. Schematic overview of the strains used in the study (A). Time-lapses showing spores of three strains sedimenting in a glass tube over a period of four days (B): the S++L+ strain (left), WT spores (middle), and bald spores (right). The red line represents the 50% intensity mark for the corresponding sample. Sedimentation patterns of WT spores and all seven mutants in a water suspension, fitted with Boltzmann curves (C). For overlaid sedimentation patterns see Figure S1. Sedimentation rate is displayed in µm/sec (D). Bars that share a letter are not significantly different, as indicated by Tukey's grouping (p<0.05). Numerical averages are given in Table S2.
Fig 2. Pellet size and S-ENA tip fibrilla distributions. Pellet size after six days (144 h); bars that share a letter are not significantly different, as indicated by Tukey's grouping (p<0.05), and numerical averages are given in Table S2 (A). Length distribution of S-ENA tip fibrilla on S+L+ (WT) and S+L+ ΔL-bclA spores (B), from TEM images. The line represents the mean fibrilla length.

Fig 3. Optical tweezers catch-and-release assay. Spores of different genotypes are placed on opposite sides of a fluid channel. Spores are trapped on each side of a microfluidic channel with the OT and dragged into the center (a and b). They are held together in the optical trap (c). After release from the optical trap, the spores may either adhere to each other through close interactions (d), maintain a more distant connection (e), or separate and drift as a result of Brownian motion (f).

Fig 4. Percentage of adhered spores in water. Spores adhered to isogenic spores using the OT assay, with n = 30 interactions for all strains except S+L+ (WT) and S-L+, where n = 60 (A), and spores adhered to bald spores (B), where n = 20. Numbers of interaction outcomes are given in Table S3. Error bars are not included as this is an observational study.

Fig 5. Spore-cell aggregation. Close aggregation of a WT (S+L+) spore and a vegetative cell (WT) (A) from the catch-and-release assay. Distantly connected S++L+ spore with a vegetative cell (WT) after the catch-and-release assay (B). Here the spore is picked up by the OT and the vegetative cell follows along. The surface of the vegetative cells is also decorated with flagella-like appendages (average width 15.3 ± 2.5 nm) with a morphology distinct from the endospore appendages (C, D). The bar chart shows the percentage of adhesion between spores and between spores and vegetative cells (F). The asterisk symbolizes a significant difference (p<0.05). Average distances are given in Table S4.

Fig 6. Sedimentation patterns of spores in water or PBS. Sedimentation of spores in water or PBS over time for WT (S+L+) and the ENA depletion mutant (S-L-) of B. paranthracis NVH 0075/95 with fitted Boltzmann curves (A), and calculated sedimentation rate in µm/s (B). Bars that share a letter are not significantly different, as indicated by Tukey's grouping (p<0.05). For overlaid sedimentation patterns, see Figure S1; numerical averages for sedimentation rates are given in Table S6.

Fig 7. Spore interactions in water, PBS, Tween-20, and BSA solutions. The percentage of self-adherence observed during 60 and 30 interactions for the WT and S-L+ strains, respectively, using the OT setup. The comparative interaction results obtained in water are the same as those shown in Fig 4A. Numbers of interaction outcomes are given in Table S7.

Fig 8. Graphical conclusion. Spore aggregation is mediated through ENAs, likely through the ENA tip fibrilla, of which L-BclA is particularly important. S-ENAs provide a steric hindrance to close interactions. The ENAs are not involved in adhesion to vegetative cells, and spore-spore aggregation is reduced by salt in the surrounding medium.

Table 1. List of Bacillus paranthracis NVH 0075/95 strains included in this study.
Columns: Name, Genotype, S-ENA, L-ENA, L-BclA, Morphology, Reference (Jonsmoen et al., 2023).

In biofilms, the levels of spores fluctuate (Ryu & Beuchat, 2005) due to shedding or germination, with remnants of spores becoming a part of the biofilm matrix. Spore appendages are highly resilient to proteinases and other degrading enzymes (Pradhan et al., 2021) and may accumulate in the biofilm over time. The tensile, stiff S-ENAs may consequently become an important constituent of the biofilm matrix, contributing to its structure and rigidity. This hypothesis is further supported by the ability of ENAs to aggregate spores, as well as to create a gel-like state of interconnected spores in suspension. The steric resistance of the S-ENA fiber might also prevent overly close encounters between neighboring spores and cells, which may improve the flow of nutrients through the biofilm. Further, the greater hydrodynamic diameter attributed to the presence of appendages on the spores (Jonsmoen et al., 2023) may ease spore detachment from the outer layer of the biofilm and help spread spores into the environment. Even so, we observed no effect of ENAs on the binding of spores to vegetative cells.
SCHEDULE-BASED CLOUD RESOURCE ALLOCATION

The tremendous uptake of cloud computing technology has become a new trend by which users can easily utilize large resources through an IaaS platform. IaaS is a more economical and easier way to obtain physical resources, in this case virtual machines in the cloud, than building the infrastructure oneself. To deliver internet services to users, such as a website, an email service, or other software applications, a service provider can utilize an IaaS platform by leasing virtual infrastructure from a cloud provider and deploying its services on those VMs. However, it becomes a challenge for a service provider to maintain its services given the increasing number of user requests. It has to maintain resource availability to provide maximum performance, meeting user satisfaction with optimal resource utilization. The approach in this paper addresses this problem by providing the service provider with a resource monitoring module. The module monitors VM workload based on a schedule: peak time and off-peak time. According to these two criteria, the service provider can predict and allocate sufficient resources.

Introduction

The advancement of technology in broadband networks, web services, computing systems, and applications has brought a massive change to the cloud computing concept. As a result, cloud computing has become a new technology trend in which users can easily access large computing resources [5]. To obtain the resources needed by users, cloud service providers can either set up their own infrastructure or use an Infrastructure as a Service (IaaS) platform. Of those two methods, using IaaS is more economical and easier than building their own infrastructure. They can use the APIs provided by IaaS providers, such as Amazon Web Services (AWS) or Google Compute Engine (GCE), to create and configure virtual machines, disks, memory, load balancers, etc. By paying for the resources as they are used, they save considerable cost. However, it remains a challenge for service providers to maximize performance and minimize financial cost (Lee et al., 2010). Auto-scaling is one approach used in IaaS by which service providers can maintain resources and reduce waste by automatically increasing or decreasing them when needed. Some cloud services support auto-scaling functionality: they can monitor information about CPU utilization, disk I/O and network I/O on the server side to trigger system reconfiguration [5]. Yet it is still difficult to make predictions on the client side, which in turn decreases performance because of a lack of computing instances. To solve this problem, schedule-based resource allocation proposes a method to monitor the system workload. In this case, service providers can predict peak time and off-peak time and then prepare sufficient resources. The proposed architecture can be adopted by service providers to evaluate their system performance before releasing their services. The rest of the paper is organized as follows. We discuss related work in the next section. The proposed architecture section describes the proposed model, while the evaluation and discussion section evaluates and discusses the solutions. Finally, the conclusion and future work section concludes the proposal and discusses some future work.
Related Work

Minimizing cost and satisfying performance requirements are the two most important issues for cloud service providers. Workload monitoring plays the main role in providing enough resources to satisfy the demanded capacity while reducing wasted resources. Several approaches have been proposed to measure cloud server performance. The approach in [1] uses the measured capacity of VM instances and the arrival request rate to estimate the response time, or cumulative distributions of the response time, for a certain number of VM instances. However, such approaches cannot generally adapt to different service architectures: because the system may be composed of many other services, obtaining the detailed capacity of all resources may be impossible.

There is also an SLA-driven system [2] that only needs to set a request processing time between the load balancer and the application servers, with the load balancer in front of the server nodes checking the server-side response time. In summary, current related work struggles to determine a good indicator for resource scaling. To solve this problem, we propose an approach that monitors the cloud server workload at peak and off-peak times, providing information for a service provider to perform schedule-based scaling, which gives better performance for users at peak time and reduces resource waste at off-peak time. This approach can also be applied to many architectures because it is based only on peak and off-peak times. A comparison among these approaches is shown in Table 1 below.

Proposed Architecture

Resource Monitoring

The resource monitoring module continuously collects workload information about the use of hardware (CPU, memory, disk, and network) and software (file handles and modules). The module then uses this information for future decisions on allocating the demanded resources.

Scenario Used

The scenario used in this architecture is described as follows. There are three main actors involved: the cloud provider as the infrastructure (VM) provider, the service provider (SP) who provides internet services, and the users accessing the SP's services. All actors are connected to each other via an internet connection. The main focus of this paper is the service provider. In Figure 1, the service provider leases virtual resources (VMs) from the cloud provider and deploys its services, such as a website, an email service, or other software applications, on those VMs. These services are accessed by users through an internet connection.

Figure 2. Flow chart.

To maintain its services to users, it is important for the service provider to monitor its resources. The monitoring module observes the workload of the VMs hosted on the cloud provider's infrastructure. The service provider can therefore use this resource monitoring module to monitor the current workload, and then use the schedule-based approach to allocate the resources needed to serve users.

Evaluation and Discussion

To evaluate this approach, we built the components and network based on the architecture in Figure 1. We divided the setup into two main parts: the cloud provider side, which hosts our virtual machines, and the service provider side, which runs the monitoring module.
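A minimal sketch of the schedule-based decision just described: classify the current time as peak or off-peak and return a VM count accordingly. The peak window matches the definition used in the evaluation below (8:00 AM-5:00 PM); the function names and the total VM count are our own illustrative assumptions.

# Sketch of the schedule-based allocation decision: classify the current
# time as peak (08:00-17:00) or off-peak, and return the number of VMs to
# keep active. The counts are illustrative: all VMs at peak time, a single
# VM off-peak.
from datetime import datetime, time

PEAK_START, PEAK_END = time(8, 0), time(17, 0)

def is_peak(now: datetime) -> bool:
    return PEAK_START <= now.time() < PEAK_END

def vms_to_allocate(now: datetime, total_vms: int = 4) -> int:
    return total_vms if is_peak(now) else 1

now = datetime.now()
print(f"{now:%H:%M} -> allocate {vms_to_allocate(now)} VM(s)")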
- Cloud provider side: we use Google Compute Engine (GCE) as the IaaS platform to host our VMs. GCE is the Infrastructure as a Service (IaaS) component of the Google Cloud Platform, built on the global infrastructure that runs Google's search engine, Gmail, YouTube and other services. It enables users to launch virtual machines (VMs) on demand. To evaluate our approach, we installed a web server on the VMs. This web server handles user requests (HTTP connections), and we observe the network traffic and the VM resource load.
- Service provider side: on this side, the resource monitoring module is used to observe the VM workload. We use Locust, a Python-based load testing tool, to generate user requests. It is completely event-based, and it is therefore possible to support thousands of concurrent users on a single machine [8].

The scenario is defined as follows. In this paper, we define (a) peak time as 8:00 AM-5:00 PM and (b) off-peak time as 5:00 PM-8:00 AM. The evaluation monitors the workload of the VMs hosted in GCE. During peak time, we generate high workload traffic and assign all the VMs to satisfy user requests. During off-peak time, we assign only one VM to satisfy user requests and generate low workload traffic.

a. Experimental Setup

To simulate our approach, we use the parameters shown in Table 2. In this experiment, we measure parameters such as resource utilization, successful requests and failure rate for different numbers of user requests and different numbers of active VMs. The first experiment measures CPU utilization. Figure 4 shows that more resources are needed during peak time (represented by a large number of requests). Not only are CPU resources needed; demand also depends on the network bandwidth. Therefore, more requests must be handled by activating more VMs. The second experiment is shown in Figure 5. A larger number of requests cannot be handled by only one VM due to its limited resources; hence, the larger the number of requests, the more VMs are needed. The results show that the number of successful requests depends on the number of active VMs handling them. The relation to Figure 5 can be seen in Figure 6, which shows the failure rate of the requests sent to the VMs. The failure rate can be reduced by allocating more VMs to satisfy a huge number of requests. As the figure shows, more VMs are needed as the number of requests grows in order to achieve zero failures.

Conclusion and Future Work

In this paper, we proposed a monitoring and resource allocation module for service providers to maintain their VMs hosted on a cloud server. It is important to observe the VM workload before publicly releasing a service, so that the service provider can estimate the resources needed at peak or off-peak time in order to satisfy user requests with optimal resource utilization. To evaluate our approach, we used Google Compute Engine as the VM cloud server and Locust as the load generator, and then statically allocated appropriate resources based on the scheduled time. The results of these experiments show that the higher the number of requests, the more resources are needed; more allocated resources are able to handle user requests with a minimal failure rate.
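The paper does not reproduce its Locust test script; the sketch below shows a minimal locustfile of the kind that could drive such an experiment. It uses the current Locust API (HttpUser, @task, between); Locust versions contemporary with this work exposed the same idea through HttpLocust and TaskSet instead. The target path and wait times are assumptions.

# Minimal locustfile sketch: simulated users repeatedly request the web
# server's index page. Run with, e.g.:
#   locust -f locustfile.py --host http://<vm-external-ip>
from locust import HttpUser, task, between

class WebsiteUser(HttpUser):
    # Each simulated user waits 1-5 seconds between requests.
    wait_time = between(1, 5)

    @task
    def index(self):
        self.client.get("/")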
The proposed approach can only monitor the VM workload based on a given schedule, and this metric is very limited. To adapt dynamically to real-time traffic conditions, more flexible metrics need to be added. The future direction of this work is therefore to extend the scaling method by applying dynamic resource allocation.
Computing the inducibility of B cell lineages under a context-dependent model of affinity maturation: Applications to sequential vaccine design

Abstract

A key challenge in B cell lineage-based vaccine design is understanding the inducibility of target neutralizing antibodies. We approach this problem through the use of detailed stochastic modeling of the somatic hypermutation process that occurs during affinity maturation. Under such a model, sequence mutation rates are context-dependent, rendering standard probability calculations for sequence evolution intractable. We develop an algorithmic approach to rapid, accurate approximation of key marginal sequence likelihoods required to inform modern sequential vaccine design strategies. These calculated probabilities are used to define an inducibility index for selecting among potential targets for immunogen design. We apply this approach to the problem of choosing targets for the design of boosting immunogens aimed at elicitation of the HIV broadly-neutralizing antibody DH270min11.

Introduction

Vaccination aims to induce antibodies - the secreted versions of B cell receptors - that can neutralize pathogens and thus provide protection against infection. In natural infection, B cells evolve their antigen receptors to specifically recognize pathogens through progressive rounds of mutation and selection based on affinity to antigen. However, rapidly mutating viruses such as HIV, influenza and SARS-CoV-2 can escape the B cell response through viral diversification. In such cases, it is of great interest to develop vaccination strategies to elicit broadly neutralizing antibodies (bnAbs). However, bnAbs are rarely elicited in infection due to a variety of factors [1]. For example, HIV bnAbs typically originate from B cells with low precursor frequencies in the human B cell receptor repertoire [2,3]. They also typically have high numbers of mutations, some of which may be essential for broad neutralization but be made infrequently by activation-induced cytidine deaminase (AID), the enzyme responsible for mutating the B cell receptor [4]. Acquisition of these improbable mutations is a key bottleneck for bnAb induction [4,5]. In these situations, traditional vaccine design strategies have proven ineffective at eliciting bnAbs. This has motivated the development of advanced vaccine design strategies which aim to use known bnAbs as templates to design immunogens that can direct B cell evolution towards their induction [2,6-9].

One promising HIV vaccine design approach, called sequential prime-boosting, aims to first induce low-frequency bnAb B cell precursors with a priming immunogen, and then mature those clonal lineages with one or more additional, distinct boosting immunogens. A key step of this approach is the inference of a reconstructed B cell receptor (BCR) sequence of the bnAb precursor, referred to as the unmutated common ancestor (UCA), from observed, clonally related sequences [6,10]. Recent progress has been made on the first component of this sequential prime-boost approach through the design of priming immunogens that can initiate precursors of bnAb B cell clonal lineages [11]. However, the design of boosting immunogens that can select for sets of required, improbable mutations in the initiated bnAb B cell clonal lineages remains a central challenge [2].
Current experimental approaches to developing sequential prime-boost vaccine regimens are highly labor- and time-intensive, relying on cycles of immunogen design, immunization of animal models, and B cell receptor repertoire sequencing to assess whether desired mutations were elicited [5,8,12]. An alternative approach, which we adopt here, is the use of computational models to accelerate the design process. Specifically, we consider the use of stochastic models of the somatic hypermutation process to better inform the design of immunogens and the development of vaccine regimens that efficiently guide a maturing bnAb response by vaccination.

Critical to this approach is the recognition that fixation of mutations is the product of two separate forces: mutation and selection. While vaccination with carefully chosen immunogens can introduce targeted selection, it is hypothesized that one of the primary difficulties in eliciting bnAbs (in HIV, say) is the low frequency with which essential mutations occur [4]. If this frequency is sufficiently low, there are unlikely to be any instances of the mutation in the B cell population on which this selection can act. The use of stochastic models of the somatic hypermutation process to understand this frequency is therefore critical. A key step in formalizing this problem is answering the question: given an ancestral B cell receptor sequence $x$ within a B cell clonal lineage, what is the probability of obtaining a specified target sequence $y$ along the lineage under a realistic model of affinity maturation? However, because mutation of B cell receptors is sequence context-dependent, due to biases in mutational targeting and substitution by AID, mutations do not arise independently during B cell evolution. This context dependence raises the key technical challenge to be addressed in this paper, namely that the context-dependent nature of somatic hypermutation makes standard approaches for computing $P(x \to y)$ under molecular evolution models intractable.

Challenges in Sequential Prime Boost Vaccine Design

Practical limitations on the number of boosting immunogens that can be administered mean that optimizing the sets of mutations to be selected by each immunogen is an important consideration in the design of a sequential prime-boosting vaccine regimen. Due to the sequence context-dependence of somatic hypermutation, the order in which the immunogens are administered and select for desired mutations within a vaccine regimen will also affect the probability of eliciting a bnAb response. Additionally, since it is not known a priori how successful the chosen immunogens will be, the vaccine designer must make choices about which mutations to target first without information about the complete vaccine regimen. The ability to calculate $P(x \to y)$ enables us to compute more generally $P(x^{(1)} \to x^{(2)} \to \cdots \to x^{(p-1)} \to y)$ for any ordered set of intermediate sequences $x^{(2)}, \ldots, x^{(p-1)}$,
a key step for such design choices. The ability to accurately calculate these full-length BCR sequence evolution probabilities, and to rank BCR sequences by their inducibility, has a number of practical applications in the sequential vaccine design process. We outline three common scenarios encountered in designing sequential prime-boost vaccine strategies where such calculations can be used to inform the design of specific immunogens:

(Design scenario I) In ab initio lineage-based vaccine design, the first step is to design a priming immunogen that optimally engages bnAb precursor B cells and maximizes the probability that B cells will evolve along bnAb maturation pathways. We assume that the priming immunogen already binds with high affinity to the unmutated precursor bnAb B cells. Given that a limited number of mutations can be selected by any one immunogen, the challenge is to identify the set of mutations to target for selection first, in order to give the B cells the highest chance of eventually maturing into bnAbs.

(Design scenario II) At intermediate stages of the design process, the vaccine designer has developed an initial set of immunogens in a vaccine regimen and has evaluated the B cell response to this partial regimen by sequencing the antibody repertoire of immunized bnAb UCA knock-in mice [5,8,12-14]. By measuring the frequency of occurrence of targeted mutations in immunized knock-in mice, the vaccine designer can determine which of the mutations necessary for broad neutralization are induced by immunization with the regimen. Often the regimen is partially successful, in that a subset of the necessary mutations are selected, leaving others still in need of selection by the addition of subsequent immunogens to the regimen. During this iterative process, the probability of evolving the target bnAb conditional on the observed intermediate(s) can be used to identify the set of mutations to target for selection with the next boosting immunogen, in order to maximize the probability of bnAb induction.

2 Methods

Background and Motivation

The ARMADiLLO model [4] is a recently developed model for forward simulation of the somatic hypermutation process in affinity maturation. The simulation procedure uses a set of mutation rates and base frequencies derived from NGS data [15]. Unlike the continuous-time Markov chain (CTMC) models common in molecular evolution, sites in the ARMADiLLO model evolve in discrete jumps, a process that can be viewed as the skeleton of a time-inhomogeneous CTMC. However, a critical aspect of the somatic hypermutation model encoded in ARMADiLLO is the sequence context-dependence of mutation and substitution rates, arising from the sequence-targeting preferences of the AID enzyme. Just as in dependent-site models of sequence evolution [16-18], calculating $p(y \mid x)$ under the ARMADiLLO model is computationally difficult due to the dependence among sites, which precludes the use of Felsenstein's pruning algorithm [19].
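As a brief aside before the model details: once the single-stage probabilities $p(y \mid x)$ developed below are computable, the multi-stage pathway probabilities $P(x^{(1)} \to \cdots \to y)$ introduced in Section 1 factorize, by the Markov structure of the mutation process, into a product of stage probabilities. A toy sketch follows; the sequence names and probability values are hypothetical placeholders.

# Sketch: under the model's Markov structure, the probability of an ordered
# maturation pathway through fixed intermediates is the product of the
# stage transition probabilities. Names and values below are hypothetical.
def pathway_prob(sequences, stage_prob):
    """P(x1 -> x2 -> ... -> y) as a product of stage probabilities."""
    p = 1.0
    for a, b in zip(sequences, sequences[1:]):
        p *= stage_prob(a, b)
    return p

# Hypothetical stage probabilities for a three-stage pathway.
toy = {("UCA", "I1"): 1e-2, ("I1", "I2"): 5e-3, ("I2", "bnAb"): 1e-1}
print(pathway_prob(["UCA", "I1", "I2", "bnAb"], lambda a, b: toy[(a, b)]))
# -> 5e-06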
Although forward simulation of the ARMADiLLO model has been used to estimate the probability of individual mutations arising by chance [4], this approach becomes prohibitively expensive when trying to calculate $p(y \mid x)$ for a specific, full sequence $y$. In Section 2.3, we develop an importance sampling algorithm for tractably approximating $p(y \mid x)$. Our approach samples mutation orderings (trajectories) of observed mutations rather than entire path histories as in previous dependent-site models, and handles the multimodality that can plague other sampling approaches. Section 3 demonstrates that our method can be used to approximate a large number of such transition probabilities quickly.

The ARMADiLLO Model of Somatic Hypermutation

Let $x = (x_1, x_2, \ldots, x_n)$ and $y = (y_1, y_2, \ldots, y_n)$ denote two nucleotide sequences. Let $d_H(x, y) = \sum_{i=1}^{n} \delta(x_i \neq y_i)$ denote the Hamming distance between $x$ and $y$, and let $\bar{x}_i = (x_{i-2}, x_{i-1}, x_{i+1}, x_{i+2})$ for $i \in \{1, \ldots, n\}$ denote the two-nearest-neighbor context of site $x_i$. (For $i = 1$ and $i = n$, we assume that the two left and right flanking nucleotides, respectively, are fixed.) Each site $x_i$ is assigned a mutability score $m(x_i; \bar{x}_i)$ and a set of substitution probabilities $\pi(x_i, x_i'; \bar{x}_i)$, where $\pi(x_i, x_i; \bar{x}_i) = 0$ and $\sum_{b \in N} \pi(x_i, b; \bar{x}_i) = 1$ for $N = \{A, G, C, T\}$. The ARMADiLLO procedure is given in Algorithm 1; each mutation step is governed by a probability matrix $Q^{(i)}$ whose entries are the probability of selecting site $x_i$ for mutation and of then transitioning to nucleotide base $b$. For information on how the mutability scores and transition probabilities are estimated, see [15].

Algorithm 1 (ARMADiLLO). Input: initial sequence $x$ and mutation number $r \geq 1$.

Algorithm 1 was used in [4] to estimate the marginal probability $p(y_i \mid x)$ of a codon at a given location in $x$ transitioning to a target amino acid, by simulating many terminal nucleotide sequences $y^{(1)}, \ldots, y^{(M)}$ and calculating the proportion $\hat{p}_{y_i} = \frac{1}{M}\sum_{j=1}^{M} \mathbf{1}_{y_i}(y^{(j)})$ of times that the target amino acid occurs at the desired location. Because the emphasis is on the marginal probability $p(y_i \mid x)$ at a single site, a reliable estimate of this probability appears to be obtainable with a feasible number of simulations. However, as noted, this approach does not scale to computation of $p(y \mid x)$ for full-length sequences $y$. Instead, we develop an efficient importance sampling algorithm for evaluating full-length sequence probabilities of this form.

Estimation using Importance Sampling

Given two sequences $x$ and $y$ such that $r = d_H(x, y)$, we wish to approximate $p(y \mid x)$ under Algorithm 1. To do this, we sample orderings in which the $r$ mutations occur in the transition from $x$ to $y$. More formally, let $S = (s_1, \ldots, s_r)$ with $s_j \in \{1, \ldots, n\}$ be the set of sites at which the two sequences differ, and let $S_r$ be the set of all permutations of the elements of $S$. Writing $x^{\sigma(0:k)}_i$ for the value of the $i$th nucleotide after $k$ updates to $x$ according to the permutation $\sigma$ (so, e.g., $x^{\sigma(0)}_i = x_i$ and $x^{\sigma(0:r)}_i = y_i$), the goal is to approximate

$$p(y \mid x) = \sum_{\sigma \in S_r} \pi(\sigma), \quad \pi(\sigma) = \prod_{j=1}^{r} \frac{m\big(x^{\sigma(0:j-1)}_{\sigma(s_j)}; \bar{x}^{\sigma(0:j-1)}_{\sigma(s_j)}\big)}{\sum_{i=1}^{n} m\big(x^{\sigma(0:j-1)}_{i}; \bar{x}^{\sigma(0:j-1)}_{i}\big)} \; \pi\big(x^{\sigma(0:j-1)}_{\sigma(s_j)}, y_{\sigma(s_j)}; \bar{x}^{\sigma(0:j-1)}_{\sigma(s_j)}\big). \quad (2)$$

We refer to $\sigma \in S_r$ as a 'path' from $x$ to $y$. Let $Q(\sigma) = Z_q^{-1} q(\sigma)$, where $Z_q = \sum_{\sigma \in S_r} q(\sigma)$, denote a distribution on $S_r$ (the instrumental distribution) which can be easily sampled. Then the corresponding importance sampling estimator for (2) is

$$\hat{p}_{\mathrm{IS}}(y \mid x) = \frac{Z_q}{N} \sum_{l=1}^{N} \frac{\pi(\sigma^{(l)})}{q(\sigma^{(l)})} \quad (3)$$

for $\sigma^{(1)}, \ldots, \sigma^{(N)} \overset{\mathrm{iid}}{\sim} Q(\sigma)$.
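To make the procedure concrete, the sketch below implements one context-dependent mutation step of the kind Algorithm 1 iterates: a site is chosen with probability proportional to its context-dependent mutability, then substituted according to a context-dependent substitution distribution. The toy mutability and substitution functions are placeholders; the real model uses empirically estimated pentamer scores [15].

# Sketch of a single ARMADiLLO-style mutation step. The mutability and
# substitution models below are toy placeholders, not the empirically
# estimated pentamer tables the real model uses.
import random

BASES = "AGCT"

def mutability(seq: str, i: int) -> float:
    """Toy context-dependent mutability: contexts containing 'GC' get a
    boost (a placeholder for real hot/cold-spot scoring)."""
    ctx = seq[max(0, i - 2): i + 3]
    return 5.0 if "GC" in ctx else 1.0

def substitution_probs(seq: str, i: int) -> dict:
    """Toy substitution distribution over the three non-identical bases."""
    others = [b for b in BASES if b != seq[i]]
    return {b: 1.0 / len(others) for b in others}

def mutate_once(seq: str) -> str:
    # 1) pick a site with probability proportional to its mutability ...
    weights = [mutability(seq, i) for i in range(len(seq))]
    i = random.choices(range(len(seq)), weights=weights)[0]
    # 2) ... then substitute it according to the context-dependent
    #    substitution probabilities (identity substitution has mass 0).
    probs = substitution_probs(seq, i)
    b = random.choices(list(probs), weights=list(probs.values()))[0]
    return seq[:i] + b + seq[i + 1:]

seq = "ATGGCCTGGTAC"
for _ in range(3):                      # r = 3 mutation events
    seq = mutate_once(seq)
print(seq)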
The performance of this importance sampling algorithm depends strongly on the choice of $q$, with $\max_\sigma \pi/q$ controlling $\mathrm{Var}(\hat{p}_{\mathrm{IS}}(y \mid x))$. Because $\pi$ is unnormalized here, we require $Z_q$ in order to recover $Z_\pi := p(y \mid x)$. Consequently, we need to choose $Q$ complex enough to approximate $\pi$ reasonably well but simple enough that $Z_q$ can be feasibly computed.

Recall that under the ARMADiLLO model the mutability score $m(x_i; \bar{x}_i)$ for a site $x_i$ depends only on its two-nearest-neighbor context $\bar{x}_i$; however, $x_i$ mutates to $a \in N$ with probability equal to its normalized mutability score $m(a; x, \bar{x}_i)$. This makes the normalization $Z_\pi$ for the ARMADiLLO path distribution intractable, because the product of normalization terms appearing in $\pi(\sigma)$ induces long-range dependence among sites that do not have overlapping pentamers; consequently, the probability that $x_i$ mutates depends on all sites $x_{1:n} := (x_1, \ldots, x_n)$. This suggests replacing $m(a; x, \bar{x}_i)$ with $m(a; \bar{x}_i)$ in $\pi(\sigma)$ to make the normalizing constant tractable; we use this form as our instrumental distribution $Q$. In particular, we set

$$q(\sigma) = \prod_{j=1}^{r} m\big(y_{\sigma(s_j)}; \bar{x}^{\sigma(0:j-1)}_{\sigma(s_j)}\big)\, \pi\big(x^{\sigma(0:j-1)}_{\sigma(s_j)}, y_{\sigma(s_j)}; \bar{x}^{\sigma(0:j-1)}_{\sigma(s_j)}\big). \quad (4)$$

Since the context $\bar{x}_i$ plays the most important role in determining the mutation probability for $x_i$, this provides an instrumental distribution $Q$ that closely approximates $\pi$ while also yielding a tractable normalizing constant $Z_q$.

To see that $Z_q$ is tractable, call two subsets $s, s' \subset S$ separated if for any $s \in s$ and $s' \in s'$ we have $|s - s'| > 2$. Under $q$, the probability of a permutation is invariant with respect to the re-ordering of mutations belonging to different separated sets. More formally, let $(s_1, \ldots, s_k)$ be a partition of $S$ into separated sets. Let $r_j = |s_j|$ and let $S_{r_j}$ be the set of all permutations of the elements of $s_j$ (so $|S_{r_j}| = r_j!$), with $\sigma_j = (\sigma_j(s_{j1}), \ldots, \sigma_j(s_{jr_j}))$ denoting a corresponding permutation $\sigma_j \in S_{r_j}$. Let $\bar{S}_r = S_{r_1} \times \cdots \times S_{r_k}$ be the product group consisting of all $k$-tuples $(\sigma_1, \ldots, \sigma_k)$ with $\sigma_j \in S_{r_j}$. Provided $q(\sigma_i) \neq q(\sigma_j)$ for $i \neq j$ (this will generally be the case), the equivalence classes $[\sigma_i]$ of equal-probability permutations under $q$ each contain $C(r, k) = r!/(r_1! \cdots r_k!)$ elements (the number of interleavings of the $k$ separated blocks), so that

$$Z_q = C(r, k) \sum_{\sigma \in \bar{S}_r} q(\sigma). \quad (6)$$

Putting it all together, we have by (2), (3), and (6) the estimator

$$\hat{p}_{\mathrm{IS}}(y \mid x) = \frac{C(r, k) \sum_{\sigma \in \bar{S}_r} q(\sigma)}{N} \sum_{l=1}^{N} \frac{\pi(\sigma^{(l)})}{q(\sigma^{(l)})} \quad (7)$$

for $\sigma^{(1)}, \ldots, \sigma^{(N)} \overset{\mathrm{iid}}{\sim} Q(\sigma)$. Sampling from $q$ proceeds by enumerating all $\sigma \in \bar{S}_r$ and evaluating (4). $Z_q$ is then obtained by summing and multiplying by $C(r, k)$ as in (6). We can then sample an index $i$ with probability proportional to $q(\sigma_i)$ for $\sigma_i \in \bar{S}_r$, corresponding to one of the equivalence classes $[\sigma_i]$, and finally sample uniformly from the equivalence class $[\sigma_i]$ to obtain the desired sample from $q$.

Results

We begin with a simulation study to evaluate the performance of our approach, followed by the application to design calculations for HIV-1 immunogen design.

Simulation Study

We first evaluate the approximation algorithm of Section 2 on a set of example sequences where the true transition probabilities can be calculated exactly. Test sequences are chosen to span a range of values of both the largest separated set $r^\star := \max_j r_j$ and the quantity $\alpha := r^2/n$, which quantifies the number of mutations $r$ relative to the size of the sequence $n$. (We expect the variance of our estimator to increase with $\alpha$.) This set of test sequences demonstrates the performance of the algorithm as the complexity of the problem varies. We investigate the performance of the estimator (7) as a function of $r^\star$ and $\alpha$, measured in terms of both the coefficient of variation and the effective sample size,

$$\mathrm{ESS} = \frac{\big(\sum_{l=1}^{N} w_l\big)^2}{\sum_{l=1}^{N} w_l^2}, \quad w_l = \frac{\pi(\sigma^{(l)})}{q(\sigma^{(l)})}.$$
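A compact sketch of the estimator in (7) on a toy problem small enough to check against exact enumeration, including the effective sample size defined above. The path probabilities pi_path and q_path are stand-ins for the context-dependent ARMADiLLO factors, not the actual model.

# Sketch of the estimator: p_hat = (Z_q / N) * mean(pi/q), plus the
# effective sample size of the importance weights. pi_path and q_path are
# toy stand-ins for the context-dependent ARMADiLLO path probabilities.
import itertools, math, random

SITES = (0, 3, 7)                       # differing sites (r = 3)

def pi_path(order):
    """Toy unnormalized target over mutation orderings."""
    return math.prod(1.0 / (1 + s + j) for j, s in enumerate(order))

def q_path(order):
    """Toy instrumental density: close to pi, but simpler."""
    return math.prod(1.0 / (1 + s) for s in order)

orders = list(itertools.permutations(SITES))
Z_q = sum(q_path(o) for o in orders)    # tractable by construction

N = 10_000
draws = random.choices(orders, weights=[q_path(o) for o in orders], k=N)
w = [pi_path(o) / q_path(o) for o in draws]

p_hat = Z_q * sum(w) / N
ess = sum(w) ** 2 / sum(x * x for x in w)
p_exact = sum(pi_path(o) for o in orders)
print(f"estimate {p_hat:.5f}  exact {p_exact:.5f}  ESS {ess:.0f}")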
In all test sequences, the number of mutations is kept low ($r = 10$) so that the exact value of the transition probability can be computed for comparison. For each test sequence, the algorithm was run 1000 times using a sample size $N = 1000$ in each run. Table 1 shows the results, including the exact value and the mean and relative standard deviation of the 1000 estimates. Across all values of $r^\star$ and $\alpha$, the transition probability is estimated accurately up to at least the second significant digit. The coefficient of variation grows with $\alpha$ as expected, and appears to be unaffected by $r^\star$. The most difficult case is when $\alpha$ and $r^\star$ are large (Table 1, first row), whereas the ideal case is when both of these quantities are small (last row), and we see this difficulty reflected in the average effective sample size.

Table 1. Results for estimating transition probabilities on test sequences. 1000 runs of the algorithm were performed for each sequence, each using $N = 1000$. Shown are the exact calculations (True prob.), the mean estimate over the 1000 replicates, the mean effective sample size, and the standard deviation of the estimates relative to the mean (CoV). We see that probabilities are estimated accurately across all values of $r^\star$ and $\alpha$, despite the problems becoming more challenging (lower ESS, higher CoV) when $\alpha$ and $r^\star$ are large.

Application to B-Cell Evolution

We now return to the problem of sequential immunogen design and the scenarios described in Section 1, in which the transition probabilities between the UCA sequence and target mature antibody sequences can be used to inform the design of specific immunogens.

For example, Fig 1 shows (see also Table 3 below) that the order of mutation selection in a clonal lineage can significantly affect the probability of obtaining specific bnAb maturation pathways. Since multiple immunogens may be required to direct the evolution of B cells when a large number of mutations must be acquired for bnAb activity [13,20], the ordering of prime and boosting immunogens, each selecting distinct sets of mutations, within a lineage-based vaccine regimen may critically affect the probability of successful bnAb elicitation. It is therefore of great interest to determine a mutation ordering that maximizes this elicitation probability for a specified target bnAb. This in turn can be used to develop a sequence of immunogens that targets bnAb mutations in the most probable order of acquisition.

Transition probabilities for protein sequences

In calculating maturation pathway probabilities, consideration of both the nucleotide and the amino acid sequences is critical. The UCA sequence is the result of recombination of germline-encoded gene segments and the addition of non-templated nucleotides at the gene segment junctions, providing a defined starting nucleotide sequence. However, selection acts upon the BCR at the protein level, where binding to the immunogen is determined by the amino acid sequence, and thus all nucleotide sequences giving rise to the target bnAb amino acid sequence must be accounted for. Because AID acts at the nucleotide level, the ARMADiLLO model describes transitions between nucleotide sequences; marginalization is then required to obtain probabilities of amino acid sequences. In what follows, then, we compute the probability that a known unmutated common ancestor (UCA) nucleotide sequence transitions to a target amino acid sequence. For purposes of vaccine design, it will also be of interest to consider "intermediate" amino acid sequences along potential mutation pathways, as potential targets for sequential immunogen design. We formalize these calculations below.
Let $a(y)$ denote the amino acid sequence arising from a nucleotide sequence $y$ and $A(y) = \{z : a(z) = a(y)\}$ the equivalence class of nucleotide sequences giving rise to the same amino acid sequence. Denote by $a_i(y)$ the $i$th amino acid in $a(y)$, and let $c_i(y)$ denote the number of codons that encode $a_i(y)$. We estimate transition probabilities from an inferred UCA (initial) nucleotide sequence $x$ to any terminal nucleotide sequence $z \in A(y)$, where $y$ is an observed bnAb (target) nucleotide sequence.

Let $m = \{m_1, \ldots, m_k\}$ with $m_i \in \{1, \ldots, n/3\}$ be the set of amino acid sites where $a(x)$ and $a(y)$ disagree, and let $m_1, \ldots, m_p$ be all $\binom{k}{q}$ subsets of $m$ of size $q$, with elements $m_i = \{m_{i_1}, \ldots, m_{i_q}\}$. Define $N(x, m_i, y) = \{n \in N^n : n_{m_i^c} = x_{m_i^c},\ a_{m_i}(n) = a_{m_i}(y)\}$, where $N$ denotes the set of nucleotides. (The condition $n_{m_i^c} = x_{m_i^c}$ is a slight abuse of notation, indicating that all the codons outside of the set $m_i$ are equal.) So $N(x, m_i, y)$ is the set of nucleotide sequences of length $n$ which match $x$ in all positions $m_i^c$ and which map to the same amino acids as $y$ in positions $m_i$; it is a set due to the redundancy of the genetic code. Denote by $x_{ij}$ the $j$th element of $N(x, m_i, y)$. So $x_{ij}$ is an intermediate nucleotide sequence on a trajectory from $x$ to some sequence in $A(y)$ which is equal to $x$ outside of (the codons indexed by) $m_i$ and matches $a(y)$ at all sites $m_i$ (i.e. $a_{m_i}(x_{ij}) = a_{m_i}(y)$). The superscript $j$ indexes the possible combinations of codons giving rise to $a_{m_i}(x_{ij}) = a_{m_i}(y)$, of which there are $J_i := \prod_{r=1}^{q} c_{m_{i_r}}(y)$ for $i = 1, \ldots, p$.

Let $z$ be an end-of-trajectory sequence for $x_{ij}$, meaning that $z \in A(y)$ and $z$ differs from $x_{ij}$ only at the nucleotide positions corresponding to the amino acid sites $m \setminus m_i$ of the mutations not yet acquired by $x_{ij}$; denote the set of such sequences by $Z(x, y, i, j)$. We estimate the transition probability from $x_{ij}$ to each $z \in Z(x, y, i, j)$ by assuming that exactly $r = d_H(x_{ij}, z)$ mutational events occur (i.e. unmutated nucleotide positions remain fixed, and there are no reversions). Starting from intermediate $x_{ij}$ for $i = 1, \ldots, p$ and $j = 1, \ldots, J_i$, this yields the transition probability

$$p(A(y) \mid x_{ij}) = \sum_{z \in Z(x, y, i, j)} p\big(z \mid x_{ij},\, r = d_H(x_{ij}, z)\big).$$

Similarly, the probability of obtaining the initial amino acid mutations in $m_i$ is given by

$$p\big(x_{ij} \mid x,\, r = d_H(x, x_{ij})\big),$$

where again the conditioning indicates that exactly $r$ mutations occur in the transition from $x$ to $x_{ij}$, with all other nucleotide positions remaining fixed. Finally, we calculate the joint probability of first obtaining the initial amino acid mutations $m_i$ on the way to obtaining the full set of mutations $m$ as the joint probability

$$p\big(x_{ij} \mid x,\, r = d_H(x, x_{ij})\big)\, p\big(A(y) \mid x_{ij}\big).$$

Conditioning on the number of mutational events that occur and assuming unmutated nucleotide positions remain fixed in our estimates amounts to ignoring synonymous mutations and multiple-mutation reversions. We make these assumptions for computational tractability. Indeed, if $n_a = n/3$ is the length of $a(y)$, then the number of terminal sequences that give rise to $a(y)$ can grow exponentially in $n_a$ via synonymous mutations, due to the redundancy in the genetic code.

Inducibility of a minimal set of critical mutations in an HIV bnAb

We applied this approach to study the inducibility of critical mutations in DH270.6, an HIV bnAb. Here the target sequence $y$ is the heavy chain sequence of DH270min11, an antibody engineered from the DH270.6 bnAb to contain only those amino acid mutations determined to be functionally important for neutralization breadth [21]. Here $x$ is the corresponding estimated UCA sequence for the DH270.6 clone [22], obtained by clonal lineage reconstruction using Clonalyst [10]. (Alternatively, a distribution over UCAs accounting for reconstruction uncertainty may be obtained by probabilistic methods such as Partis [23-25]; see Discussion.) In this case, $x$ and $y$ differ by seven nucleotides, and $a(x)$ and $a(y)$ differ by six amino acids. The DH270min11 mutations are given in Table 2.
Notice that the largest separated set is of size $r^\star = 2$ and $\alpha = 0.005$, since $n = 382$. Of the observed amino acid mutations, only one requires multiple nucleotide substitutions in the corresponding codon. We first consider the critical mutations individually in turn. So $q = 1$ and $m_i$ consists of only a single amino acid location (i.e. we let $m_i = \{m_i\}$, $i = 1, \ldots, k$, and $J_i = c_{m_i}(y)$), so $x_{ij}$ is equal to $x$ except at a single amino acid site. In total, there are $\sum_{i=1}^{6} c_{m_i}(y) = 19$ intermediate sequences $x_{ij}$, corresponding to the total number of codons that encode the six amino acids where $a(x)$ and $a(y)$ differ. Table 3 lists the calculated path probabilities to DH270min11 conditional on each of the six individual amino acid mutations occurring first. To aid in the interpretability of results, we define an inducibility index $I(y) = \lfloor -\log_{10} P(x \to y) \rceil$, where $\lfloor \cdot \rceil$ denotes rounding to the nearest integer. This index equates sequences whose evolution probabilities are of the same order of magnitude, and facilitates direct comparison between potential targets with practically significant differences in inducibility. A lower inducibility index therefore indicates a sequence that has a higher a priori probability of arising in the absence of selection.

Table 3. Transition probability results for q = 1. Initial amino acid mutations and corresponding codons are given in the first and second columns. The starred codons correspond to the actual codons observed in the DH270min11 sequence. The third column gives the probability of transitioning to any nucleotide sequence $z \in A(y)$ (i.e. any nucleotide sequence that gives rise to the same amino acid sequence as $y$) conditional on first obtaining the amino acid mutation $m_i$ via codon $j$. The fourth column gives the estimate of the probability of obtaining the initial amino acid mutation in $m_i$ via codon $j$. The weighted estimate in the fifth column is the product of the third and fourth columns. The sixth column is the average effective sample size of the transition probability estimates.

We see that the transition probability $p(A(y) \mid x_{ij})$ to the terminal DH270min11 sequence can be maximized by acquiring the G110Y mutation first. G110Y requires two base substitutions to occur within its codon for the amino acid transition from glycine (G) to tyrosine (Y). Comparing the calculated probabilities, we conclude that a vaccine regimen that successfully selects for the G110Y mutation first would increase the calculated probability of induction by five orders of magnitude, an order of magnitude (or more) improvement over selecting any of the other mutations first.

The vaccine designer can then use this information (Design scenario I) to aim for a priming immunogen that both binds with high affinity to the DH270 UCA (to initiate the clonal lineage) and binds with even higher affinity to the UCA+G110Y mutant, in order to select for G110Y and guide the B cell response along the most probable bnAb maturation pathway.

As noted, the G110Y mutation requires two base changes. Multiple required base changes within a codon will typically result in a low transition probability for a targeted amino acid. However, this also provides an opportunity for the vaccine designer to accelerate its acquisition. For example, for the G110Y mutation, a single base change in codon 110 (GGT, glycine) of the DH270.6 UCA can transition through either GAT (aspartic acid) or TGT (cysteine). Our calculations indicate that the transition through aspartic acid is approximately 1.5x more probable than the alternative path through cysteine.
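The codon bookkeeping above is easy to reproduce. The sketch below tabulates the codon multiplicities $c_{m_i}(y)$ for the six DH270min11 target residues from the standard reverse codon table, recovers the 19 single-mutation intermediates, and evaluates the inducibility index for a given probability (the probability passed in is a placeholder, not a value from Table 3).

# Counting codon multiplicities c_i(y) for the six DH270min11 target
# residues, and the inducibility index I(y) = round(-log10 P(x -> y)).
import math
from itertools import combinations

# Standard reverse codon table, restricted to the target amino acids.
CODONS = {
    "D": ["GAT", "GAC"],
    "M": ["ATG"],
    "T": ["ACT", "ACC", "ACA", "ACG"],
    "R": ["CGT", "CGC", "CGA", "CGG", "AGA", "AGG"],
    "Y": ["TAT", "TAC"],
}
# Target residues of the six mutations G31D, I51M, S55T, G57R, R98T, G110Y.
targets = {"31D": "D", "51M": "M", "55T": "T", "57R": "R", "98T": "T", "110Y": "Y"}

c = {site: len(CODONS[aa]) for site, aa in targets.items()}
print(sum(c.values()))                         # -> 19 intermediates for q = 1

# Number of intermediate sequences J_i for each initial pair (q = 2).
J = {pair: c[pair[0]] * c[pair[1]] for pair in combinations(targets, 2)}
print(J[("31D", "110Y")])                      # -> 4 (2 codons x 2 codons)

def inducibility_index(p):
    """I(y) = round(-log10 P); lower means more inducible a priori."""
    return round(-math.log10(p))

print(inducibility_index(1e-7))                # placeholder probability -> 7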
Therefore, adding an immunogen to the vaccine regimen that selects for the intermediate amino acid state of aspartic acid at position 110 could accelerate induction of the critical and highly improbable G110Y mutation.

Multiple simultaneous mutations in immunogen design calculations

In design scenarios I and II, we may often wish to consider the induction of multiple simultaneous mutations. Here we consider initial target mutation sets of size q = 2 or q = 3. Tables 4 and 5 list the path probabilities conditional on all initial pairs and triplets of DH270min11 mutations. We see that different initial subsets differ by orders of magnitude in their probabilities, while the overall joint probabilities obtained as the products differ only by small constants (this also reassures us that the approximation error is small in each case). We observe that (31D, 55T), (31D, 51M), and (31D, 98T) are the most probable pairs of mutations to arise in the absence of a selecting immunogen (they have the highest "inducibility"), and (31D, 51M, 55T) and (31D, 55T, 98T) the most probable triplets. In contrast, (57R, 110Y), (98T, 110Y) and (51M, 110Y) are the least probable pairs to arise by chance, and we therefore might expect them to be more difficult to elicit via immunogen selection; similarly for the triplets (57R, 98T, 110Y) and (51M, 57R, 110Y). Conversely, if we were able to design a priming immunogen to elicit the mutation pair (57R, 110Y) or triplet (57R, 98T, 110Y), it would be of high impact, as we would expect this to maximize the probability of obtaining the full mature bnAb using a boosting immunogen.

Table 5. Transition probability results for q = 3. The first column indicates which amino acids are targeted first. The second column corresponds to the probability of transitioning to $z \in A(y)$ conditional on first obtaining the amino acid mutations in $m_i$. The third column corresponds to the probability of first obtaining the amino acid mutations in $m_i$. The fourth column corresponds to the probability of transitioning from $x$ to any $z \in A(y)$ (not the product of the second and third columns). The final column gives the average minimum effective sample size for the estimates of $p(z \mid x_{ij}, r = d_H(x_{ij}, z))$, where the minimum is across all $z \in Z(x, y, i, j)$ and the average is across $j = 1, \ldots, J_i$.

Comparison with experimental results

It is interesting to compare these results with recently obtained experimental data [20], where we immunized DH270 UCA knock-in mice with a priming immunogen, sequenced their heavy chain BCR repertoires, and measured the frequency of the six DH270min11 mutations both individually and in combination. The G110Y mutation is observed to be the second least frequent mutation selected by our priming regimen. Our probability calculations (Table 3) indicate that, of all individual mutations, G110Y selection maximizes bnAb maturation probability. Thus, adding a boosting immunogen to this vaccine regimen that can select for G110Y would be an optimal strategy for maximizing the probability of bnAb elicitation.

(Design scenario III) In our repertoire sequencing data, the I51M mutation has the lowest frequency of the six DH270min11 mutations. This contrasts with our probability calculations (Table 3), which estimate that I51M is the third most probable mutation in the absence of selection. Such differences between model calculations and observed frequencies may be indicative of selection effects; thus one explanation for the low I51M frequency observed in immunized mice is that our priming immunogen lacks selective pressure for this mutation.
Table 6. Starting from (G31D, R98T), the most frequent pair observed in immunized mice, and transitioning to the next pair.

Design scenario II

When information is available about the performance of the first immunogen(s) in a sequential boost vaccine regimen, the vaccine designer can use the estimated transition probabilities to make decisions about which boosting immunogen(s) to administer next in the series. From our experimental data, the highest-frequency mutation pair induced by our priming immunogen was (G31D, R98T). For current purposes, we assume a single immunogen can select for only one pair of mutations, and use the model calculations to choose which pair of mutations should be selected for with the next boosting immunogen. Table 6 shows the estimated transition probabilities starting from the UCA + (G31D, R98T), for all remaining $\binom{4}{2}$ pairs of mutations. We observe that the transition probability to the DH270min11 sequence is maximized upon acquiring (57R, 110Y). Thus, the optimal sequential boosting strategy is to use a first boosting immunogen to select for G57R and G110Y, followed by a second boosting immunogen to select for S55T and I51M.

Conclusion

We have introduced a model-based approach to sequential immunogen design using the ARMADiLLO model of somatic hypermutation. To calculate design-relevant marginal sequence transition probabilities in the face of context-dependent mutation, we have developed a fast and accurate Monte Carlo approximation scheme. We have demonstrated that this model performs well on test sequences of varying complexity. Finally, we have applied this approach to answer questions of great current significance regarding mutation targeting for boosting immunogen design in ongoing efforts to elicit the HIV bnAb DH270min11. These results are now being used to guide immunogen design efforts at the Duke Human Vaccine Institute.
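The boosting choice made here reduces to a small argmax over the remaining mutation pairs. The sketch below illustrates the selection rule; the probability table is a hypothetical placeholder, not the estimates reported in Table 6.

# Choosing the next boosting target: among the remaining mutations, pick
# the pair whose acquisition maximizes the estimated probability of
# completing the target set. The probabilities below are hypothetical
# placeholders, not the estimates reported in Table 6.
from itertools import combinations

acquired = {"G31D", "R98T"}
remaining = {"I51M", "S55T", "G57R", "G110Y"}

def estimated_completion_prob(next_pair):
    """Stand-in for p(A(y) | UCA + acquired + next_pair)."""
    table = {frozenset({"G57R", "G110Y"}): 1e-3}   # hypothetical values
    return table.get(frozenset(next_pair), 1e-5)

best = max(combinations(sorted(remaining), 2), key=estimated_completion_prob)
print("select next:", best)        # -> ('G110Y', 'G57R') under the toy table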
Our approach relies on knowledge of a precursor sequence. Typically this is obtained as an estimated UCA from a set of clonally related sequences. However, the process of inferring the UCA retains residual uncertainty, both due to choices regarding which sequences to include and the probabilistic information content of the sequences themselves. Although we use the Clonalyst procedure here to produce a single maximum-likelihood UCA, other methods [23] are available which account for (some of) the reconstruction uncertainty by providing a distribution over UCA sequences, and our approach easily extends to use such information. However, those methods rely on phylogenetic computations which become intractable in the face of context-dependent mutation, and so do not currently account for this aspect of the somatic hypermutation process, which has been our focus here. Applying the accurate, rapid approximation of marginal sequence likelihoods developed here to the problem of lineage reconstruction in the face of context-dependent mutation is a promising area for future work. We expect that this may only impact results in the CDRH3 region, as the posterior probabilities on V genes and even alleles are expected to be close to one, but it may indeed be important for the CDRH3 (R98T and G110Y are in the CDRH3, for example). Another source of uncertainty in the inferred UCA arises from the selection of sequences to define the clone itself; we are exploring approaches to account for context-dependence in this step as well. Finally, given the speed of computation, our approach could be extended to sets of bnAb UCAs that define an entire precursor class, i.e. a common set of precursor sequences evolving to a common set of paratopic features that define that bnAb class's ability to recognize a conserved site of vulnerability on a pathogen.

Figure 1. The use of estimated transition probabilities to inform sequential prime-boost vaccine design. A) CDRH2 sequence of the inferred UCA of the HIV bnAb CH235.12. AID hot spots (high mutability) are highlighted in red and cold spots (low mutability) in blue, with an amino acid alignment of the UCA and the mature CH235.12 sequence shown below. Dots represent sequence matches and denote unmutated amino acid positions. B) Graph of amino acid transitions for a subset of three mutations at sites 54, 55, and 57. Arrow widths are proportional to estimated transition probabilities (shown). Based on the highest-probability full path (yellow), C) the vaccine designer can choose the order of immunogens in a sequential prime-boost vaccine regimen to maximize the probability of inducing the three targeted mutations.

Table 2. Description of the amino acid changes in the DH270min11 sequence. Columns show (in order): (1) amino acid change; (2) UCA sequence codon at the change site; (3) corresponding codon observed in the DH270min11 sequence (mutations underlined); (4) minimal number of base substitutions required for the amino acid change; (5) number of possible terminal codons that encode the amino acid change.
A further table lists the calculated path probabilities to DH270min11 conditional on each of the 6 individual amino acid mutations occurring first. To aid in the interpretability of results, we define an inducibility index $I(y) = \lfloor -\log_{10} P(x \to y) \rceil$, the negative base-10 logarithm of the transition probability rounded to the nearest integer (Table 1, first row), whereas the ideal case is when both of these quantities are small (last row), and we see this difficulty reflected in the average effective sample size.
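As a worked illustration of the inducibility index (not code from the paper; the half-bracket notation denotes rounding to the nearest integer, and the probability value below is a placeholder):

import math

def inducibility_index(p: float) -> int:
    """Inducibility index I(y) = round(-log10 P(x -> y)).
    Smaller values mean the target mutation(s) are easier to induce."""
    return round(-math.log10(p))

# Placeholder probability, not a value from the paper:
print(inducibility_index(3.2e-4))  # -log10(3.2e-4) ~= 3.49 -> 3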
8,288
2023-10-17T00:00:00.000
[ "Computer Science", "Biology" ]
Farming and risk attitude

Data from a survey among Norwegian farmers (n=514), combined with tax register data and data on farming area and production, are used to explore various questions related to future farming, general attitudes to farming and risk attitude. To complement the raw data, several new variables were constructed in order to compare how variables such as investment, consumption and intensity of farm production would, in the farmers' opinion, be affected by sudden, unexpected monetary losses and gains. These new variables capture how various types of behavior are asymmetrically affected by unexpected gains and losses. In general, and as expected, farmers claim that a sudden gain would affect their behavior less than a sudden loss. The main exception is farm investment, which would be more affected by a sudden gain than a sudden loss would affect divestment. This is interesting, as it captures some important aspects of the farming lifestyle: a sudden gain is likely to be invested in the farm, but a sudden loss would, if possible, be financed without farm divestment, as this often would lead to giving up farming, or at least some important aspects of the farming lifestyle. Overall, these results are not surprising. Nevertheless, it is argued that the results could have some interesting policy implications, with regard to the design of hedging schemes, general agricultural support schemes, and rural policy.

Introduction

The purpose of this paper is to contribute to the literature on risk attitude among farmers with insights from data on Norwegian farmers and their plans, behavior, and risk attitude. Risk attitude among farmers (for instance related to investment and operation plans) is interesting for several reasons. First, farmers are dealing with real investments, with potentially important and long-term consequences. Second, for many farmers, important decisions also have an emotional impact because the decisions will affect both future lifestyle and personal identity as a farmer, for the current and possibly future generations. Examples of such decisions include decisions to quit farming, leave the farm, or change to a different farming system. On the other hand, large investments may be under consideration which would be likely to "lock in" the farmer and his family for a number of years. A fundamental starting point is the relation between farm/farmer characteristics, perceptions and decisions/behavior. Several similar studies (Flaten et al., 2005; Borges and Machado, 2012) use van Raaij's (1981) model as a building block. This model implies that different farmers will have, and state, different risk perceptions, which in turn will lead them to different decisions and different economic behavior. Hence, understanding risk perceptions in relation to differences in farm characteristics is useful for understanding decisions and economic behavior. Whereas expected utility theory traditionally has been the most commonly used model for decision-making under risk, a number of studies point out that this approach often fails to explain observed behavior (classic references include Kahneman and Tversky, 1979, and Rabin and Thaler, 2001). Hence, it is even more important to understand the decision maker's frame of reference and perceptions of risk, as these often affect decisions and behavior more than "objective" considerations about expected utility (March and Shapira, 1987).
A large body of literature exists on risk and risk management in agriculture. Useful starting points are Moschini and Hennessy (2001) and Hardaker et al. (2004). In particular, Flaten et al. (2005) conducted a survey among Norwegian farmers about their risk perceptions. This literature provides a fairly good overview of the relevant risk sources and risk attitudes, both among farmers in general and Norwegian farmers in particular. Our objective in this study is more modest: to assess whether Norwegian farmers are equally risk averse in all areas, and what consequences this might have. Flaten et al. (2005) touch upon this by asking farmers to rate their willingness to take risks compared to other farmers in three areas: production, marketing, and finance and investment. The differences between the three areas seem small, with a slightly larger comparative risk aversion in the marketing area. However, this does not necessarily translate to a higher risk aversion in that area. On the other hand, similar comparative risk aversions in these three areas do not exclude the possibility that farmers as a group have very different risk attitudes in different situations, as long as each farmer perceives his own attitude compared to his peers to be similar in all three areas. In this study, we look at (perceived) risk attitudes in several areas, and instead of comparative risk attitudes, we are concerned with how farmers would spend a large and sudden gain versus how they would finance a large and sudden loss. Our dataset only allows a proxy measure of risk aversion, but this measure fits well with earlier measurements of risk aversion, and allows us to look at differences in risk attitude in various areas, and potential consequences for policy and risk management schemes.

Materials and methods

In this paper, we utilize data from a 2008 survey among Norwegian farmers (n=514), combined with tax register data and data on farming area and production, to explore various questions related to future farming, general attitudes to farming, risk aversion, and how different factors affect the utility curve of farmers. For the development of the survey used in this study, we in part used a structure that had been used for another survey (Flaten et al., 2005; Lien et al., 2008). We also looked at questionnaires from other countries as a source of reference (e.g., Pennings and Garcia, 2001). Before the survey was conducted among the farmers, a draft questionnaire was tested, and the final questionnaire was the result of several rounds of testing. Most questions were closed, i.e., each respondent was asked to tick one or several of a number of pre-defined alternatives. Attitudes towards listed statements were mostly measured by 7-point Likert scales, where the respondent was asked to rate his or her degree of agreement/disagreement on a scale from 1 to 7. The final question was open, and respondents were asked to give comments in their own words. The response quality was very good, indicating that the questions were understandable and not too numerous. The questionnaire was sent by mail to a stratified sample (with regard to age, region, and size of farms) from the Norwegian Agricultural Authority's register of farmers receiving production support. Virtually all farmers in Norway are on this register. In total, 1001 questionnaires were mailed out. Those who had not responded were sent a reminder postcard approximately four weeks later. In total, 551 responses were received.
This constitutes a response rate of 56%, which is satisfactory for a mail response survey. As mentioned above, the general quality of responses was very good. Nevertheless, 37 forms were incomplete and had to be rejected, thus leaving us with a total sample size of 514. We were able to merge the survey data on attitudes etc. with financial data obtained from the Norwegian Tax Authority. These records include both farm and off-farm income for both the farmer and partner, typically specified with regard to income source (income from farming, income from other farm-related activities, off-farm salary, and capital income/capital gain). The financial data also contained information on taxable wealth, debt, etc., thus giving a reasonably good overview of the farm household's financial situation. Finally, we also merged the two datasets from the survey and the Tax Authority with a third dataset: the Norwegian Agricultural Authority's register of farming area and production. In short, this register contains information about farmland used for different purposes, and livestock numbers. To study risk attitudes in different areas more carefully, some new variables were constructed in order to compare how variables such as investment, consumption and intensity of farm production would, in the farmers' opinion, be affected by sudden, unexpected monetary losses and gains. To construct the new variables, we used responses to two different questions. In one question, farmers were asked to consider a situation where they won MNOK 1 (approx. $160 000), for instance through a lottery. They were presented with various opportunities to spend/invest this amount, and were asked to rate each opportunity on a scale from 1 (nothing) to 7 (everything) with regard to how much of the prize they would allocate to each opportunity. The opportunities were farm investments, investment in farm-related activities, off-farm investments, running the farm less intensively (thus probably reducing the income), work less off the farm, increase private consumption, gifts/inheritance to children, gifts to charity, pay off debt and/or bank savings, and buy shares/equity. In a related question, they were asked how they would finance a sudden loss of MNOK 1. Again, they were asked to consider various opportunities on a scale from 1 (would definitely not use) to 7 (would definitely use). The opportunities here were sale of the farm (or parts of it), increased forest harvesting, running the farm more intensively, work more off the farm, sell shares/equity, reduce private consumption, and reduce bank savings/increase loans. A reviewer introduces a highly relevant point: that risk perceptions could depend on the source of the sudden gain or loss. A sudden drop in market prices leading to a loss of MNOK 1 does not necessarily trigger the same response as a sudden cut in subsidies leading to the same loss. However, such differences are beyond the scope of this paper to explore. In this study, the farmers were asked to consider the gain as a result of winning the lottery, and the loss as a result of losing some unspecified, unexpected legal case. Hence, both the gain and loss were framed as unexpected, unrelated to the day-to-day business of farming, and without any direct implications for future farming. Responses to the two questions were paired, and reordered in an increasing/ordinal order from zero to six.
In other words, spend/invest alternatives in case of gain were scaled from 0 (nothing) to 6 (everything), while spend/save alternatives in case of loss got a scale between 0 (would definitely not use) and 6 (would definitely use). The constructed measures of risk attitudes are illustrated in Table 1 below, and simply measure the difference between the responses to the two questions. If the value of the new variable "invest" equals 0, this means that the average effect on investment from a sudden gain is reported to be as strong as the effect on divestment (sale of farm) from a sudden loss. If a farmer claims that he would use "everything" from a gain to invest in the farm (score 6), and would "definitely not" finance a loss by sale of the farm (score 0), this would give him a score of 6 - 0 = 6 for the new variable "invest". In short, a positive value indicates that the effect of a gain is the stronger one, whereas a negative value indicates that the effect of a loss is the stronger one. At this stage, it is important to emphasize how these new variables should, and shouldn't, be interpreted. First, we know from standard expected utility (EU) theory that risk aversion implies that a monetary gain increases utility less than a loss of the same size reduces the utility. Our new measures are related to this concept, but it is not quite the standard risk aversion we are measuring. First of all, we are not concerned with the (perceived) effect on utility, but rather with the effect on behavior. Although a conceptual deviation from the standard EU framework, this is beneficial, as it is behavior we are interested in. It is, however, worth pointing out that it is known from the behavioral economics literature that a loss, or something framed as a loss, has larger consequences for behavior than a gain (see e.g. Bertrand et al. (2006) and references therein). Hence, we would expect the same tendency in our material, that (potential) losses have larger behavioral consequences than gains. One additional weakness should be pointed out. As the questionnaire was designed for several purposes, the wording in the two questions (about gains and losses) is slightly different. Farmers were asked how large a share of a gain they would spend/invest in various areas, while they were asked how likely they were to use various sources to finance a loss. One could argue that whereas the sum of scores in the gain question should add up to 100% (or 7 on our scale), no such relation exists in the loss question: there is nothing wrong with definitely using all the listed sources to cover the loss, and hence giving a 7 as response for all the sources. This problem is important to be aware of, yet we will argue that it does not invalidate the study. First of all, we are uncertain to what extent the respondents in fact have grasped the difference in wording. We assume that many respondents have treated the two questions in the same way, and simply ignored the difference in wording. Finally, it is worth noting that any such difference, by definition, should be identical for all the new variables. This means that no matter how much faith one has in the overall result, the relative size of each new variable (compared to the other new variables) should still be reliable.
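To make the construction concrete, here is a minimal sketch (not the original analysis code) of how one constructed variable and its 99% confidence interval, of the kind reported in the results below, might be computed. The column names and the toy data are illustrative assumptions, not the actual survey variables.

# Minimal sketch (assumptions, not the authors' code) of the variable
# construction described above, with hypothetical column names. Each
# response is first rescaled from the 1-7 questionnaire scale to 0-6;
# the constructed variable is the gain response minus the paired loss
# response, e.g. "invest" = invest-from-gain - divest-under-loss.
import numpy as np
import pandas as pd
from scipy import stats

# Toy data standing in for the survey responses (1-7 Likert scores).
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "gain_farm_investment": rng.integers(1, 8, size=514),
    "loss_sell_farm": rng.integers(1, 8, size=514),
})

invest = (df["gain_farm_investment"] - 1) - (df["loss_sell_farm"] - 1)

mean = invest.mean()
se = invest.std(ddof=1) / np.sqrt(len(invest))
z = stats.norm.ppf(0.995)          # two-sided 99% confidence interval
ci = (mean - z * se, mean + z * se)
print(f"invest: mean={mean:.2f}, SE={se:.2f}, 99% CI=({ci[0]:.2f}, {ci[1]:.2f})")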
Results and Discussion

Some general, descriptive statistical results from the survey are presented in Bergfjord et al. (2011). Some of the main results from the raw data are as follows: We observe that the farmers exhibit a clear, but not extreme, risk aversion. The differences between different subsets of the sample are generally small, with the exception that full-time farmers with high household incomes are less risk averse than others. This is reasonable. These farmers are probably better equipped to take on some risks in order to achieve future gains. Also, some of the difference is probably caused by underlying differences in marital status. High-income households usually have two incomes, often including one off-farm income, whereas the group with low income to a larger extent consists of one-income households. This means that the high-income group has a more diversified income base, and thus better opportunities to pursue more risky business strategies. The focus of this paper is the constructed measures of risk attitude, the information they provide about behavior, and their possible implications. The size, standard errors and 99% confidence intervals of these new variables are presented in the table below. All variables are significantly different from 0, and all risk attitude measures are estimated to have absolute values above 0.5. All the new variables are negative, except for the first one. This can be interpreted as follows:
1. A gain means more for farm investment than a loss means for farm divestment
2. A gain means less for "extensification" than a loss means for "intensification"
3. A gain means less reduction in off-farm work than a loss means extra off-farm work
4. A gain means less extra private consumption than a loss means reduced private consumption
5. A gain means less extra bank savings/reduced loans than a loss means increased loans/reduced savings
6. A gain means less extra investments in shares/equity than a loss means sales of shares/equity
7. A gain generally means less change in behavior than a loss
In general, and as expected, farmers claim that a sudden gain would affect their behavior less than a sudden loss. Although the measure used is not strictly comparable to risk aversion, this is in line with both the risk aversion derived from the raw data in Bergfjord et al. (2011) and results from behavioral economics showing that losses affect behavior more than gains. We have not found important differences between different subgroups of farmers, for instance based on on- and off-farm income or production types. This is interesting, and indicates that the attitudes found are common for most types of farmers. However, there is a strong correlation between the various constructed variables for each farmer. This means that if a farmer's equity holding is more affected by a loss than a gain, it is likely that, for instance, his consumption behavior will also be more affected by a loss than a gain. This is also to be expected. Overall, our results are not surprising, in the sense that moderate risk aversion, a larger effect of losses than gains, and generally small differences between different types of farmers in this sample could be expected based on theory and earlier work. Although expected, some further comments regarding the different variables could be useful. For the second variable, about intensification, part of the reason might be the structure of Norwegian agriculture. Many farms are already run rather extensively, and many farmers have other, more important sources of income, almost reducing farming to a hobby.
Thus, further "extensification" is difficult and/or pointless, whereas intensification is an option if financial reasons make it necessary. For our third variable, about off-farm work, there are several potential explanations, in addition to the standard risk aversion component. A related reason is that for less wealthy farmers, increased off-farm income could be a necessity to compensate for a large loss. There are hence good reasons to expect farmers to increase off-farm work in face of a loss, but why do they not reduce off-farm work as much after a gain? One potential reason is the structure of the labor market: If you have an offfarm job, it is often not easy to, say, reduce this from a 100% to a 50% position -often the alternative would be to quit the job altogether, which might not be an attractive option. Finally, this can be viewed as a sign that many farmers find their off-farm work rewarding also beyond the financial aspects, and hence would like to work approximately as much as they currently do, even if a financial gain allowed them to work less. For our fourth variable, about private consumption, we see the same pattern. A potential extra reason for this is that many farmers already have a "sufficiently" high level of private consumption, and expect their consumption to stay more or less the same even if they could afford to spend more. For our fifth variable, about debt and savings, one additional reason for the negative value could be that farmers have good bank connections and are relatively comfortable with their current debt level, and thus see no urgent need to reduce this. Another reason might be that other options are more attractive than debt payments or bank savings after a sudden gain -for instance farm investments, as indicated by variable 1. For our sixth variable, about shares and equity, the variable is again -as expected -negative. An alternative interpretation of this could be low interest in the stock market -alternatively, low expectations about future returns in the stock market. The one important exception in this picture is farm investment, where a gain apparently means more for investment than a loss means for divestment. Although unexpected based on general theory, we will argue that the result makes sense in this particular setting. Many farming families have off-farm income which is enough to cover most daily expenses. The farm thus is not only a source of profit, but also a home and an instrument for saving. Whatever is left after all the bills are paid is invested in the farm, maybe to generate larger profits from the farm in the future, but also to make it a better home, and because it is considered a good investment alternative. This explains one side of the equation -why a sudden gain is perceived to have large impacts on farm investment. The other side of the equation -why a sudden loss would have relatively little impact on farm divestment -is also relatively easy to understand. The extreme effect of farm divestment would be to give up farming. A loss of MNOK 1 is large enough for this to be a real threat for many farmers. If quitting is not necessary, a less drastic farm divestment is likely to be both less convenient and more emotionally painful than, say, a sale of off-farm assets. 
In a sense, the lifestyle importance of farming is supported by the results from this study: a sudden gain could well be invested in the farm, but to finance a sudden loss, most other options would be preferred to farm divestment and a possible new lifestyle as a non-farmer.

Implications

The results are overall not surprising. The general risk attitude corresponds well to previous studies, and the deviating, positive sign of the new «invest» variable is also easy to explain. Nevertheless, we think the results could have some interesting policy implications.

1. Risk management schemes

Farmers are very reluctant to face farm divestment and the threat of quitting farming. A reasonable interpretation of this is that the average farmer is able and willing to cope with risk and sudden losses, as long as he is able to handle them without divesting. Hence, it could be proposed that both private (i.e., insurance) and public risk management schemes should be adjusted to account for this. General crop insurance or any similar scheme is likely to be imperfect for most farmers. For relatively wealthy farmers, a bad crop will not force them to divest; they will be able to handle the loss by other means. Hence, insurance could be useful, but it will not be crucial for survival, and self-insurance might be as beneficial in the long run. For some farmers, however, the situation will be the opposite. Standard crop insurance might be useful, but it will typically not eliminate all extra costs associated with a bad crop. For farmers with few off-farm assets, even the (relatively small) losses they have to carry will make it necessary to divest and possibly quit farming. Hence, the insurance scheme will, in some sense, be of little use to them, because a bad crop will force them to give up farming anyway.

2. Policy and general support programs

Most developed countries spend large amounts on agricultural subsidies, often through different schemes with different target groups and objectives. Lobley and Potter (2004) recommend a more integrated agricultural policy to take into account the diversity of farmers, including «lifestyle oriented» policies, directed at improving rural living in general, instead of supporting certain types of farm production in particular. Our study supports this recommendation. A reasonable interpretation of the reluctance to divest is that this is based on a strong will to stay on the farm and maintain a farming lifestyle, even if it becomes necessary to cut into other savings or increase the off-farm workload. If more of the support schemes are directed at improving rural living conditions, this would make it easier for many to stay on the farm. Even if their agricultural production is different or smaller than before, and they for instance have to work more off-farm, this is likely to be a good alternative for many farmers.

Conclusions

Farmers claim that a sudden gain would affect their behavior less than a sudden loss. The main exception is farm investment, which would be more affected by a sudden gain than a sudden loss would affect divestment. This is interesting, as it captures some important aspects of the farming lifestyle: a sudden gain is likely to be invested in the farm, but a sudden loss would, if possible, be financed without farm divestment, as this often would lead to giving up farming, or at least some important aspects of the farming lifestyle. These results are not surprising, as they are in line with both theory and earlier studies.
However, the results could have implications for both insurance schemes and general support programs. As most farmers are very reluctant to divest farm assets, this threat could be considered more specifically when designing insurance schemes. Also, as maintaining the farming lifestyle is considered so important, general support programs could aim at improving rural living in general, rather than supporting specific types of production.
5,570.6
2013-01-01T00:00:00.000
[ "Economics" ]
SIGN LANGUAGE DACTYL RECOGNITION BASED ON MACHINE LEARNING ALGORITHMS

Introduction

A major advance in the field of information technology over the past ten years has been the digitalization of human-computer interaction at the visual level. This achievement primarily solves communication problems of people with hearing disabilities and allows for rapid human-computer interaction. In this regard, the gesture is one of the main forms of visual communication between people. The actions and relative positions of body parts, and their changes over time, correspond to certain messages, and have recently also become promising for the interaction of technical systems and humans. Thanks to the detection capabilities of visual communication primitives, gesture recognition has become one of the most widely researched topics in recent years [1, 2]. The results of automatic gesture recognition and classification are used to train people with hearing impairments and help them communicate with strangers using sign language. They can also be used as quick messages for digital smart devices. This is the social significance of sign language recognition. As video data has become ubiquitous in practical applications, the research and development of gesture recognition automation is finding application in many human-machine communication systems.
The phonological structure of a sign language is usually divided into five elements: articulation point, hand configuration, movement type, hand orientation, and facial expressions [1]. Each gesture is perceived through a combination of these elements. These blocks represent valuable sign language elements and can be used by automated intelligent sign language recognition (SLR) systems. It should be borne in mind that in a sign language, one gesture means one whole word. In contrast, dactylology is a peculiar form of speech where the dactylic alphabet is used. Each hand gesture illustrates a specific letter of this language. Each natural language, like the Kazakh language, has its own dactylic language, which is also different from the dactylic languages of other languages. Research on the development of a Kazakh dactyl sign language recognition system is currently insufficient for a complete representation of this language. When developing methods and systems for recognizing the Kazakh dactyl sign language, a number of difficulties arise, mainly associated with spelling, sign language and other features of the language [3]. The alphabet of the Kazakh language has 42 letters, of which 33 are borrowed from the Russian alphabet; the remaining 9 are specific to this language. This condition is also relevant for the Kazakh dactyl language. Since the Kazakh language belongs to the family of Turkic languages and most words, letters and sounds are similar, these tasks are relevant for the majority of the population of Turkic-speaking peoples, which now numbers more than 200 million people. But it should also be borne in mind that the Kazakh language, unlike its relatives, is just beginning to move from the Cyrillic to the Latin alphabet. Also, one of the problems is the division of gestures into static types, where no hand movement is needed and the position of the hand and fingers is stationary in space during the considered time, and dynamic types, where gestures are reproduced by moving the hand. In most cases, systems that provide real-time gesture reading support only one of these forms. That is, they rely on either static or dynamic data in their database.

Literature review and problem statement

The paper [1] provides an overview that creates a consistent taxonomy to describe recent research, divided into four main categories: development, structure, recognition of other hand gestures, and reviews. An analysis of glove systems for the characteristics of SLR devices was carried out, a technology development plan was developed, existing limitations were considered, and valuable information about technological environments was provided to help explore opportunities and challenges in this area. This approach uses the low-level features of the human hand as input for machine learning algorithms, and then applies this data in the recognition and classification process. The main disadvantage of this approach is that special gloves for recognizing the position of the hands are not always available, and not everyone has them. The article [2] aims to recognise sign language characters using a model trained on images of American sign language letters. The use of capsule networks for the learning process was proposed. The test results were compared with the results of the LeNet architecture. As a result of the study, it was noticed that for effective character recognition in sign language, capsule networks are useful and give a more successful result than LeNet.
In [4], a technique for visual gesture recognition is proposed that combines several handcrafted spatial and spectral representations of gesture images with a convolutional neural network. The technique proposed in that paper calculates Gabor spectral representations of spatial images of hand gestures and uses an optimized neural network to classify gestures into appropriate classes. The authors considered various ways to combine both types of modalities to determine a model that increases the reliability and accuracy of recognition. It should be noted that this line of work emphasizes the development of sign language recognition that gradually moves from a variety of auxiliary tools to more everyday ones, such as a smartphone or tablet, which a person can carry in everyday life without extra equipment, providing convenience and saving the budget of the average consumer. In the paper [5], a classification of the Turkish sign language was implemented using finite automata based on pose marking, which uses depth values in location-based functions. A grid-based signature space clustering scheme was developed, and cluster numbers are used as objects for a set of connections. A pose marking algorithm for recognizing a predefined set of gestures in TSL is proposed. The labels assigned to poses are used to classify gestures with respect to a known vocabulary using the FSA. A set of complex gestures was selected to evaluate the technique; however, their scheme also extends to a new gesture by simply providing an appropriate FSA based on its poses. The general classification scheme deals only with position labels and not low-level and spatiotemporal features, reducing the space and time requirements. The authors of [6] described the results of using the long short-term memory (LSTM) model, which improved the machine translation of Google Translate. One of the closest works on the study of the Kazakh sign language is [3], where the Kinect sensor is used for gesture recognition, and the coordinates of the hand skeleton and key characteristics are processed through XML files using tools and calculations in MATLAB. It is easy to see that this approach was implemented for the old Kazakh alphabet. In the article [7], several real-time gesture recognition systems based on convolutional neural networks were compared. The system proposed in that paper can recognize words from a natural language with gestures, using signs for each letter. The approach was evaluated on the American and Russian sign languages. For the American sign language, a data set prepared by Massey University and the Institute of Information and Mathematical Sciences was used. Recognition quality for the Russian sign language lagged behind this high result due to the complexity of the real data set for the Russian sign language. According to the results of the study, fingerspelling accuracy for the American Sign Language was high, which we took into account when designing our architecture. The article [8] presents an effective framework for solving the problem of static gesture recognition based on data obtained from web cameras and the Kinect depth sensor. In that paper, a video sequence is taken as input data; that is, each frame in the sequence is classified separately, without any inter-frame information.
The accuracy of the method proposed by the authors was estimated on the collected images, consisting of 2700 frames. In [9], an intelligent system for Turkish sign language recognition was developed. It is based on 33 basic signs of the Turkish Sign Language. To capture the signals, a Microsoft Kinect v2 sensor was used. The proposed system is designed to help people with hearing and speech impairments communicate with other people and to solve communication problems between them. We can apply this development for our own purposes; however, there is a linguistic difference between the Kazakh and Turkish languages, which is the main barrier. The scientific work [10] presents a review of the scientific literature on sign language recognition systems. The studies in [11, 12] were identified and analyzed for their direct relevance to sign language recognition systems. In the article [13], the classification is considered based on six dimensions (data collection methods, static/dynamic signs, signature mode, one-handed/two-handed signs, classification technique, and recognition speed). The research paper [14] analyzes statistics on the use of various data collection methods used in sign language systems; 12 sign languages were selected for this purpose. Among these languages, the American sign language is the first to be analyzed. For the review of this language, the literature [15, 16] was used. The article [17] presents a method for recognizing gestures of the American sign language using principal component analysis to minimize the similarity of gesture classes. The scientific work [18] implements a system for recognizing the letters of the American alphabet using surface electromyography to allow people to spell words. The developers of the recognition system presented in the article [19] used the MAdaline network for image processing and classification. The article [3] describes the sign symbols used to record the structure of gestures in writing. The choice of L. S. Dimskis' sign notation in relation to the Kazakh sign language is also justified, and the features of representing the Kazakh sign language using L. S. Dimskis' sign notation in the course of compiling a dictionary of frequently used gestures are revealed. As a result of the analysis of the existing methods proposed in the above works, most are characterized by insufficient accuracy and speed of gesture recognition. Also, many studies require conditions such as wearing special gloves and other devices, good lighting, etc. Many scientific studies related to the recognition of the Kazakh sign language were conducted with the old alphabet, consisting of 42 letters [3, 20]. Also, the recognition accuracy did not exceed 90 %, which calls for updating and improving the recognition systems. The current Kazakh alphabet consists of 31 letters; changes were made to the spelling of specific Kazakh letters, and digraphs were introduced. Therefore, the development of an accurate and high-speed algorithm for recognizing the new Kazakh sign language in real time, in order to facilitate communication with people with hearing disabilities, is an urgent task.

The aim and objectives of the study

The aim of this work is to implement a program for recognizing the Kazakh dactylic sign language with the updated alphabet, with the highest possible accuracy, using machine learning methods.
The scientific novelty of this work is the development of a new system that provides a solution to the problems of gesture recognition, both dynamic and static, combined into one base for building practical systems for human-computer interaction. To achieve this aim, it is necessary to solve the following tasks:
- collect a dataset with images of each gesture for training and testing samples; using the MediaPipe framework, implement hand and finger tracking and identify key points of the hands in three-dimensional space;
- implement a recognition program for the Kazakh dactylic sign language and classify gestures using machine learning algorithms;
- conduct a numerical evaluation of the quality of the algorithms in order to determine the best classifier for gesture recognition, and build a three-dimensional model containing metrics such as precision, recall and F1-measure for all gesture classes.

Materials and methods

In the course of the work, the American, Russian and Turkish sign languages were analyzed [8, 9, 15], and a program for recognizing the Kazakh sign language was implemented on their basis. In this paper, classical algorithms for gesture recognition are applied, combining two types of data into one database, which is reflected in the architecture of the recognition system. In contrast, works devoted to the study of dactyl sign language mostly consider only one specific language and one type of data (single frame or multiple frames), so their proposed solutions cannot be reused directly for the Kazakh language.

1. Random Forest

The first method chosen to classify fingerspelling gestures is the random forest algorithm. Fig. 1 shows how the random forest algorithm works. This algorithm builds a set of decision trees and combines them to produce a more accurate result. The training sample is divided into subsamples of a certain size, from which the trees are built. To build a split in a tree, the best value among a random subset of features is considered. Each new partition of the tree is made by examining its random features: the best feature is selected, and the tree structure continues until the choices are exhausted, that is, until only one representative of the class remains. In the implementation of this algorithm used in our work, there are parameters that limit the height of the trees and the number of objects in each subsample when recognizing gestures.

2. Support Vector Machine

The next research method is the support vector machine classifier. This algorithm can be divided into two parts: training the classifier and recognizing the characters supplied as input. At the first stage, a software implementation of the mathematical apparatus of the support vector machine is developed to create a classifier model. The SVM model separates different classes with a hyperplane in a multidimensional space. This hyperplane is generated iteratively to minimize the error. The purpose of this classifier is to divide data sets into classes by finding the maximum-margin hyperplane. One of the important concepts in SVM is support vectors: the data points located closest to the hyperplane, which are used to determine the dividing boundary. A hyperplane is a decision surface dividing the space between sets of objects belonging to different classes. In the second stage, the recognition and classification process is implemented. The hyperplane that separates the classes correctly is selected.
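Before turning to the classifiers' practical trade-offs, here is a minimal sketch (an assumption-laden illustration, not the authors' code) of how the classifiers used in this study, including the XGBoost classifier described in the next subsection, might be trained and compared on hand-landmark feature vectors. We assume 63 features per sample (21 MediaPipe key points times 3 coordinates, as described in the dataset section below); the hyperparameters and toy data are illustrative only.

# Minimal sketch (assumptions, not the authors' exact code): training and
# comparing the three classifiers on hand-landmark feature vectors.
# X is assumed to be an (n_samples, 63) array of MediaPipe hand key point
# coordinates (21 points x 3 coords), y the gesture class labels (0..30).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from xgboost import XGBClassifier

# Toy stand-in data; in the paper, features come from the collected dataset.
rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 63))
y = rng.integers(0, 31, size=1000)

models = {
    "RandomForest": RandomForestClassifier(n_estimators=200, max_depth=20),
    "SVM": SVC(kernel="rbf", C=10.0),
    "XGBoost": XGBClassifier(n_estimators=200, max_depth=6,
                             learning_rate=0.1, objective="multi:softmax"),
}

# Five-fold cross-validation, matching the evaluation scheme described later.
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.3f} +/- {scores.std():.3f}")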
The main advantages of the SVM classifier are its ability to achieve high accuracy and to work well in high-dimensional feature spaces. SVM classifiers use only a subset of the training points, hence relatively little memory is used. They have a long training time, so they are not suitable for large datasets in practice. Another disadvantage is that SVM classifiers do not work well with overlapping classes.

3. Extreme Gradient Boosting

The third algorithm used in this research paper is the XGBoost classifier. This algorithm is based on gradient boosting of decision trees. First, we construct an ensemble of weak predictive models, in this case decision trees. The training of the ensemble is performed sequentially. At each iteration, the deviations of the predictions of the already trained ensemble on the training sample are calculated. By adding the new tree's predictions to the trained ensemble's predictions, the average deviation of the model, which is the goal of the optimisation problem, is reduced. New trees are added to the ensemble as long as the error is reduced. The XGBoost algorithm is designed for classification tasks that work with structured and tabular data. Using the gradient descent architecture, the algorithm enhances the performance of weak classifiers. The main parameters of the algorithm are the number of trees, the step size to prevent overfitting, the minimum change in the value of the loss function required to split a leaf into subtrees, the maximum depth of a tree, and the regularisation coefficient. To support the parallelisation of the tree building process, a block structure is used. It is also possible to continue training on new data. The parallelisation of the algorithm is possible due to the interchangeable nature of the loops used to build the training base: the outer loop lists the leaves of the trees, while the inner loop calculates the features. Nesting one loop inside another prevents the algorithm from parallelizing, since the outer loop cannot start its execution if the inner one has not yet finished its work. Therefore, to improve the running time, the order of the loops is changed: initialisation takes place when reading the data, then sorting is performed using parallel threads. This replacement improves the performance of the algorithm by distributing the calculations across threads.

1. Creating a dataset

To create a dataset, first of all, a real-time image output program was used. After that, the hand was detected, that is, the area of interest for further classification. After detecting and tracking the hand, the skeleton frame of the hand is drawn from the key points of the hand. The skeleton is drawn on an empty frame, which is saved in the dataset in the corresponding, predefined folder. The first step of our research is to get an image from a webcam, since the program works in real time. This is followed by the process of hand detection by the MediaPipe neural network framework, as shown in Fig. 2. The ability to perceive the shape and movement of the hands is used to understand sign language and control hand gestures. Reliable real-time hand perception is a challenge for computer vision, as the hands often occlude each other and do not have high-contrast patterns. The MediaPipe framework, by creating multi-modal machine learning pipelines, returns accurate three-dimensional key points of the hand. Fig. 3 shows the drawing of a hand frame and of a three-dimensional hand reference using 21 key points from just one frame.
Fig. 4 shows the process of selecting the area of interest, that is, the area of the detected hand (these examples in the pictures were made in the laboratory by our scientists, with their permission). The frame of the hand (Fig. 5) is moved to another empty window, and the image is saved. The program code for saving the image is shown in Fig. 6. The first version of the Kazakh alphabet consists of 42 letters. The development of a database for the Kazakh sign language, consisting of a dactylic alphabet of 42 gestures, is the initial step in creating a system for automatic recognition of individual hand gestures. The dactylic alphabet for the first Kazakh sign language is shown in Fig. 7. In 2017, a decree was signed on the transition of the Kazakh alphabet from Cyrillic to Latin. After the changes, the new Kazakh alphabet includes 31 characters of the Latin alphabet, which completely covers all the sounds of the Kazakh language. This article is relevant because research related to gesture recognition of the updated Kazakh alphabet has not yet been conducted. Fig. 8 shows a dataset of 31 gestures; each gesture corresponds to one letter of the new Kazakh alphabet. After forming the dataset, we proceed to the recognition process. The collected data is divided into training and test data. For the correct recognition and classification of the Kazakh sign language, the machine learning algorithms specified in section 4 were applied.

2. Development of a program for recognizing the Kazakh sign language

After the dataset of images for each gesture was collected, a program for recognizing the Kazakh sign language using machine learning methods was implemented. As shown in the pseudocode of the algorithm (Fig. 9), real-time streaming video is accepted as input. Then frames are read from the camera capture. If the hand area is in the frame, the coordinate calculation function is performed. The coordinates are calculated based on the key points found. After the coordinates are determined, the function for drawing the frame of the hand is performed. The resulting image is converted to an array of data and goes to the classification function. As a result of the classification, a text label for the gesture is returned. Otherwise, when the hand is out of the frame, the label "None" is displayed, since no gesture will be detected. Fig. 10 shows the output of the labels corresponding to each class of hand frames shown. Fig. 11 shows the result of detecting each gesture class in the dataset. The dataset consists of 31 classes. Each class contains more than 5000 drawings of the hand frame for a single gesture. As shown in the figure, the recognition of gestures corresponding to the letters of the Kazakh alphabet occurs in real time. Precision measures the share of the model's positive predictions that are correct. That is, it is responsible for the ability to distinguish a given class from other classes. When the model makes many incorrect positive classifications, the value of this metric decreases. Recall measures the model's ability to detect samples that belong to the positive class. It is responsible for the ability to detect a particular class. Recall takes into account the correctness of the prediction of all positive samples; however, it ignores the erroneous classification of negative samples predicted as positive. The F1-measure combines these two metrics, defined as their harmonic mean. Accuracy is a metric that describes the overall correctness of the model's classification across all classes.
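A short sketch of how these per-class metrics and the overall accuracy might be computed with scikit-learn follows; the label arrays are illustrative placeholders, not the paper's data.

# Sketch of the evaluation metrics described above (illustrative labels,
# not the paper's data): per-class precision, recall, F1, and accuracy.
from sklearn.metrics import (accuracy_score, classification_report,
                             precision_recall_fscore_support)

y_true = [0, 0, 1, 1, 2, 2, 2, 1]   # true gesture classes
y_pred = [0, 1, 1, 1, 2, 2, 1, 1]   # classifier predictions

precision, recall, f1, support = precision_recall_fscore_support(
    y_true, y_pred, zero_division=0)
for cls, (p, r, f) in enumerate(zip(precision, recall, f1)):
    print(f"class {cls}: precision={p:.2f} recall={r:.2f} f1={f:.2f}")

# Accuracy = (TP + TN) / (TP + TN + FP + FN), i.e. the share of correct
# predictions over all classes.
print("accuracy:", accuracy_score(y_true, y_pred))
print(classification_report(y_true, y_pred, zero_division=0))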
Accuracy = (TP + TN) / (TP + TN + FP + FN). Table 1 shows the value of these metrics for each class, and it shows that the overall accuracy for each class is at least 98-99 %. Fig. 12 shows the precision and recall diagram (x and y axes, respectively) and the corresponding F1 score (z-axis) for the Random Forest classifier. When the precision value reaches one and the recall is zero, the F1 measure remains 0, ignoring the precision. If one parameter is small, then the second parameter will not matter, since the F1 measure emphasises the smallest value. Using the colour indicator shown on the right side of the picture, the ratio of precision and recall for each class can be seen. Another metric for assessing the classification quality is the ROC curve, which represents a graph of the relationship between true-positive and false-positive rates. The quantitative interpretation of this curve is given by the AUC area indicator (Fig. 14), bounded by the ROC curve and the axis of the proportion of false-positive classifications. The higher the AUC result, the better the classifier works. Table 2 shows the numerical AUC values of the Random Forest classifier for each class. Table 3 shows the quality metrics of the Support Vector Machine classifier. According to the data, this algorithm made slight mistakes in the classification of objects of the 1st and 7th classes. In other cases, it showed good results. Fig. 15 shows the precision and recall diagram (x and y axes, respectively) and the corresponding F1 score (z-axis) for the Support Vector Machine classifier. When the precision value reaches one and the recall is zero, the F1 measure remains 0, ignoring the precision. If one of the parameters is small, then the second parameter will not matter, since the F1 measure emphasises the smallest value. Using the colour indicator shown on the right side of the picture, the ratio of precision and recall for each class can be seen. Another metric for assessing the classification quality is the ROC curve, which represents a graph of the relationship between true-positive and false-positive rates. The quantitative interpretation of this curve is given by the AUC area indicator (Fig. 17), which is bounded by the ROC curve and the axis of the proportion of false-positive classifications. The higher the AUC result, the better the classifier works. Table 4 shows the numerical AUC values of the Support Vector Machine classifier for each class. Table 5 shows the quality metrics of the XGBoost classifier. The Recall metric reaches its lowest value in detecting Class 1 objects. The ability to distinguish one class from other classes, the Precision metric, showed good results. Fig. 18 shows the precision and recall diagram (x and y axes, respectively) and the corresponding F1 score (z-axis) for the XGBoost classifier. When the precision value reaches one and the recall is zero, the F1 measure remains 0, ignoring the precision. If one of the parameters is small, then the second parameter will not matter, since the F1 measure emphasises the smallest value. Using the colour indicator shown on the right side of the picture, the ratio of precision and recall for each class can be seen. Another metric for assessing the classification quality is the ROC curve, which represents a graph of the ratio between true-positive and false-positive rates. The quantitative interpretation of this curve is given by the AUC area indicator (Fig.
20), which is bounded by the ROC curve and the axis of the proportion of false-positive classifications. The higher the AUC result, the better the classifier works. Table 6 shows the numerical AUC values of the XGBoost classifier for each class. As shown in Table 6, the values of the AUC ROC metric are in the range between 0.92 and 0.99, which proves the good quality of the algorithm. In this section, an assessment of 5 metrics was made to check the quality of the algorithms.

Discussion of experimental results of the comparative analyses of algorithms obtained during the study

In this paper, a system for recognizing the Kazakh dactylic sign language, consisting of the dactylic alphabet of 31 gestures in real time, has been developed. In other scientific studies [6, 20, 21] of gesture speech recognition, the support vector method was also used, which we also used in our work. But compared to previous works, our recognition accuracy is high. The peculiarity of the method proposed in our work is the combination of static and dynamic data types into one database, which makes it possible to interpret gestures in real time (dynamic gestures), as well as in cases when there is no need to track hands (static gestures). For this task, there are limitations such as the quality of camera visibility and the illumination of the recognition zone; the main problem is also the moderate use of resources, since there are limitations of the computing devices, etc. The advantage of this work is the high recognition accuracy, which is very important for use in human-machine communication systems. Also, our research work is one of the first to implement a gesture recognition system for the updated Kazakh alphabet. As a limitation of this research, we note FPS drawdowns, which affect the recognition speed of the machine learning algorithms. In the future, parallelization is planned in order to improve the performance and increase the speed of the algorithms. To do this, we are considering solutions to the long training times that arise when working with a large number of training examples.

Conclusion

1. The presented research work is aimed at the correct recognition of the Kazakh sign language. To achieve this goal, a dataset was created that contains more than 5000 images for each of the 31 gestures. With fewer photographs used, our results were less accurate. Lighting also affects the quality of recognition, and we took this parameter into account in order for our development to give a satisfactory result.

2. The classification of gestures was carried out with three classification algorithms. The average accuracy of the Random Forest classifier was 98.86 %, the SVM algorithm showed 98.68 % accuracy, and XGBoost has a result of 98.54 % correct recognition. In addition, the classifiers' quality was evaluated by the speed of execution and the performance of the algorithm. In terms of training time, Random Forest was faster than the support vector machine and XGBoost. To check the accuracy, cross-validation was performed, where the data was divided into five blocks. As for the speed of prediction in a real-time task, Random Forest, although it won in training speed, is inferior in execution speed, as FPS drawdowns begin. Thus, the prediction accuracy of the three methods is about the same. However, SVM and XGBoost have shown themselves to be better due to execution speed when working in real time.

3.
The conducted research allowed us to draw the following conclusions based on the estimates of the algorithms: the average precision for the RF algorithm was 0.859, for the SVM algorithm 0.895, and for XGBoost 0.794. The average recall for RF was 0.825, for the SVM algorithm 0.797, and for XGBoost 0.773. For most classes, these metrics showed good results; the values given here are the averages of these metrics for each algorithm. In the future, it is planned to improve the performance of these classifiers by parallelizing them using CUDA and OpenCL technologies.
7,120.4
2021-01-01T00:00:00.000
[ "Computer Science" ]
Poly[1-ethyl-3-methylimidazolium [tri-μ-chlorido-chromate(II)]] The title compound, {(C6H11N2)[CrCl3]}n, was generated by mixing the ionic liquid 1-ethyl-3-methylimidazolium chloride with CrCl2 in ethanol. Crystals were obtained by a diffusion method. In the crystal structure, the anion forms one-dimensional chains of chloride-bridged, Jahn–Teller-distorted chromium(II) centers extending along the [100] direction. The imidazolium cations are positioned between these chains. Comment. Mixtures of 1-ethyl-3-methylimidazolium chloride ([EMIM]Cl) and CrCl2 heated to 100 °C catalyze the conversion of glucose to 5-hydroxymethylfurfural (HMF) in 70% yield (Zhao et al., 2007). The proposed active catalyst in this system is a compound formulated as [EMIM]CrCl3. While alkali metal, ammonium, and tetramethylammonium chromium(II) trihalides have been previously reported in the literature (Hardt & Streit, 1970), the title compound is the first structurally characterized imidazolium analog. The structure consists of infinite linear chains of Jahn–Teller-distorted chromium centers (Fig. 1) bridged by a facial array of chloride ligands (Fig. 2). Each CrII center has four Cr—Cl bonds of 2.39–2.45 Å and two longer Cr—Cl interactions (2.87–2.91 Å). The Cr···Cr distance is 3.33 Å. The Cl—Cr—Cl bond angles are in the range 87–90°. The shortest Cr···Cr distance between chains is 9.19 Å. A number of differences are evident between the structures of [EMIM]CrCl3 (collected at 150 (1) K) and the previously reported [N(CH3)4]CrCl3 (collected at room temperature; Bellitto et al., 1984). Specifically, the chromium center in [EMIM]CrCl3 has pseudo-D4h site symmetry, whereas [N(CH3)4]CrCl3 contains trigonally distorted chromium centers (C3v site symmetry) positioned in alternating compressed and elongated face-sharing octahedra. Similar site symmetry to that found in [N(CH3)4]CrCl3 was identified in the room-temperature structure of α-CsCrCl3; see McPherson et al. (1972) and Crama & Zandbergen (1981). This C3v site symmetry is described as resulting from randomly distributed elongation of Cr—Cl bonds along the three principal axes of the octahedron. Experimental. Under an N2 atmosphere, a solution of CrCl2 (23 mg, 0.19 mmol) in ethanol (2 ml) was added to solid 1-ethyl-3-methylimidazolium chloride (23 mg, 0.16 mmol). The resulting teal-colored solution was stirred at ambient temperature until all of the solid had dissolved. Addition of ethyl acetate (2 ml), followed by diffusion of Et2O, produced pale yellow crystals suitable for X-ray analysis. Special details. Experimental. The program DENZO-SMN (Otwinowski & Minor, 1997) uses a scaling algorithm (Fox & Holmes, 1966) which effectively corrects for absorption effects; high-redundancy data were used in the scaling program, hence the 'multi-scan' code word was used. Refinement. Refinement of F² against ALL reflections. The weighted R-factor wR and goodness of fit S are based on F²; conventional R-factors R are based on F, with F set to zero for negative F². The threshold expression F² > σ(F²) is used only for calculating R-factors(gt) etc. and is not relevant to the choice of reflections for refinement. R-factors based on F² are statistically about twice as large as those based on F, and R-factors based on ALL data will be even larger.
753.2
2009-01-28T00:00:00.000
[ "Chemistry" ]
Marine particle microbiomes during a spring diatom bloom contain active sulfate-reducing bacteria Abstract Phytoplankton blooms fuel marine food webs with labile dissolved carbon and also lead to the formation of particulate organic matter composed of living and dead algal cells. These particles contribute to carbon sequestration and are sites of intense algal-bacterial interactions, providing diverse niches for microbes to thrive. We analyzed 16S and 18S ribosomal RNA gene amplicon sequences obtained from 51 time points and metaproteomes from 3 time points during a spring phytoplankton bloom in a shallow location (6-10 m depth) in the North Sea. Particulate fractions larger than 10 µm diameter were collected at near-daily intervals between early March and late May in 2018. Network analysis identified two major modules representing bacteria co-occurring with diatoms and with dinoflagellates, respectively. The diatom network module included known sulfate-reducing Desulfobacterota as well as potentially sulfur-oxidizing Ectothiorhodospiraceae. Metaproteome analyses confirmed the presence of key enzymes involved in dissimilatory sulfate reduction, a process known to occur in sinking particles at greater depths and in sediments. Our results indicate the presence of sufficiently anoxic niches in the particle fraction of an active phytoplankton bloom to sustain sulfate reduction, and an important role of benthic-pelagic coupling for microbiomes in shallow environments. Our findings may have implications for the understanding of algal-bacterial interactions and carbon export during blooms in shallow-water coastal areas. Introduction Microalgae inject gigatons of organic carbon into coastal oceans every year (Field et al. 1998) and phytoplankton blooms represent primary productivity hotspots. It has been estimated that over 90% of algal-produced carbon is consumed by heterotrophic bacteria in the immediate vicinity of algal cells, the phycosphere, during typical bloom situations (Seymour et al. 2017). Some of these bacteria are directly associated with living algal cells, or with sinking aggregates of senescent or dead algae, and therefore play an important role in the biological carbon pump. Despite their importance in carbon sequestration (e.g. Bligh et al. 2022), vertical connectivity (Mestre et al. 2018), and their documented complexity (e.g. Reintjes et al. 2023), particle-associated (PA) bacterial communities are less well understood than their free-living counterparts. In fact, they are often overlooked due to the sometimes fragile nature of particles, in combination with the practice of prefiltration to exclude larger organisms prior to molecular analyses (e.g. Simon et al. 2002, Thiele et al. 2015, Heins et al. 2021). Aggregates composed of living algae are known for a diverse microbiome of aerobic, heterotrophic bacteria that degrade complex algal organic matter, such as polysaccharide-rich exudates (Enke et al. 2019, Reintjes et al. 2023), although anaerobic metabolism such as diazotrophy is also known to occur (Riemann et al. 2022). Within this microbiome, bacteria are selected by factors such as host physiology and genotype (Ahern et al. 2021), the surrounding environment (Barreto Filho et al. 2021) and via stochastic processes (Stock et al. 2022).
It is methodologically challenging to study associations between algal and bacterial taxa during natural phytoplankton blooms by direct observations, due to the transient nature of these associations (Seymour et al. 2017, Heins et al. 2021), as well as the innate complexities and rapid dynamics of algal and bacterial communities during bloom events (Teeling et al. 2016). High-resolution temporal co-occurrence analysis, in combination with measurement of microbial functional potential, offers an indirect way to infer algae-bacteria associations, which can facilitate generating hypotheses about specific interactions and their potential functional implications. A defining feature of marine particles are the steep chemical and redox gradients that PA bacterial communities are exposed to, compared to free-living, planktonic bacteria (Ploug et al. 1997). These gradients have been studied in the context of bathypelagic sinking particles (i.e. marine snow), which harbor microniches enabling anaerobic metabolism, such as microbial sulfate reduction (Shanks and Reeder 1993, Bryukhanov et al. 2011, Bianchi et al. 2018). Direct microelectrode measurements of oxygen concentrations in marine particles formed in roller tanks have demonstrated the importance of particle size and sinking velocity (Ploug et al. 1997), surrounding oxygen concentration (Ploug and Bergkvist 2015), but also the species composition of diatom detritus making up the particles (Zetsche et al. 2020) for the formation of anaerobic niches. Further, elevated concentrations of sulfide inside artificial marine snow and field-collected particles, compared to surrounding water masses, indicate that sulfate reduction takes place within such niches (Shanks and Reeder 1993). Sulfate-reducing bacteria have also been detected in oxygenated surface waters in the Black Sea, complementing observations of sulfate reduction above 30 m depth in these waters (Bryukhanov et al. 2011). Using a modeling approach, Bianchi and colleagues predicted that anaerobic particle microenvironments enabling for example sulfate reduction may be more widespread in the global ocean than previously assumed (Bianchi et al. 2018). In the photic zone, anaerobic micro-niches have been less widely investigated, and the prevalence of sulfate-reducing bacteria in the proximity of oxygen-producing living algal cells is uncertain. Here, we investigated microbial community dynamics of PA bacterial and eukaryotic taxa during a spring phytoplankton bloom in the southern North Sea at the shallow-water long-term ecological research site Helgoland Roads (Wiltshire et al. 2010) in the year 2018. We aimed to identify PA bacterial taxa co-occurring with the major eukaryotic taxa (diatoms and dinoflagellates) during the bloom. We hypothesized that bacteria co-occurring with diatoms would be compositionally and functionally distinct from those co-occurring with dinoflagellates. Using 16S and 18S rRNA gene amplicon data from a well-resolved time series (near-daily sampling) collected between early March and late May in 2018, we constructed co-occurrence networks focusing exclusively on bacteria-eukaryote co-occurrences in the particle fraction (larger than 10 μm). In addition, we addressed bacterial functional gene expression by analysis of metaproteomes from three selected time points during the bloom.
Sampling and sample processing A large volume of seawater (40 L-140 L) was sampled using a clean bucket from 1 m depth below the water surface, as previously described (Teeling et al. 2012, Wang et al. 2024), from the research vessel Aade in the morning at near-daily intervals between the beginning of March and the end of May 2018 at the long-term ecological research (LTER) site Helgoland Roads (54° 11.3′ N, 7° 54.0′ E; DEIMS.iD: https://deims.org/1e96ef9b-0915-4661-849f-b3a72f5aa9b1). The site is located near the small island of Helgoland in the south-eastern North Sea and has a water depth of 6-10 m depending on tide (Wiltshire et al. 2010). For chlorophyll a (chl a) analysis, sample filtration was carried out in a laboratory under dim light to avoid the loss of pigments during the filtration process. We used a combined method of Zapata et al. (2000) and Garrido et al. (2003) for chl a extraction and analysis. Pigments were separated via high-performance liquid chromatography (HPLC) (Waters 2695 Separation Module) and detected with a Waters 996 Photodiode Array Detector. Secchi depth was measured from the vessel on site. The abundance of "detritus" (non-identifiable matter) was estimated microscopically on a scale from 0-6, corresponding to "none" (0), "moderate" (3) and "massive" (6) levels. Publicly available wind data from the weather station on Helgoland were obtained via the website www.wetterkontor.de. Water level data, collected at the harbor on the island, were obtained from http://www.portal-tideelbe.de/. For 16S and 18S rRNA gene amplicon sequencing and metagenome analysis, plankton biomass from a 1 L seawater subsample was filtered using 10 μm pore size polycarbonate membrane filters (47 mm diameter, Millipore, Schwalbach, Germany) to separate PA microbes (> 10 μm) from smaller size fractions (not analyzed in this study). At three selected time points during the bloom (Julian days 107, 128 and 144, representing early-, mid- and late-bloom phases), a separate filtration was performed using larger (142 mm diameter, Millipore) 10 μm pore size polycarbonate membrane filters for metaproteomic analysis. In order to maximize biomass harvest while avoiding clogging, filtered volumes varied between 15 and 30.5 L per filter for metaproteomic analysis. The > 10 μm particle fraction comprises everything larger than 10 μm, and can include living phytoplankton cells, phytoplankton aggregates, zooplankton, fecal pellets and resuspended material of benthic origin. For the purpose of this study, we refer to microbes as particle-associated (PA) if detected in this filter fraction, without making assumptions about the nature of the particle. rRNA gene amplicon sequencing and analysis Samples from 52 time points throughout the bloom were collected and analyzed. DNA was extracted from the filters using the Qiagen DNeasy PowerSoil Pro kit (Qiagen, Hilden, Germany) according to the manufacturer's instructions. Dislocation of microbial cells from the filters and mechanical lysis were achieved by bead beating in a FastPrep 24 5G (MP Biomedicals, Irvine, CA, USA). DNA concentrations were measured on a Qubit 3.0 fluorometer (Invitrogen, Carlsbad, CA, USA). Extracted DNA was amplified with primer pairs targeting the V4 region of the 16S rRNA gene [515F: 5′-GTGYCAGCMGCCGCGGTAA-3′, 806R: 5′-GGACTACNVGGGTWTCTAAT-3′ (Walters et al. 2016)] and the V7 region of the 18S rRNA gene [F-1183mod: 5′-AATTTGACTCAACRCGGG-3′, R-1443mod: 5′-GRGCATCACAGACCTG-3′] (Ray et al. 2016), coupled to custom adaptor-barcode constructs.
PCR amplification and Illumina MiSeq (Illumina, San Diego, CA, USA) library preparation and sequencing (V3 chemistry) were carried out by LGC Genomics (LGC Genomics, Berlin, Germany). Sequence reads free of adaptor and primer sequence remains were processed using the DADA2 package (v1.2.0) in R (Callahan et al. 2016). In summary, forward and reverse Illumina MiSeq reads were truncated to 200 bp, filtered (maxEE = 2, truncQ = 2, minLen = 175), dereplicated, and error rates were estimated using the maximum possible error estimate from the data as an initial guess. Sample sequences were inferred, paired forward and reverse reads were merged, and chimeric sequences were removed using the removeBimeraDenovo function. The resulting amplicon sequence variants (ASVs) were taxonomically classified using the Silva database (nr 99 v138.1, Pruesse et al. 2007) for 16S rRNA and the PR2 database (version 4.13, minboot: 50, Guillou et al. 2013) for 18S rRNA sequences using the built-in RDP classifier. 16S rRNA gene amplicon reads classified as chloroplasts and mitochondria, as well as 18S rRNA gene reads classified as Metazoa (zooplankton), were removed prior to downstream analyses (a single time point was excluded due to suspected contamination). The diatom genera Nitzschia, Navicula and Cocconeis were classified as primarily benthic based on local reference literature (Hustedt 1959). Correlation analysis was carried out on the relative abundance of these benthic diatom genera and of Desulfobacterota against water level, wind speed, and Secchi depth data using linear regression (function lm). Co-occurrence networks were generated in R using Spearman rank correlation, as described previously (Bengtsson et al. 2017). Briefly, we excluded rare ASVs with a total abundance < 100 reads (16S) and < 500 reads (18S) across the whole dataset. Then, pairwise correlations between all remaining ASVs were calculated using the rcorr function (Hmisc R package), followed by P-value adjustment for multiple testing (function p.adjust) using the Benjamini-Hochberg method (Benjamini and Hochberg 1995). For the final network, we considered exclusively correlations between 18S and 16S ASVs, with a correlation coefficient > 0.7 and an adjusted p < 0.01. The network was then plotted using the igraph R package (Csardi and Nepusz 2006, R Core Team 2023).
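As an illustration, the same network-construction logic can be sketched in Python (the study itself used R's Hmisc rcorr and igraph). The abundance tables below are random stand-ins, and the rare-ASV filtering step is omitted.

```python
# Illustrative sketch of the 18S-vs-16S co-occurrence network construction
# (toy tables; rows = time points, columns = ASV relative abundances).
import numpy as np
import pandas as pd
import networkx as nx
from scipy.stats import spearmanr
from statsmodels.stats.multitest import multipletests

asv_16s = pd.DataFrame(np.random.rand(51, 30), columns=[f"16S_{i}" for i in range(30)])
asv_18s = pd.DataFrame(np.random.rand(51, 10), columns=[f"18S_{i}" for i in range(10)])

edges, pvals = [], []
for e in asv_18s.columns:           # only 18S-vs-16S pairs are considered
    for b in asv_16s.columns:
        rho, p = spearmanr(asv_18s[e], asv_16s[b])
        edges.append((e, b, rho))
        pvals.append(p)

# Benjamini-Hochberg adjustment, then keep rho > 0.7 and adjusted p < 0.01
adj = multipletests(pvals, method="fdr_bh")[1]
G = nx.Graph()
for (e, b, rho), q in zip(edges, adj):
    if rho > 0.7 and q < 0.01:
        G.add_edge(e, b, weight=rho)
print(G.number_of_nodes(), "nodes,", G.number_of_edges(), "edges")
```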
Metaproteomics Proteins were extracted from biomass filters of the 17th of April, 8th of May and 24th of May 2018 (Julian days 107, 128, and 144), as described previously (Schultz et al. 2020, Schultz 2022), and analyzed with liquid chromatography-tandem mass spectrometry in triplicate. Briefly, proteins were extracted via bead-beating followed by acetone precipitation. The extracts were separated and fractionated by 1D SDS-PAGE and in-gel trypsin digested. After desalting and concentration of the peptides using C18 Millipore ZipTip columns, the samples were measured with an Orbitrap Velos mass spectrometer (ThermoFisher Scientific, Waltham, MA, USA). After conversion into mgf file format using MSconvert in ProteoWizard (Palo Alto, CA, USA), spectra were matched against a metagenome-based database containing 14 764 755 entries. Mascot (Matrix Science, London, UK) and Scaffold (Proteome Software, Portland, OR, USA) were used for peptide-spectrum matching, protein identification and protein grouping. Instead of setting an FDR threshold, identification of protein groups was based on the number of peptide matches, with a minimum of two, a protein threshold of 99% and a peptide threshold of 95%. Identified protein groups were annotated via Prophane v6.2.3 (Schiebenhoefer et al. 2020), using the Uniprot-TrEMBL (as of September 2021) and NCBI nr (as of February 2022) databases for taxonomic annotation, and the EggNOG v5.0.2 (Huerta-Cepas et al. 2019) and TIGRFAMs 15 (Haft et al. 2001) databases for functional annotation, with Prophane default settings. Taxonomic information for Desulfobacterota proteins was manually confirmed against the most recent NCBI nr database (blastp, https://blast.ncbi.nlm.nih.gov, September 2023). TrEMBL taxonomic information for Desulfobacterales was manually curated and set to phylum level for better comparability with the Silva database. Relative abundances were calculated as Normalized Spectral Abundance Factor (NSAF) values using the quantification method "max_nsaf" integrated in Prophane. Briefly, SAF values were calculated by dividing exclusive unique spectrum counts for each protein group by the protein length of the longest sequence in that protein group. For normalization, SAF values were then divided by the sum of all SAF values of the sample.
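The NSAF quantification just described reduces to a few lines; the toy table below uses illustrative protein-group names and counts, not values from the study.

```python
# NSAF sketch: SAF = unique spectral counts / protein length, then normalize
# so that NSAF values sum to 1 per sample (toy values, illustrative names).
import pandas as pd

proteins = pd.DataFrame({
    "protein_group": ["dsrA", "dsrB", "aprA", "sat"],
    "unique_spectra": [12, 9, 7, 4],     # exclusive unique spectrum counts
    "length": [420, 380, 660, 400],      # longest sequence in the group (aa)
})
proteins["saf"] = proteins["unique_spectra"] / proteins["length"]
proteins["nsaf"] = proteins["saf"] / proteins["saf"].sum()
print(proteins[["protein_group", "nsaf"]])
```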
Data visualization Stacked bar plots for rRNA gene amplicon data and metaproteomics were created with R version 4.3.0 using the tidyverse package (Wickham et al. 2019) in combination with the svglite, polychrome, patchwork, glue and ggnested packages. Particle microbial community dynamics during the course of the bloom The spring phytoplankton bloom in 2018 was characterized by an initial dominance of diatoms (Bacillariophyceae), followed by an increase in dinoflagellate (Dinophyceae) relative abundances after the bloom peak (peak in chl a concentration, Fig. 1A and B). The algal bloom (chl a measurements) peaked around the 26th of April (Julian day 116), coinciding with a decrease in water clarity, as indicated by a drop in Secchi depth at the onset of the bloom. 16S rRNA gene amplicon sequencing revealed a total of 17 615 ASVs in the particle bacterial microbiomes, which consisted mainly of Proteobacteria (e.g. Spongibacteriaceae, Methylophagaceae, Rhodobacteraceae), Bacteroidetes (e.g. Polaribacter), Verrucomicrobia (e.g. Persicirhabdus) and Planctomycetes (e.g. Phycisphaeraceae). Further, Actinobacteria (e.g. Illumatobacter) and, notably, Desulfobacterota were also relatively abundant (Fig. 1C). Taxonomic and functional composition as detected by metaproteome analysis Metaproteome sampling was performed at three time points that correspond to the early, mid and late phases of the bloom (Julian days 107, 128 and 144), as indicated in Fig. 1A. We detected 4584 protein groups in the > 10 μm filter fractions from these sampled time points. Metaproteomic analyses confirmed high abundances of Proteobacteria and Bacteroidetes in the bacterial fraction (Fig. 2A), and dominance of Bacillariophyta in the eukaryote fraction (Fig. 2B), with eukaryote proteins making up 90% of all proteins at the first selected time point. The proportion of eukaryotic proteins was reduced to 70% at the last selected time point (144), while bacterial proteins became comparably more abundant (25% on day 144 compared to 4% on day 107). Functional analysis revealed a predominance of eukaryotic proteins involved in metabolism, including energy production and conversion, and a shift towards expression of proteins relevant in cellular processes and signaling over the course of the bloom (Fig. 2C). Co-occurrence network analysis Co-occurrence analysis resulted in distinct network modules centered around diatom and dinoflagellate 18S rRNA gene amplicon sequence variants (ASVs). Based on significant positive correlations (Spearman Rho > 0.7, corrected p < 0.01) between eukaryotic 18S rRNA gene and prokaryotic 16S rRNA gene ASVs, including the 51 analyzed time points during the 2018 spring bloom, the network was dominated by three major distinct modules (Fig. 3A). Two of these modules were dominated by eukaryotic ASVs belonging to diatoms and dinoflagellates, respectively, while the third module mostly contained diatom ASVs and was linked to the dinoflagellate-dominated network module. Along the timeline of the bloom, these modules roughly corresponded to the phytoplankton taxa prevalent in the early stages of the bloom (module I, mainly diatoms), during the late stages of the bloom (module II, mainly dinoflagellates), and mid-bloom around peak chl a (module III, Fig. 3A, Fig. 1A). The composition of bacterial ASVs that co-occurred with diatoms and dinoflagellates is depicted in Fig. 3B, and in more detail in supplementary Fig. 1. Eleven ASVs belonging to the Desulfobacterota were part of the main diatom-dominated network module I and co-occurred exclusively with diatoms, including the genus Desulfosarcina, the families Desulfocapsaceae and Desulfobulbaceae, as well as the lineage Sva1033 (Ravenschlag et al. 1999). In addition, 4 ASVs classified as Ectothiorhodospiraceae (genus Thiogranum) also co-occurred with diatoms (Fig. 3B). Analysis of Desulfobacterota proteins Out of the 4584 protein groups detected by metaproteome analysis, 19 were classified as belonging to different Desulfobacterota orders (Table S1). The relative abundance of predicted Desulfobacterota proteins averaged over all time points in the metaproteome was 0.14% (expressed as normalized spectral abundance factor, NSAF). This order of magnitude corresponded well to the relative abundance of Desulfobacterota ASVs in the microbiome at the same time points (average 0.38%, Fig. 1C). Remarkably, no less than 32% of the Desulfobacterota metaproteome fraction (in terms of protein abundance) consisted of key enzymes for dissimilatory sulfate reduction (Fig. 3C). Of these, ATP sulfurylase, both the alpha and the beta subunits of dissimilatory sulfite reductase (DSR), and adenylyl-sulfate (adenosine-5′-phosphosulfate) reductase (APS reductase) were detected. We used a conservative threshold for protein identification of at least two matching peptides to consider a protein as validly detected. Assessment of sediment influence In order to assess the potential influence of resuspension of anoxic sediments on our results, we analyzed local wind speed maxima, tide levels and Secchi depth (Fig. 1A), as well as detritus levels and the abundance of benthic diatom taxa (Fig. 4), along the course of the bloom.
The relative abundance of total Desulfobacterota ASVs correlated significantly with Secchi depth (R² = 0.33, P < 0.01) and with wind speed (R² = 0.11, P < 0.01), but not with water level (R² = 0.05, p > 0.05), as illustrated in Fig. 5. In addition, low Secchi depth coincided with high estimated levels of "detritus", i.e. microscopically unidentifiable particulate material (which can have both benthic and pelagic origin, Fig. 4A). The relative abundance of known benthic diatom genera was low (at most < 0.05% of 18S amplicon reads, Fig. 4B). The benthic diatom genus Cocconeis correlated significantly with wind speed (R² = 0.14, P < 0.01), indicating resuspension from benthic environments. Further, we searched the 16S amplicon dataset for other known sediment-associated organisms, such as Bathyarchaeota, which were not detected. We also did not detect any enzymes involved in denitrification (nrfA, narG, napA, nirK, nirS, nor and nosZ) in the metaproteomes. Figure 4. (A) Secchi depth and abundance of "detritus" (unidentifiable material) along the course of the bloom. Detritus levels were estimated under the microscope on a scale between 0 and 6 (0: "none", 3: "moderate", 6: "massive"). (B) Relative abundance of Desulfobacterota (16S rRNA gene) in relation to relative abundances of known benthic diatom taxa (Cocconeis, Navicula, Nitzschia, 18S rRNA gene).
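The simple linear regressions reported above (the authors used R's lm) can be sketched as follows; the vectors are random placeholders standing in for the 51 time points.

```python
# Sketch of the abundance-vs-environment regressions (placeholder vectors).
import numpy as np
from scipy.stats import linregress

desulfo = np.random.rand(51)        # Desulfobacterota relative abundance
secchi = np.random.rand(51) * 5     # Secchi depth (m)
wind = np.random.rand(51) * 20      # wind speed

for name, x in {"Secchi depth": secchi, "wind speed": wind}.items():
    res = linregress(x, desulfo)
    print(f"{name}: R^2 = {res.rvalue**2:.2f}, p = {res.pvalue:.3f}")
```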
Discussion The succession we observed during the 2018 spring bloom, with an initial dominance of diatoms followed by dinoflagellates, using 18S rRNA gene amplicon sequencing, agrees with the typical phytoplankton community succession at Helgoland Roads (Wiltshire et al. 2008, Käse et al. 2020). Metaproteome analysis highlighted the dominance of eukaryotic proteins in the sampled particles, making up 70-80% of detected proteins, most of which belonged to the major diatom phytoplankton. The high prevalence of proteins involved in basic metabolism, such as energy production and conversion, is consistent with the sampled > 10 μm particle fraction mostly comprising living phytoplankton cells, especially at the beginning of the bloom. However, bacterial proteins were present at all of the three selected time points, with a notable peak at day 144, and their taxonomic composition was very similar to that observed via 16S rRNA gene amplicon sequencing. Overall, the composition of particle bacterial microbiomes agreed with other reports from similar environments (Wang et al. 2024, Crump et al. 1999, Schultz et al. 2020, Heins et al. 2021, Reintjes et al. 2023). As hypothesized, bacteria co-occurring with abundant diatoms formed a distinct network module, with taxa that were different from those co-occurring with dinoflagellates in a second distinct module. A third network module contained mostly diatom taxa, but also one dinoflagellate ASV and other phytoplankton taxa. These network patterns offer an alternative way to visualize the temporal dynamics of the bloom, and should not be interpreted as evidence of physical interactions between taxa (Röttjers and Faust 2018). However, one pattern that is striking in our network analysis is the exclusive co-occurrence of Desulfobacterota with diatoms. We observed a high relative abundance of Desulfobacterota (in total 0.38% of bacterial amplicon reads) in particles (> 10 μm), especially in the early phase of the bloom when diatoms were dominating. Despite the limited resolution of metaproteome data compared to DNA-based methods, desulfobacterial proteins were represented at all selected time points (in total 0.14% of the metaproteomes). Network analysis further highlighted the temporal co-occurrence of Desulfobacterota with several diatom taxa. This raises the question of the niche filled by these anaerobic bacteria during an active phytoplankton bloom. Diatoms, such as Thalassiosira spp. and Thalassionema spp., which were co-occurring with Desulfobacterota in this study (Fig. 3A), are known to form aggregates (Thornton 2002). For example, several species of Thalassiosira extrude long chitin fibrils, which prevent sinking and bind exopolymeric substances (EPS) also produced by the algal cells (Herth and Barthlott 1979, Den et al. 2023). This creates a favorable environment for bacteria to attach, which can in turn stimulate algal EPS production (Gärdes et al. 2011). EPS makes particles adhesive, and thus bacteria can be captured in this sticky EPS layer. Smaller particles can aggregate into larger particles by collision and adhesion, in particular during phytoplankton blooms with high particle densities. Such aggregates can feature high numbers of living, photosynthesizing algal cells (Thornton 2002), which produce ample oxygen during the day, but at night respiration by the algal cells and their surrounding bacteria may deplete oxygen sufficiently for low-oxygen or even anaerobic micro-niches to form. Experiments with marine particles in laboratory roller tanks have demonstrated how oxygen-depleted zones can form in diatom-derived particles (Ploug and Bergkvist 2015), in part due to EPS associated with the algae rendering the particles impermeable to water flow (Zetsche et al. 2020). Interestingly, sulfate reduction was detected in such particles in an earlier study, and the reducing microzones where this process was presumably taking place were frequently associated with diatom frustules (Shanks and Reeder 1993). Desulfobacterota have not been identified as frequent members of diatom microbiomes in either cultures or the field so far (Helliwell et al. 2022).
However, a recent global survey of the diatom interactome detected positive correlations between diatoms and sulfate-reducing bacteria (Desulfovibrio) in samples from the Tara Oceans expedition (Vincent and Bowler 2020). In addition, Desulfobacterota have previously been repeatedly detected in particle-associated communities in the photic zone (Crump et al. 1999, Liu et al. 2020, Hallstrøm et al. 2022). In a parallel study, we reconstructed a genome of a Desulfobacterota member from metagenomic data from material sampled during the same phytoplankton bloom (Wang et al. 2024). Ectothiorhodospiraceae are purple sulfur bacteria (belonging to Gammaproteobacteria) that oxidize reduced sulfur compounds as electron donors during anoxygenic photosynthesis and are anaerobic to microaerophilic (Imhoff et al. 2022). They can oxidize H2S, e.g. produced via dissimilatory sulfate reduction by members of the Desulfobacterota. Our observation of Desulfobacterota as well as Ectothiorhodospiraceae co-occurring with diatoms is consistent with potential sulfur cycling during the diatom-dominated phase of this phytoplankton bloom under anoxic to very low oxygen conditions. While co-occurrence of diatom and Desulfobacterota rRNA genes does not by itself indicate that sulfate reduction is taking place in diatom-derived particles, detection of key functional enzymes by metaproteomics suggests that Desulfobacterota were actively carrying out sulfate reduction, and thus gaining energy through anaerobic respiration, during the bloom. The higher detection of these enzymes especially at the last time point (Julian day 144, Table S1) can likely be attributed to the larger proportion of bacterial proteins at this time point, when the diatom bloom was declining. Temporal associations, such as those detected by our co-occurrence network analyses, have to be interpreted with caution, as additional variable factors that were not taken into account may influence the observed correlations. Importantly, we cannot quantitatively assess the influence of underlying sediment microbiomes at the rather shallow sampling site (6-10 m depth depending on tide), which frequently become resuspended during times of heavy wind, thereby introducing anaerobic microbes to the pelagic environment. Indeed, some of our results point towards a significant influence of sediment microbiomes, such as the correlation between the relative abundance of total Desulfobacterota ASVs and Secchi depth, as well as between Desulfobacterota and wind speed. The detected Desulfobacterota ASVs (e.g. classifying as Desulfosarcina, Desulfocapsaceae, Desulfobulbaceae) are related to typical benthic lineages (Ravenschlag et al. 1999), suggesting that they originated from the underlying sediments. The diatoms co-occurring with Desulfobacterota were primarily classified as common pelagic genera such as Thalassiosira and Thalassionema. However, the genus Brockmanniella, which can also inhabit benthic biofilms (Hernández Fariñas et al. 2017), was also found within network module I.
The primarily benthic diatom genera Nitzschia, Navicula and Cocconeis were not part of the network, but Cocconeis relative abundances correlated with wind speed, indicating a turbulence-driven benthic-pelagic coupling. Nevertheless, a benthic origin of the Desulfobacterota detected in our study does not exclude physical interactions with the blooming pelagic diatoms. In shallow environments, benthic and pelagic microbiomes are in close contact within the photic zone, and seeding of pelagic particles by benthic microbes is likely frequent, which may explain the observed co-occurrences. In fact, most of the enzymes involved in sulfate reduction were detected at the last time point, when the diatom bloom was declining and wind speeds were moderate, indicating low sediment resuspension. Microscopic analysis revealed that high abundances of the haptophyte phytoplankton Phaeocystis globosa were also present at this time point (Wang et al. 2024), which may have contributed to aggregate formation (Schoemann et al. 2005). Importantly, our methodology does not allow us to determine the physical proximity of Desulfobacterota and diatom cells. We therefore cannot rule out that the observed co-occurrences reflect a purely temporal association between pelagic diatoms and resuspended sediment bacteria. Indeed, demonstrating a potential physical association between Desulfobacterota and diatoms would require in situ microscopic investigation using, for example, taxon- or gene-specific fluorescent probes. With this approach, sulfate reducers were for example detected in oxygenated surface waters in the Black Sea (Bryukhanov et al. 2011). Likewise, it has yet to be clarified whether any such temporal or physical associations are annually recurrent, under what specific conditions they occur, and which specific diatom taxa are involved. Thus, further studies are needed to confirm whether our results are representative for phytoplankton blooms in shallow-water coastal areas. Our results highlight the complexity of particle microbiomes and corroborate the need to study algae-bacteria particles as spatially heterogeneous entities. The microbiomes of aggregate-forming phytoplankton may indeed be similarly complex as those of animals and other multicellular organisms, featuring distinct micro-niches with sub-microbiomes (analogous to e.g. human skin vs. gut microbiomes), whose compositions depend on the chemical environment at a small spatial scale. Anaerobic micro-niches in phytoplankton-derived aggregates may affect carbon cycling insofar as sulfate reduction can chemically alter particulate organic matter, rendering it resistant to microbial degradation (Raven et al. 2021). However, this effect has been demonstrated in oxygen-deficient zones of the ocean, and it is unclear whether it can be expected in oxygenated surface waters. Sulfate reduction in marine particles has been predicted to be prevalent in the vast hypoxic (< 60 μM oxygen concentration) waters of the global oceans, based on modeling of particle properties and oxygen regimes (Bianchi et al. 2018). The model could explain the observed precipitation of cadmium sulfide (CdS) in these waters (Janssen et al. 2014).
Surface waters in the photic zone at our sampling site are expected to feature far higher oxygen concentrations, at around 300 μM (7-10 mg/l, https://dashboard.awi.de/?dashboard=34404). However, anaerobic niches in marine phytoplankton-derived particles have already been frequently reported in the context of nitrogen fixation (i.e. diazotrophy, e.g. Riemann et al. 2022). Recent work has shown that many Desulfobacterota are in fact capable of both diazotrophy and sulfate reduction (Liesirova et al. 2023), and that diazotrophic Desulfobacterota can move towards phytoplankton-derived organic matter via chemotaxis. Similar to diazotrophs, sulfate reducers in turn provide additional niches for bacteria which scavenge their metabolic products, i.e. sulfur oxidizers such as the observed Ectothiorhodospiraceae, leading to higher metabolic diversity and complexity within particles. We suggest that incorporation of sulfate-reducing bacteria in phytoplankton-derived aggregates may be a relevant phenomenon, especially in shallow coastal areas where the microbiomes of underlying sediments provide a pool of abundant sulfate reducers to colonize aggregates. Considering the significance of marine particle microbiomes in carbon sequestration, it is vital to understand the consequences of their metabolic and compositional complexity, and the resulting microbial interactions in these microbiomes. Figure 1. Progression of the 2018 spring bloom by Helgoland, North Sea. (A) Chl a (green) peaked around the 26th of April (Julian day 116), coinciding with a drop in Secchi depth (brown). Wind speed maxima (grey) reached storm levels (German: "Sturm", > 75 km/h, dotted line) on three occasions in the first half of the bloom. Points indicate water levels (m from global reference zero point) and tidal direction (white points: falling, black points: rising). Asterisks indicate time points (Julian days) for which metaproteomic sampling was performed. (B) 18S rRNA gene amplicon sequencing revealed that diatoms (Bacillariophyta) were the most abundant phytoplankton lineage, although dinoflagellates (Dinophyceae) dominated after the chl a peak. (C) 16S rRNA gene amplicon sequencing showed the highest relative abundances of Desulfobacterota (yellow) during the first half of the bloom. Figure 2. Taxonomic and functional annotation of particle metaproteomes from three selected time points during the bloom. (A) The proportion of bacterial proteins increased during the course of the bloom. (B) Eukaryotic proteins made up the majority of identified proteins and showed an initial strong dominance of Bacillariophyta. (C) Functional annotation of metaproteomes indicated a high contribution of metabolism-related eukaryotic proteins early in the bloom, while cellular processes and signaling increased at later time points.
Figure 3. Co-occurrence of eukaryotes and bacteria, and detected protein groups of Desulfobacterota. (A) A network analysis of 18S rRNA (squares) and 16S rRNA (circles) gene ASV co-occurrences resulted in two major network modules containing diatom [I] and dinoflagellate [II] 18S ASVs, respectively, as well as one mixed module [III]. The network was calculated based on Spearman correlations (r > 0.7, p < 0.01) exclusively between 18S ASVs and 16S ASVs. Desulfobacterota (yellow circles) were only associated with the diatom-dominated module I. The sizes of the symbols correspond to the number of significant correlations of the nodes (degree). (B) The bacterial taxa co-occurring with diatoms and dinoflagellates, respectively, are displayed as horizontal bars with a length proportional to the number of ASVs belonging to each lineage. Desulfobacterota (yellow) as well as potentially sulfur-oxidizing Ectothiorhodospiraceae (Gammaproteobacteria, blue) were positively associated with diatoms but not with dinoflagellates. (C) The pie chart displays relative abundances for all 19 protein groups classified as belonging to Desulfobacterota during all three time points sampled for metaproteomics. Of these, seven protein groups (32%, bright yellow) represented enzymes involved in dissimilatory sulfate reduction. Yellow circles indicate which of the key enzymes in this metabolic pathway were detected.
8,467.2
2024-03-15T00:00:00.000
[ "Environmental Science", "Biology" ]
Asymptotic Behavior Analysis of a Fractional-Order Tumor-Immune Interaction Model with Immunotherapy A fractional-order tumor-immune interaction model with immunotherapy is proposed and examined. The existence, uniqueness, and nonnegativity of the solutions are proved. The local and global asymptotic stability of some equilibrium points are investigated. In particular, we present sufficient conditions for the asymptotic stability of the tumor-free equilibrium. Finally, numerical simulations are conducted to illustrate the analytical results. The results indicate that the fractional order has a stabilizing effect, and it may help to control tumor extinction. Introduction Tumor (or tumour) is the term used to describe a swelling or lesion formed by an abnormal growth of cells. A tumor can be benign, premalignant, or malignant, whereas cancer is by definition malignant and is used to describe a disease in which abnormal cells divide without control and are able to invade other tissues. Cancer cells can spread to other parts of the body through the blood and lymph systems [1], and so cancer is known as the leading cause of death in the world. During the last four decades, a large body of evidence has accumulated to support the concept that the host immune system interacts with developing tumors and may be responsible for the arrest of tumor growth and for tumor regression [2]. Immunotherapy holds much promise as a treatment option and is considered the fourth-line cancer therapy [3], using cytokines and adoptive cellular immunotherapy (ACI), since adoptive immunotherapy using lymphokine-activated killer (LAK) cells or tumor-infiltrating lymphocytes (TIL) plus IL-2 has yielded positive results both in experimental tumor models and in clinical trials [4]. The most current terminology used to describe cytokines is "immunomodulating agents", which are important regulators of both the innate and adaptive immune response. Cytokines are protein hormones produced mainly by activated T cells (lymphocytes) in cell-mediated immunity; interleukin-2 (IL-2), produced mainly by CD4+ T cells, is the main cytokine responsible for lymphocyte activation, growth, and differentiation. ACI refers to the injection of cultured immune cells that have antitumor reactivity into the tumor-bearing host, which is typically achieved in conjunction with large amounts of IL-2 by using the following two methods: LAK therapy and TIL therapy. For more information on cytokines and ACI, the reader is referred to [5] and the references therein. By applying each therapy separately or by applying both therapies simultaneously, Kirschner and Panetta [6] considered a model describing tumor-immune dynamics together with IL-2 dynamics. They proposed a model describing the interaction between the effector cells, tumor cells, and the cytokine (IL-2), whose first equation reads $$\frac{dE}{dt} = cT - \mu_2 E + \frac{p_1 E I_L}{g_1 + I_L} + s_1,$$ where E(t) represents the activated immune-system cells (commonly called effector cells), such as cytotoxic T cells, macrophages, and natural killer cells, that are cytotoxic to the tumor cells; T(t) represents the tumor cells; and I_L(t) represents the concentration of IL-2 in the single tumor-site compartment. The parameters and their biological interpretations are summarized in Table 1.
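For completeness, the full Kirschner–Panetta system [6], of which the effector-cell equation above is the first component, takes the following standard form (reproduced here from the cited source for reference, since only the first equation appears above; see Table 1 of the source for the parameter meanings):

$$\begin{aligned}
\frac{dE}{dt}   &= cT - \mu_2 E + \frac{p_1 E I_L}{g_1 + I_L} + s_1,\\
\frac{dT}{dt}   &= r_2 T\,(1 - bT) - \frac{a\,E T}{g_2 + T},\\
\frac{dI_L}{dt} &= \frac{p_2 E T}{g_3 + T} - \mu_3 I_L + s_2.
\end{aligned}$$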
For the nondimensionalized model (1), we adopt the scaling (2); model (1) is then converted into the form (3) (dropping the tildes). In recent years, fractional-order differential equations have attracted the attention of researchers due to their ability to provide a good description of certain nonlinear phenomena. Fractional-order differential equations are generalizations of ordinary differential equations to arbitrary (noninteger) orders. Some researchers have studied fractional-order differential equations to describe complex systems in different branches of physics, chemistry, and engineering [7]. In the last few years, many researchers have also employed fractional-order biological models [8]. This is because fractional-order differential equations are naturally related to systems with memory [8]. Many biological systems possess memory, and the conception of the fractional-order system may be closer to real-life situations than integer-order systems. The advantages of fractional-order systems are that they describe the whole time domain of physical processes, while the integer-order model is related to the local properties of a certain position, and that they allow greater degrees of freedom in the model [9]. Relevant works related to fractional modeling can be found in [10-13] and the references therein. To the best of the authors' knowledge, the dynamical analysis of a fractional-order tumor-immune interaction system with immunotherapy has not been performed before. Motivated by the above considerations, in this paper we study a fractional-order tumor-immune interaction system, model (4), obtained by extending the integer-order model (3), where α ∈ (0, 1) and ${}^{C}_{0}D^{\alpha}_{t}$ is the standard Caputo differentiation. The Caputo fractional derivative of order α ∈ (0, 1) is defined as [9,14] $${}^{C}_{0}D^{\alpha}_{t}f(t) = \frac{1}{\Gamma(1-\alpha)}\int_{0}^{t}\frac{f'(s)}{(t-s)^{\alpha}}\,ds.$$ In this paper, we consider immunotherapy to be ACI and/or IL-2 delivery, either separately or in combination, at the interaction site among effector cells, the tumor, and IL-2. The organization of this paper is as follows. In Section 2, the existence, uniqueness, and nonnegativity of solutions of the fractional-order model (4) are presented. In Section 3, the equilibria and (global) asymptotic stability analysis of the fractional-order model (4) are given. Numerical simulations verifying the theoretical results of the fractional-order model (4) are provided in Section 4. Finally, the study concludes with a brief discussion in Section 5.
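Since the Caputo derivative above enters all of the analysis that follows, it may help to see how such a system is integrated in practice. The sketch below implements a simple product-integration (fractional forward Euler) rule derived from the Volterra integral form of a Caputo system. The right-hand side f is a plausible nondimensionalized stand-in using the parameter names and values reported for the simulations in Section 4 (with a single saturation constant g assumed); it is a reconstruction, not necessarily the authors' exact model (4).

```python
# Sketch: explicit product-integration scheme for C_0 D^alpha x(t) = f(x), x(0) = x0,
# using x(t_n) = x0 + h^a/Gamma(a+1) * sum_j [(n-j)^a - (n-j-1)^a] f(x_j), j < n.
import numpy as np
from math import gamma

def caputo_euler(f, x0, alpha, h, n_steps):
    x0 = np.asarray(x0, dtype=float)
    x = np.zeros((n_steps + 1, x0.size))
    fx = np.zeros_like(x)
    x[0], fx[0] = x0, f(x0)
    coef = h**alpha / gamma(alpha + 1.0)
    for n in range(1, n_steps + 1):
        j = np.arange(n)
        w = (n - j)**alpha - (n - j - 1)**alpha   # quadrature weights
        x[n] = x0 + coef * (w[:, None] * fx[:n]).sum(axis=0)
        fx[n] = f(x[n])
    return x

# Stand-in right-hand side (u = effector cells, v = tumor, w = IL-2); the
# parameter values follow Section 4, but the functional form is assumed.
c, mu2, p1, s1 = 0.9, 1.0, 0.5, 3.0
b, a, g = 3.0, 1.0, 2.5
p2, mu3, s2 = 1.0, 1.0, 0.5

def f(x):
    u, v, w = x
    return np.array([c * v - mu2 * u + p1 * u * w / (g + w) + s1,
                     v * (1.0 - b * v) - a * u * v / (g + v),
                     p2 * u * v / (g + v) - mu3 * w + s2])

traj = caputo_euler(f, x0=[0.1, 0.1, 0.1], alpha=0.9, h=0.01, n_steps=2000)
print(traj[-1])   # under these values the tumor component v should decay
```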
Existence, Uniqueness, and Nonnegativity This section studies the existence, uniqueness, and nonnegativity of the solutions of the fractional-order model (4). To prove the existence and uniqueness of the solution for model (4), we need the following lemma. Definition 1 (see [16]). A point x* is called an equilibrium point of system (6) if and only if f(t, x*) = 0. Theorem 1. There exists a unique solution of the fractional-order model (4), which is defined for all t ≥ 0. Proof. Let 0 < T < ∞. We seek a sufficient condition for the existence and uniqueness of the solutions of the fractional-order model (4). For any X, X̄ ∈ Ω, it follows from (4) that F(X) satisfies the Lipschitz condition with respect to X. Consequently, it follows from Lemma 1 that there exists a unique solution of model (4). Theorem 2. Starting from any initial value in $\mathbb{R}^3_{+}$, all the solutions of the fractional-order model (4) are nonnegative. Proof. We will prove this theorem by contradiction. Suppose there exists t* ≥ 0 at which the solution of model (4) passes through either the u-axis, v-axis, or w-axis. Let α ∈ (0, 1); then there are three possibilities. Using the standard comparison theorem for fractional-order systems and the positivity of the Mittag-Leffler function, the solution of model (4) will therefore be nonnegative. Equilibria Analysis and Asymptotic Stability We investigate all nonnegative constant equilibrium points of (4). First, according to Definition 1, model (4) has four nonnegative equilibrium points which have at least one component equal to zero. The cases (2) and (4) are realistic tumor-free equilibrium points. On the other hand, (1) and (3) are not realistic, because the effector (or immune) cells do not disappear even though the immune system can be weak. Thus, in this section, to investigate the tumor-free equilibrium points, we examine the asymptotically stable behavior at the equilibrium points provided in cases (2) and (4). Next, we only provide the sufficient conditions for the existence of a unique positive equilibrium point E* = (u*, v*, w*) of (4) and omit the proof. Lemma 2 (Lemma 2.1, see [18]). If one of the following inequalities holds, then (4) has a unique positive equilibrium point E*. Now, we determine the local stability of the equilibrium points of model (4) using the linearization method. The Jacobian matrix of the system evaluated at a point X = (u, v, w) is obtained by differentiating F(X), where F(X) is defined in the proof of Theorem 1. A tumor-free equilibrium of (4) is locally asymptotically stable if $as_1 > g\mu_2$ and is unstable (a saddle point) if $as_1 < g\mu_2$. Therefore, according to Lemma 3, the equilibrium point E2 is locally asymptotically stable. Likewise, according to Lemma 3, the equilibrium point E4 is locally asymptotically stable. Remark 2. It follows from Lemmas 2.2 and 2.3 in [18] that the signs of some of the terms a_i, i = 1, 2, 3, can be determined. We next investigate the global stability of the positive equilibrium point E* by introducing a suitable Lyapunov function E(t) for the solution (u, v, w) of (4). Note that E(t) ≥ 0 for all t ≥ 0, and thus, if ${}^{C}_{0}D^{\alpha}_{t}E(t) \le 0$ can be derived, then we obtain the desired result from well-known Lyapunov stability theory. For better visualization of the impact of α on the asymptotic rate of convergence of the realistic tumor-free equilibria E2 and E4, Figure 3 indicates that with a higher value of α, the asymptotic rate of convergence of E2 and E4 will be larger. Note that v represents the tumor cells, and s1 and s2 represent the treatment by an external source of effector cells and by an external input of IL-2, respectively. (The simulations use model (4) with c = 0.9, μ2 = 1, p1 = 0.5, s1 = 3, b = 3, a = 1, g = 2.5, p2 = 1, μ3 = 1, α = 0.9, and s2 = 0.5.) Figure 4 implies the former case, and Figures 5 and 6 imply the latter case. The results show: (1) Tumor treatment by an external source of effector cells, i.e., s2 = 0 with different s1. Figure 4 shows that the higher the value of s1, the larger the asymptotic rate of convergence of v, i.e., the rate of tumor extinction; however, the variations are not obvious when s1 reaches a critical value. (2) Tumor treatment by an external source of effector cells without or with an external input of IL-2 into the system, i.e., s1 = 3, s2 = 0 or s1 = 3, s2 = 0.5. Figure 5 shows that the introduction of the new immunotherapy method accelerates the asymptotic rate of convergence of v, i.e., the rate of tumor extinction. (3) Tumor treatment by an external source of effector cells and an external input of IL-2 into the system, i.e., s1 = 3 with different s2.
Figure 6 shows that with the same value of s1 and a higher value of s2, the asymptotic rate of convergence of v, i.e., the rate of tumor extinction, will be larger; however, the variations are not obvious when s2 reaches a critical value. In other words, the desired best effect can be achieved by combining the two types of immunotherapy. Figure 5. Simulation of model (4) with c = 0.9, μ2 = 1, p1 = 0.5, s1 = 3, b = 3, a = 1, g = 2.5, p2 = 1, μ3 = 1, α = 0.9, and the same treatment by an external input of IL-2 into the system, s1 = 3. (a) Original drawing (the blue line represents the case where there is only the treatment by an external input of IL-2 into the system, i.e., s2 = 0, and the red line represents the case where, besides the treatment by an external input of IL-2, there is also the treatment by an external source of effector cells, i.e., s2 = 0.5). (b) Partial enlargement of (a). This agrees with the results of Theorem 6. This situation means that the tumor will exist indefinitely, which is incurable in medicine. Concluding Remarks In this paper, a fractional-order tumor-immune interaction model with immunotherapy is discussed. The existence, uniqueness, and nonnegativity of the solutions are proved. The local and global asymptotic stability of some equilibrium points are investigated. Unfortunately, by fractional calculation we cannot obtain the boundedness of solutions to the fractional-order tumor-immune model (4) with α ∈ (0, 1). In addition, numerical simulations are conducted to illustrate the analytical results. This yields that, under some conditions, the tumor can be cured thoroughly by the therapy (ACI or ACI plus IL-2); under some other conditions, combination therapy (ACI plus IL-2) can achieve satisfactory and stable tumor control; otherwise, however, the tumor is incurable. Data Availability The data used to support the findings of this study are available from the corresponding author upon request. Conflicts of Interest The authors declare that they have no conflicts of interest.
2,896.6
2020-04-28T00:00:00.000
[ "Mathematics" ]
Iranian EFL Learners’ and Teachers’ Beliefs About the Usefulness of Vocabulary Learning Strategies Vocabulary is an important part of language which is central to all language skills and meaningful communication. One way through which vocabulary learning can be facilitated is by the use of vocabulary learning strategies (VLS). VLSs can empower language learners to be more self-directed, regulated, and autonomous. Also, they can help language learners to discover and consolidate the meaning of the words more effectively. Teachers’ and students’ behavior, functioning, and learning are, however, controlled by their thoughts, beliefs, attitudes, and perceptions. The present study was an effort to explore the Iranian EFL (English as a Foreign Language) learners’ and teachers’ beliefs about the usefulness of different types of VLSs. To that end, a VLS questionnaire developed for this purpose was given to 392 EFL teachers and learners. Based on the results of the study, the Iranian EFL learners and teachers believed that strategies such as paying attention to vocabulary forms, functions, and semantic relations; guessing the meaning of new words from the context; and using monolingual dictionaries can be very useful in discovering and consolidating the meaning of new words. They, nevertheless, expressed hesitancy about using the L1, bilingual dictionaries, and mnemonic devices. The results of the Kruskal–Wallis test also showed that the preference for a few strategies differed across levels of education. Introduction In the past, vocabulary was sidelined in the area of language learning and teaching because grammar was considered to be the most important part of language and vocabulary was secondary to it (Milton, 2009). Developments in the area of linguistics, along with new sociocultural demands, challenged the status quo (Richards & Rodgers, 2003). More recently, researchers (Amiryousefi & Kassaian, 2010; Amiryousefi & Ketabi, 2011; Coady & Huckin, 1997; Hedge, 2008; Oxford & Scarcella, 1994; Richards & Renandya, 2002; Schmitt, 2010) have viewed vocabulary as an important part of language on which effective communication relies. Schmitt (2010), for example, believes that meaningful communication in a foreign language depends mostly on words. If learners do not have the available words to express their ideas, mastering grammatical rules does not help. Vocabulary has, consequently, gained popularity in the general field of English language teaching and learning, and research in second language lexical acquisition, retention, and instruction has increased (Coady & Huckin, 1997; Hedge, 2008; Richards & Renandya, 2002). During the previous decades, the area of language learning and teaching has also been marked by attempts to make language learners autonomous (Harmer, 2001; Hedge, 2008). Autonomy is believed to be the essence of language acquisition, which can help language learners take charge of their own learning (Little, 2007). One way through which language learners can become autonomous is to help them use language learning strategies (LLS) (Zarei & Elekaie, 2012). LLSs are a set of conscious or semi-conscious thoughts and behaviors which are used by language learners to facilitate their learning process (Cohen & Dornyei, 2002). Vocabulary learning strategies (VLS) are also a part of LLSs, defined by Gu (2003) as those behaviors and actions used by language learners to use and to know vocabulary items.
The present study is, therefore, an attempt to explore the Iranian EFL (English as a Foreign Language) learners' and teachers' beliefs about the usefulness of different types of VLSs and to examine the effects of level of education on their strategy preference. VLS Although research on VLSs has been done for several decades, the field has not been able to form a common and unified definition of what the term exactly means. Various authors and researchers (Catalan, 2003; Cohen & Dornyei, 2002; Gu, 2003; Nation, 2001; Takac, 2008) have defined VLSs differently. Nation (2001), for example, believes that VLSs are a part of LLSs which need to "1) involve choice. That is, there are several strategies to choose from, 2) be complex. That is, there are several steps to learn, 3) require knowledge and benefit from training, 4) increase the efficiency of vocabulary learning and vocabulary use" (p. 352). Cohen and Dornyei (2002), however, believe that VLSs involve memorizing, recalling, reviewing, and using vocabulary items. Catalan (2003) also believes that VLSs are those actions which are taken by language learners to find out the meaning of the words, to send them to their long-term memory (which is, based on Schmitt (2010), the ultimate goal of vocabulary learning), and to recall and use the words when needed. Takac (2008), however, believes that VLSs are those strategies which are solely used for vocabulary learning tasks. In spite of the differences in definitions, almost all researchers believe that studying VLSs can give teachers and researchers useful insights. First, by exploring the VLSs used by different language learners, useful information can be obtained regarding the cognitive, social, and affective processes involved in vocabulary learning (Chamot, 2001). Second, by exploring the VLSs used by successful learners, a list of useful strategies can be prepared to be taught to less successful language learners to help and support them in their language learning process (H. D. Brown, 2014). Third, by exploring the beliefs and attitudes of language learners about different VLSs, useful information can be obtained about their desired and expected behavior and actions (Schmitt, 2010). Learners' expectations and desires have big impacts on their learning behavior because, according to Schmitt (2010), if language learners do not value specific behavior and actions, they will not have the needed motivation, which is the very first step in the language learning process. Fourth, Tseng, Dornyei, and Schmitt (2006) believe that VLSs are the ways through which language learners can be empowered to be more self-directed in their learning. By exploring the beliefs and attitudes about different VLSs, or by getting information about the whats and hows of VLSs, teachers can raise their awareness about what works for and what works against their learners (H. D. Brown, 2014). Finally, research has shown that students from different cultural, linguistic, and educational backgrounds do not benefit from the same strategies (Gu, 2003; Tran, 2011). This means that culture plays an important role in vocabulary learning and VLS use. Language learners from different cultures may find different VLSs useful (Schmitt, 2000). By exploring the beliefs and perceptions of language learners about different VLSs, teachers will know what strategies to focus on.
VLS Classifications Different taxonomies and classifications of VLSs are available in the literature of vocabulary learning and teaching (Klapper, 2008; Nation, 2001; Rubin & Thompson, 1994; Schmitt, 1997). Rubin and Thompson's (1994) taxonomy of VLSs, for example, consists of three major parts: (a) direct approach, (b) indirect approach, and (c) mnemonics. The direct approach contains strategies, such as saying or writing the words several times and putting the words on cards, which direct language learners' attention to the vocabulary items themselves. The indirect approach, however, contains strategies, such as reading a text and trying to make sense of it, which focus learners' attention on language learning tasks rather than on individual vocabulary items. Mnemonics contain strategies, such as grouping the words and relating them to a picture, which are used to retain the words in memory. Schmitt's (1997) taxonomy, however, consists of 58 strategies which can be divided into two broad categories, namely, discovery strategies, which are used to discover the meaning of the words, and consolidation strategies, which are used to retain them. Schmitt's (1997) taxonomy is believed to be the most comprehensive one because it was specifically prepared for vocabulary learning, and there is little overlap between the classifications of the strategies (Akbari & Tahririan, 2009). Nation (2001), in his taxonomy of different kinds of VLSs, distinguishes between aspects of vocabulary knowledge, sources of vocabulary knowledge, and learning processes (pp. 352-353). Aspects of vocabulary knowledge refer to what is involved in knowing a word, such as its written and spoken forms; sources of vocabulary knowledge refer to the context in which the word is used; and learning processes refer to those actions that lead to the learning and retention of the given words. His taxonomy, therefore, consists of three major parts, namely, planning strategies, which are used to choose what to focus on; source strategies, which are used to find information about words; and process strategies, which are used to establish vocabulary knowledge. Klapper (2008), however, divides VLSs into those strategies which are used in explicit vocabulary learning and those which are used in implicit vocabulary learning. Strategies such as analyzing vocabulary items, using cards, or keeping vocabulary notebooks are used in explicit learning, whereas strategies such as listening to stories, watching movies, or reading stories are the ones used in implicit learning. Tran (2011) modified the taxonomy developed by Catalan (2003) to study the Vietnamese EFL teachers' perceptions about vocabulary learning and teaching. His taxonomy contains 68 items which are also divided into two main parts, namely, discovery strategies and consolidation strategies. The discovery part has strategies, such as leafing through the dictionary to learn words and asking the teacher for an L1 translation, which can help language learners to discover the meaning of the words. The consolidation part, however, has strategies, such as using scales for gradable adjectives and using mnemonic devices, which can be helpful in consolidating the meaning of the learned items.
The Importance of Students' and Teachers' Beliefs Research (Borg, 2003; Nation & Macalister, 2010; Phipps & Borg, 2009; Rashidi & Moghadam, 2014; Riley, 2009) in the field of language teaching and learning has shown that students' and teachers' beliefs about the nature of language and of language learning and teaching affect their pedagogical practices in the classroom. It is believed that beliefs affect students' and teachers' autonomy and success in language learning and teaching, and underlie all the choices they make. Differences in beliefs can, therefore, make students and teachers approach a learning task differently despite their similarities in language proficiency and level of education. Beliefs also influence students' and teachers' personal attributes such as anxiety and motivation (Riley, 2009). Nation and Macalister (2010) believe that what teachers and students do is determined by their beliefs. In the same fashion, Phipps and Borg (2009) believe that students' and teachers' beliefs act as a filter through which all practices and experiences are passed and interpreted. Social-cognitive theory also states that a student's or a teacher's behavior, learning, and actions are products of a continuous interaction between cognitive, behavioral, and contextual factors. That is, a student's or a teacher's behavior, actions, and learning or teaching are shaped by factors such as the reinforcements experienced by himself or herself and/or by others' beliefs, perceptions, and interpretations of the task and context (Bembenutty & White, 2013; Kitsantas & Zimmerman, 2009). Language learners' use of LLSs in general and VLSs in particular is, consequently, affected by factors such as their own and their classmates' and teachers' beliefs about their usefulness and effectiveness. Beliefs are, nevertheless, considered to be dynamic and may change over time or as teachers' and students' attributes, such as level of education, change (Barcelos & Kalaja, 2011; Zhong, 2014). Riley (2009), moreover, believes that if teachers' and students' beliefs are consistent with each other, there will be a supportive atmosphere in the classroom which will enhance the quality of learning and teaching. Otherwise, there will be a clash and lack of understanding between the teachers and the students which may lead to dissatisfaction (Rashidi & Moghadam, 2014; Riley, 2009). Examining students' and teachers' beliefs about different aspects of language teaching and learning can, therefore, provide valuable information about which practices and tasks are considered useful by teachers and students and should be included, and about which differences exist between their beliefs and should be taken into account. The Purpose of the Present Study Under the umbrella of social-cognitive theory (Bembenutty & White, 2013; Kitsantas & Zimmerman, 2009), which states that beliefs and thoughts are powerful and can affect human beings' behavior and functioning, and with regard to the importance attached to the role of LLSs in making language learners autonomous (Zarei & Elekaie, 2012), the value credited to the role of vocabulary in language learning, and the role of culture in vocabulary learning and VLS use (Schmitt, 2000, 2010), the present study is an endeavor to explore the Iranian EFL learners' and teachers' beliefs about the usefulness of different types of VLSs. The present study, therefore, addresses the following research questions: Research Question 1: What are the most useful VLSs in the Iranian EFL learners' and teachers' opinions?
Research Question 2: Does the level of education affect the participants' beliefs about the usefulness of VLSs? The Study The Participants The sample of the study included 392 participants comprising 320 students and 72 English teachers. The participants of the study were English learners and teachers of four big and well-known institutes (Iran Language Institute, Pouyesh, Gouyesh, and the Iranian Academic Center for Education, Culture and Research (ACECR)) in Isfahan, Iran. The students who participated in this study were learning English at different levels, from elementary to advanced, at the adults' departments of these institutes in the spring semester of 2014. Their age ranged from 16 to 43, and their degrees ranged from high school diploma to PhD. The majority of the teachers were female. Their age ranged from 21 to 58, and they had different degrees in English, from BA to PhD. Table 1 shows the characteristics of the participants of the study. The Instrument To elicit the required data, a 5-point Likert-type scale questionnaire was used. The VLS questionnaire was developed based on the questionnaire used by Tran (2011) to study the Vietnamese teachers' beliefs about vocabulary learning and teaching. The original questionnaire contained 63 items which were divided into two broad categories of discovery strategies and consolidation strategies. Based on the characteristics of the Iranian EFL learners, teachers, and the teaching and learning context, it was, however, reduced to 54 items, and some of the items were also reworded. The anchor points for the items ranged from 1 = very useless to 5 = very useful. The questionnaire was divided into discovery strategies, with 15 items, and consolidation strategies, with 39 items. The discovery part of the questionnaire contained strategies, such as checking the meaning of the words in a bilingual or monolingual dictionary and guessing the meaning of unknown words, which can be used by language learners to arrive at the meaning of the words, while the consolidation part contained strategies, such as writing words down or saying them aloud several times to remember them, reviewing the words, and relating the words to personal experiences or pictures, which are used to consolidate their meaning. Based on J. D. Brown's (2001) suggestion to minimize measurement errors, the questionnaire was translated into Farsi, the participants' native language, and the Farsi version was given to the participants. The final version was reviewed by six experienced teachers and scholars, and based on their validation the necessary changes were made. To check the validity of the questionnaire, factor analysis with varimax rotation was also used. As shown in Table 2, the Kaiser-Meyer-Olkin measure (KMO) is bigger than 0.6 (KMO = 0.618 > 0.6), which shows that the 54 strategies fit into the two main tentative factors as originally hypothesized (i.e., discovery and consolidation strategies). The questionnaire was also given to a group of 32 English teachers and 69 English learners who were comparable with the participants of the study to explore its reliability, and the following results were obtained. As shown in Table 3, α is bigger than .7 for both the students and the teachers, which shows the reliability of the instrument used. Results The data were prepared for analysis and then analyzed using the Statistical Package for the Social Sciences (SPSS), version 16. Descriptive statistics were used to describe the responses to the items available in the questionnaire.
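As a side note, the reliability figure reported above (Cronbach's α) is straightforward to reproduce from raw questionnaire data. A minimal sketch follows, assuming the responses are arranged as a respondents × items matrix with no missing values; the data here are randomly generated placeholders, not the study's data.

```python
# Cronbach's alpha for a (respondents x items) matrix of Likert scores:
# alpha = k/(k-1) * (1 - sum(item variances) / variance of summed scale)
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# hypothetical example: 69 learners answering 54 five-point items
rng = np.random.default_rng(0)
demo = rng.integers(1, 6, size=(69, 54)).astype(float)
print(round(cronbach_alpha(demo), 3))
```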
For the sake of simplicity and space, the responses of the students and teachers to each part of the questionnaire are summarized in Table 4. The first column of Table 4 (ranks) presents Likert-type scale values (from very useless to very useful), and the numbers in the next columns show the percent of the students and teachers who selected those scales for each part. As shown in Table 4, around 89% of the students and 87% of teachers had selected either useful or very useful for the items available in the discovery part of the questionnaire, and around 87% of the students and 88% of the teachers had selected useful or very useful for the items available in the consolidation part. Generally speaking, the results show that the majority of the participants believed that both discovery and consolidation strategies are useful in vocabulary learning. These results are, however, too general and belong to those participants who had selected one of the anchor points of very useless, useless, useful, and very useful. The rest who had not responded or had selected "no idea" were excluded in the calculation of the general percent of each part of the questionnaire. The details about the most valued strategies are, however, presented in the following parts. The Most Useful Strategies To extract the most useful strategies in the participants' opinions, the frequency of the responses to each item of the questionnaire was calculated. The items for which at least 60% of the respondents had selected the anchor points of useful or very useful were marked as the most useful and the rest were discarded. Finally, 6 discovery and 20 consolidation strategies remained for the students, and 8 discovery and 25 consolidation strategies remained for the teachers. The results are shown in the following tables. Numbers 1 to 5 used in the tables, respectively, show 1 = very useless, 2 = useless, 3 = no idea, 4 = useful, 5 = very useful, and the numbers used under them show the percent of the teachers and the students who selected each of these anchor points. The letter "M" stands for the mean and "SD" for the standard deviation of the responses to the items listed. As it is clear, the teachers and the students did not agree on the usefulness of several strategies. Table 5 shows the results for those discovery strategies which were rated as the most useful by both the students and teachers. As shown in Table 5, both the teachers and the students believed that strategies such as paying attention to the function, suffixes and prefixes of the words, using monolingual dictionaries, guessing the meaning of the words and asking the teacher to use the words in English sentences, or give synonyms for them can be useful in discovering their meaning. Table 6, however, shows those strategies which were rated as the most useful only by the teachers. The teachers, unlike the students, believed that if the learners try to link English words to some Farsi ones and pay attention to the available pictures and clues, they can get their meaning better. The majority of the students did not, however, think so. Table 7, on the other hand, shows the most useful consolidation strategies in both the teachers' and students' opinions. As stated earlier, these strategies can be used to consolidate the meaning of the learned vocabulary elements. 
As shown in Table 7, both the teachers and the students believed that strategies such as using the words in interactions, learning the words in sentences, checking the pronunciation of the words, repeating the words, listening to English music, and watching English movies can be useful in consolidating the meaning of the words and sending them to long-term memory for future use. Table 8, however, contains those consolidation strategies which were rated only by the teachers as the most useful strategies for consolidating the meaning of already learned vocabulary elements. The Iranian EFL teachers, consequently, believed that activities such as relating the words to personal experiences, using flash cards, listening to tapes or CDs containing the words, and keeping a vocabulary notebook can be useful in consolidating the meaning of vocabulary elements. The majority of the Iranian EFL learners did not, however, agree on the usefulness of the strategies listed in Table 8. As shown in Table 9, they believed that, along with the strategies listed in Table 7, memorizing the newly learned vocabulary items can also be a useful practice in consolidating their meaning. However, the majority of the teachers did not believe that memorization can be a useful activity in this regard. Neither the Iranian EFL learners nor the teachers, nevertheless, believed in the usefulness of mnemonic strategies such as the key-word method and the loci method, which have a long-standing position in the literature. The Effect of Educational Level on Strategy Preference To examine the effects of level of education on strategy preference, the participants were first divided into different groups based on the degrees they held. Accordingly, the teachers were divided into three groups: (a) BA, (b) MA, and (c) PhD, and the students into four groups: (a) diploma, (b) associate degree, (c) BA, and (d) MA and higher. The reason why the students with MA and PhD degrees were classified into one group was that only a very small number (3) of the students had a PhD degree. The Kruskal-Wallis test was then used, because the data were not normally distributed, to measure the effects of level of education on the students' and teachers' strategy preference (a minimal code sketch of this test is given below). The results of the Kruskal-Wallis test for the teachers showed that for all 54 items of the VLS questionnaire the p values were greater than .05, indicating that there was no meaningful difference between the responses of the teachers in the three groups to the items of the VLS questionnaire. However, as shown in Table 10, the results of the Kruskal-Wallis test for the students suggested that for the consolidation strategies (a) trying to use the words in interactions; (b) associating the word with its word coordinates (for instance, apple is associated with peach, orange, etc.); and (c) reviewing the vocabulary section of the textbook, the p values were, respectively, .023, .034, and .025, all less than .05, representing a meaningful difference between the responses of the students in the four groups to these items. An inspection of the mean ranks (Table 10) showed that for Strategy Number 1, the mean rank of the students with an associate degree was 71.94, which was the highest.
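The Kruskal-Wallis comparison described above is a one-line call in most statistics packages; a minimal sketch in Python follows. The Likert ratings are randomly generated placeholders, not the study's data, and the group sizes are illustrative.

```python
# Kruskal-Wallis test: do ratings of one strategy differ across the four
# student education-level groups?
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(1)
diploma   = rng.integers(1, 6, 80)    # 1-5 Likert ratings, hypothetical
associate = rng.integers(2, 6, 40)
ba        = rng.integers(1, 6, 150)
ma_plus   = rng.integers(1, 6, 50)

stat, p = kruskal(diploma, associate, ba, ma_plus)
print(f"H = {stat:.2f}, p = {p:.3f}")  # p < .05 -> groups rate the strategy differently
```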
The frequency of the responses of the students in the different groups to the options available for this strategy also showed that all the students (100%) with an associate degree had selected the anchor points of useful or very useful, indicating that in these students' opinions trying to use the newly learned vocabulary elements in interactions can be a very good practice to consolidate their meaning. This strategy was also selected as one of the most useful consolidation strategies in both the teachers' and the students' opinions (Table 7). For Strategy Number 2, the highest mean rank, as shown in Table 10, was 81.15, which belonged to the students holding an MA or higher. The frequency of the students' responses to the options available for this strategy showed that around 50% of these students had selected either useful or very useful, showing that about half of these students believed that associating a word with its word coordinates can help them consolidate its meaning. The majority of the students in the other groups either had no idea regarding the usefulness of this strategy or selected useless or very useless. The highest mean rank for Strategy Number 3 was 64.26, which belonged to the students with a BA degree. The frequency of the responses of the students with a BA degree to this item showed that around 85% of them had selected useful or very useful for this strategy. As indicated in Table 7, this strategy was also selected as one of the most useful consolidation strategies in both the teachers' and the students' opinions. Discussion and Conclusion From the last half of the 20th century on, the field of language teaching and learning has been marked by efforts to shift attention from teacher-centered classes to learner-centered ones. To this end, terms such as autonomy, awareness-raising, and self-regulation have made their way into the field (H. D. Brown, 2014). In learner-centered classes, the emphasis is on making the learners independent and self-regulated. To do so, learners should become strategic learners; that is, they should be instructed to use LLSs (Schmitt, 2010). Teachers' and students' thoughts, perceptions, and beliefs are, however, important forces that can have great effects on the decisions they make and the pedagogical practices they use. Teachers' and students' actions, decisions, and functioning are, therefore, sifted through their thoughts and beliefs (Bembenutty & White, 2013; Kitsantas & Zimmerman, 2009). Exploring their beliefs about different aspects of language teaching and learning can provide useful insights about the processes involved. It can also raise teachers' and learners' awareness about what works and what does not (H. D. Brown, 2014). The present study was, therefore, an effort to explore the Iranian EFL teachers' and learners' beliefs about the usefulness of different types of VLSs and to examine the effects of level of education on their strategy preference. The results of the study showed that the students valued 6 discovery strategies (e.g., paying attention to the function of the word in the sentence, paying attention to the prefixes and suffixes, and guessing the meaning of the words) and 19 consolidation strategies (e.g., using the words in interactions, connecting the words to their synonyms and antonyms, and imagining the meaning of the words). The teachers also believed that these strategies were useful for vocabulary learning, except for memorizing the words, which was believed to be useful only by the students as a consolidation strategy.
The teachers, however, valued 2 more discovery strategies (linking the English word to a Farsi one that reminds the learners of the English word's form and meaning, and analyzing the available pictures and clues) and 6 more consolidation strategies (e.g., connecting the word meaning to a personal experience, using flash cards, and keeping a vocabulary notebook) for which the students did not show high esteem. The results of the Kruskal-Wallis test also showed that the teachers' beliefs about the usefulness of different VLSs were not affected by their level of education. For the students, however, the preference for a few strategies differed across levels of education. By considering the strategies which were valued by the participants of the study, it can be inferred that for the Iranian EFL teachers and learners: (a) paying attention to the form of vocabulary items is considered important both in discovering the meaning of the words and in consolidating them; this can be seen in strategies such as paying attention to the grammatical functions of the words, analyzing or considering the word affixes, and paying attention to the spelling and the spoken form of the words; (b) mechanical activities such as repeating the words, saying them aloud, or writing them down several times are also deemed essential; (c) the context is very important, both during the first encounter with the words, to understand their meaning, and during the time the learners try to retain them; (d) guessing the meaning from the context and paying attention to word relations can also promote vocabulary learning; and (e) vocabulary production also has an important place; it can be practiced through activities such as linking the new words in a story or using them in interactions with peers and teachers. Strategy preference may, however, be affected by learner factors such as level of education. The results of the present study further support the importance of context, word lists, semantic relations among words, guessing the meaning of words from context, and vocabulary production in vocabulary learning and retention, which have also been shown to be useful by studies in the literature (Krashen, 2004; Nassaji, 2003; Nation, 2002; Schmitt, 1997, 2010; Tran, 2011; Waring, 1997). However, the results of the present study do not support the results of the studies done by researchers such as Folse (2004) and Prince (1996), who found that L1 can be facilitative in vocabulary learning and that bilingual dictionaries can provide students with useful information. The present study, on the contrary, showed that the majority of the Iranian EFL teachers and learners expressed hesitancy to use translation activities and tasks, L1 equivalents, and English-to-Farsi dictionaries for vocabulary learning. Also, the majority of the participants did not agree on the usefulness of mnemonic devices such as the key-word method and the loci method, which have a long-standing position in the literature of vocabulary learning and teaching and are believed to contribute to vocabulary learning and retention (Amiryousefi & Ketabi, 2011; Sagarra & Alba, 2006). This may be due to cultural factors, which are believed to have an important role in VLS use (Schmitt, 2000).
The results of the present study can, to some extent, support the view that beliefs are dynamic and that changes in learners' and teachers' attributes, such as level of education, may result in a change in their beliefs about different aspects of language learning and teaching (Barcelos & Kalaja, 2011; Zhong, 2014). As the results of the present study suggest, teachers should be cognizant of the fact that their beliefs do not always match their students' beliefs. Teachers and students are, consequently, recommended to articulate their educational beliefs. In this way, teachers can raise their awareness of their students' accurate and inaccurate beliefs about all aspects of language learning and teaching in general, and about vocabulary learning and teaching in particular. This can help teachers justify those educational practices which, in their students' opinions, are not useful or logical, thereby stimulating satisfaction and cooperation in their students and hence increasing the quality of teaching and learning (H. D. Brown, 2014; Murray & Christison, 2011; Rashidi & Moghadam, 2014). Declaration of Conflicting Interests The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. Funding The author(s) received no financial support for the research and/or authorship of this article.
7,002.8
2015-04-01T00:00:00.000
[ "Education", "Linguistics" ]
Patch-based anisotropic diffusion scheme for fluorescence diffuse optical tomography—part 1: technical principles Fluorescence diffuse optical tomography (fDOT) provides 3D images of fluorescence distributions in biological tissue, which represent molecular and cellular processes. The image reconstruction problem is highly ill-posed and requires regularisation techniques to stabilise and find meaningful solutions. Quadratic regularisation tends to either oversmooth or generate very noisy reconstructions, depending on the regularisation strength. Edge preserving methods, such as anisotropic diffusion regularisation (AD), can preserve important features in the fluorescence image and smooth out noise. However, AD has limited ability to distinguish an edge from noise. In this two-part paper, we propose a patch-based anisotropic diffusion regularisation (PAD), where the regularisation strength is determined by a weighted average according to the similarity between patches around voxels within a search window, instead of a simple local neighbourhood strategy. However, this method has higher computational complexity and, hence, we wavelet compress the patches (PAD-WT) to speed it up, while simultaneously taking advantage of the denoising properties of wavelet thresholding. The proposed method combines the nonlocal means (NLM), AD and wavelet shrinkage image processing methods. Therefore, in this first paper, we used a denoising test problem to analyse the performance of the new method. Our results show that the proposed PAD-WT method provides better results than the AD or NLM methods alone. The efficacy of the method for the fDOT image reconstruction problem is evaluated in part 2. Keywords: fluorescence diffuse optical tomography, image reconstruction, anisotropic diffusion, nonlocal means, structural information, multimodal imaging, regularisation Introduction Fluorescence diffuse optical tomography (fDOT) is an optical imaging modality that provides three-dimensional (3D) images of fluorescent source distributions inside biological tissue (Ntziachristos 2006). The image reconstruction in fDOT is very challenging due to the ill-posedness of the inverse problem, which is a consequence of the multiple-scattering nature of biological tissues. To overcome this problem, regularisation methods are commonly used to stabilise the solution. Usually, a quadratic (L2 norm) regularisation or penalty term is added to the least squares objective function to enforce smoothness in the reconstructed image (Hansen 1998, Zacharopoulos et al 2009). In Correia et al (2011) we proposed to use a nonlinear anisotropic diffusion regularisation method, which has the ability to smooth out noise while preserving edges in images, and showed that the spatial localisation and size of fluorescence inclusions can be accurately estimated. In this method, a reconstruction step alternates with a regularisation step, i.e. a nonlinear anisotropic diffusion (AD) filtering step (Perona and Malik 1990). The AD method is a well-known and widely used image processing technique that provides satisfactory noise suppression and edge enhancement results. Buades et al (2005a) proposed the nonlocal means method (NLM), an image processing algorithm that surpasses the AD method. In the NLM method, pixels are averaged according to the similarity between patches around pixels within a search window W, instead of a simple averaging strategy within a local neighbourhood.
NLM exploits the redundancy of information within an image: it assumes that patches from different regions contain similar patterns, and averaging them effectively reduces noise. The NLM has superior denoising performance compared with local-based methods, but at the expense of higher computational complexity. Several methods have been proposed to accelerate the NLM without loss of denoising performance, such as a block-wise implementation of the NLM (Buades et al 2005b, Coupé et al 2008), preselection of similar patches based on the mean and average gradient of patches (Mahmoudi and Sapiro 2005) or the mean and variance (Coupé et al 2008), arrangement of patches in a cluster tree (Brox et al 2008), singular value decomposition (SVD) (Orchard et al 2008) and principal component analysis (PCA) (Tasdizen 2008). NLM using PCA-compressed patches not only reduces the computational time, but also improves the denoising performance (Tasdizen 2008). Another popular denoising/compression method is wavelet shrinkage, which is based on transforming an image into the wavelet domain (Donoho and Malik 1994, Donoho 1995, Donoho and Johnstone 1995). Important information is encoded by large wavelet coefficients, whereas most of the small coefficients represent noise and can be removed using thresholding techniques. The denoised image is obtained by transforming the thresholded coefficients back into the image domain. The NLM method has been successfully used in medical imaging, for example, as a denoising technique (Coupé et al 2012, Chen et al 2012, Dutta et al 2013, Chan et al 2014) and in image reconstruction as a regularisation method (Chen et al 2008, Chun et al 2012, Wang and Qi 2012, Nguyen and Lee 2013). Here, motivated by the effectiveness of the NLM method in medical imaging applications, we propose a modification to our nonlinear anisotropic diffusion regularisation method for fDOT image reconstruction (Correia et al 2011). In this work, instead of the previous local strategy, we consider a patch-based approach where we use a robust edge-preserving potential function. We refer to this method as patch-based anisotropic diffusion (PAD). Moreover, we propose to reduce the patch dimensions using wavelet compression, not only to reduce computational complexity but also to increase the robustness of the method to noise. Therefore, the new method combines the advantages of the NLM, AD and wavelet shrinkage methods. We refer to this method as patch-based anisotropic diffusion with wavelet patch compression (PAD-WT). Since our new method is based on the NLM method, which was initially proposed for image denoising, we begin by assessing the performance of the PAD(-WT) method using a 2D denoising test problem (an ill-posed inverse problem). In the second part of this two-part paper, we study the performance of the PAD-WT method as a regulariser in fDOT image reconstruction. Methods Consider a noisy image of the form f_noise = f_true + ε, (1) where f_true is the true image and ε is the noise. Denoising algorithms aim at removing noise from the observed noisy image f_noise, returning an image as close as possible to f_true. The NLM and AD are examples of widely used image denoising methods.
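Before the formal definitions, a minimal sketch of one NLM step may help fix ideas. The implementation below is deliberately naive (nested loops rather than the vectorised or block-wise variants cited above), and the parameter values are illustrative only.

```python
# One step of nonlocal means: each pixel is replaced by a weighted average of
# the pixels in its search window, with weights based on patch similarity.
import numpy as np

def nlm_step(f, h=0.1, N=3, W=5):
    r = N // 2                                  # patch radius
    fp = np.pad(f, r + W, mode="reflect")       # pad so windows never leave the image
    out = np.empty_like(f)
    for y in range(f.shape[0]):
        for x in range(f.shape[1]):
            yi, xi = y + r + W, x + r + W
            pi = fp[yi - r:yi + r + 1, xi - r:xi + r + 1]
            num = den = 0.0
            for dy in range(-W, W + 1):
                for dx in range(-W, W + 1):
                    pj = fp[yi + dy - r:yi + dy + r + 1,
                            xi + dx - r:xi + dx + r + 1]
                    w = np.exp(-np.sum((pi - pj) ** 2) / h ** 2)
                    num += w * fp[yi + dy, xi + dx]
                    den += w
            out[y, x] = num / den               # normalised weighted average
    return out
```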
Anisotropic diffusion In the image space f ∈ R², the discretised anisotropic diffusion method is given by the following iterative scheme (Perona and Malik 1990): f_i^(k+1) = f_i^k + (τ/|W_i|) Σ_{j∈W_i} g(f_j^k − f_i^k)(f_j^k − f_i^k), (2) where k is the iteration number, |W_i| is the number of neighbours of pixel i (here |W_i| = 4, i.e. 2 neighbours per direction) and g is an edge-preserving function such as the Huber, Tukey, Total Variation or Perona-Malik function (Correia et al 2011). Here, we use the Perona-Malik function, defined by g(x) = 1/(1 + (x/T)²). The parameter T is the threshold and can be selected using the normalised cumulative histogram (NCH) of the gradient (Correia et al 2011). This method is commonly used in edge detection problems. The NCH indicates the probability ℘ of a gradient taking on a value less than or equal to the value X that the bin represents, i.e. ℘(X) = P(|∇f| ≤ X). It increases monotonically, and the smoothness/sharpness of the curve indicates how smooth/sharp the edges are. The threshold can be calculated from the NCH by setting it at, for example, 90 percent. Patch-based anisotropic diffusion We propose the following modification to the previous method: f_i^(k+1) = f_i^k + τ Σ_{j∈W_i} w_ij g(f_j^k − f_i^k)(f_j^k − f_i^k), (3) where w_ij is the weight that measures the similarity between two neighbourhoods (patches with fixed size P = N × N) centred at pixels i and j, within a search window W_i of size (2W + 1) × (2W + 1) centred at pixel i, where W is the maximum distance from pixel i (see figure 1 for an example). The weight w_ij is defined as w_ij = exp(−‖f(N_i) − f(N_j)‖² / h²) / C(i), (4) where N_i and N_j are the neighbourhoods centred at pixels i and j, respectively. The parameter h controls the exponential decay and C(i) is a normalising constant given by C(i) = Σ_{j∈W_i} exp(−‖f(N_i) − f(N_j)‖² / h²), (5) with ‖f(N_i) − f(N_j)‖² = Σ_p (f_ip − f_jp)², (6) where f_ip and f_jp represent the p-th element of the patches f(N_i) and f(N_j), respectively. Therefore, the weights w_ij are large when patches are similar and small when they are very different. Nonlocal means The new patch-based anisotropic diffusion method is based on the nonlocal means denoising method (Buades et al 2005a, 2005b): f_i = Σ_{j∈W_i} w_ij f_j. (7) NLM can be considered as one step of a fixed point iteration (Brox et al 2008), and denoising results can be improved by performing a few iterations: f_i^(k+1) = Σ_{j∈W_i} w_ij f_j^k. (8) The iterative NLM method computes a weighted average of pixel intensities at each iteration. In our proposed method (3), the averaged pixel intensities are replaced by the edge-preserving diffusion terms g(f_j^k − f_i^k)(f_j^k − f_i^k). Note that the patch-based anisotropic diffusion (PAD) method becomes the local AD when N = 1 and W = 1. Patch compression NLM methods have superior denoising performance compared with local methods, but at the expense of computational complexity. Principal component analysis (PCA) has been used to reduce data dimensionality and speed up the NLM method (Tasdizen 2008). Furthermore, the accuracy of the solution was shown to improve, since principal components that represent noise, i.e. with small variance, are removed. We propose to use a wavelet transform (WT) compression method instead. In this method, the high-frequency coefficients are removed, preserving the main information of the data while removing noise. Compression is applied to the patches, which are previously converted into one-dimensional arrays to simplify and speed up computations. The patch-based anisotropic diffusion method with wavelet patch compression (PAD-WT) is our proposed method. Principal component analysis. PCA is a statistical method widely used in data analysis and compression. PCA identifies similarities and differences in the data by representing the signal in terms of a set of new orthogonal variables called principal components. The first and last components have the largest and smallest variance, respectively.
Consider a patch f(N_i) with P elements, the mean of its elements f̄_i and covariance matrix C_f. The covariance matrix can be decomposed as C_f = U Λ Uᵀ, where Λ is a diagonal matrix whose elements are the eigenvalues and the columns of the orthonormal matrix U are the eigenvectors of the covariance matrix. The principal components are the eigenvectors ordered by descending magnitude of the corresponding eigenvalues. The first component is the principal component and represents the largest variance of the data. The dimensionality of the data can be reduced to d by projecting the patch onto the d largest eigenvectors, i.e. the first d columns of U; the less significant components are discarded. Wavelet transform. The fast wavelet transform (FWT) decomposes a signal or function into different frequency subbands (Mallat 2008). It uses a low-pass filter H and a high-pass filter G to obtain the approximation (ϖᶜ) and detail (ϖᵈ) coefficients. The approximation coefficients represent the approximation of the signal at a resolution 2ʳ, where r is an integer that specifies the resolution level. The detail coefficients contain the details of the original signal, i.e. the high-frequency information. For a signal with n samples, set ϖ_{r+1,n} = f_i(n) at a starting scale r + 1; then ϖᶜ_{r,n} = Σ_m H(m − 2n) ϖ_{r+1,m} and ϖᵈ_{r,n} = Σ_m G(m − 2n) ϖ_{r+1,m}. [Figure 1 caption: Similar to the NLM method, the patch-based anisotropic diffusion method is based on patch similarity. A square search window W is defined surrounding pixel i. The method estimates pixel i by taking a weighted average of all pixels j within W. The weight w_ij is calculated based on the similarity between the two neighbourhoods or patches f(N_i) and f(N_j), centred around pixels i and j, respectively. Similar patches give larger weights. In this 2D example, patches have dimensions P = 3 × 3, hence the patch length is N = 3, and the window is 7 × 7, i.e. W = 3, meaning that the maximum pixel distance from pixel i is 3 pixels.] Thus, the FWT consists of a convolution followed by downsampling by a factor of 2 (↓2), i.e. keeping the even-indexed samples. The wavelet transform can be implemented as a decomposition filter bank, where the initial signal goes through a series of filters. Note that the filters are related to each other by G(n) = (−1)ⁿ H(1 − n). Data compression is used to reduce the dimensions of the patches for computational efficiency. If a two-level 1D FWT is applied to the patch f(N_i), the obtained wavelet coefficients are (ϖᶜ_{r−1,n}, ϖᵈ_{r−1,n}, ϖᵈ_{r,n}), i.e. the coarsest-level approximation coefficients plus the detail coefficients of each level. The vectorised patch is compressed by keeping the largest coefficients, which contain most of the relevant information. Thus, the weights in equations (3) and (4) are calculated using ‖ϖ(N_i) − ϖ(N_j)‖² = Σ_p (ϖ_ip − ϖ_jp)², where ϖ_ip and ϖ_jp represent the p-th wavelet coefficient of the wavelet-transformed patches f(N_i) and f(N_j), respectively. Evaluation We used an image denoising problem to test the hypotheses that the proposed method is superior to NLM and anisotropic diffusion filtering, and that the WT is an efficient patch compression method. We used several images corrupted with Gaussian noise in the denoising test. First, we analysed the performance of the PAD-WT method for different wavelet types. Finally, we solved the denoising problem using the following iterative methods: (1) NLM (see equation (8)), (2) NLM with patch compression using PCA (NLM-PCA), (3) NLM with patch compression using wavelets (NLM-WT), (4) AD, i.e. local PAD (see equation (2)), (5) PAD (see equation (3)), (6) PAD with patch compression using PCA (PAD-PCA) and (7) PAD-WT.
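A condensed sketch of one PAD-WT iteration, as reconstructed in equations (3)-(5) above, is given below. It is written for clarity rather than speed (nested loops), and two details are deliberately simplified: a one-level Daubechies-2 decomposition is used instead of the two-level Daubechies-4 transform discussed in the paper (the shorter filter avoids boundary-length warnings on a 9-element vectorised patch), and the parameter values are placeholders.

```python
# One PAD-WT step: patch-similarity weights computed on wavelet-compressed
# patches, driving a Perona-Malik-type diffusion update.
import numpy as np
import pywt

def pad_wt_step(f, T=0.1, h=0.1, N=3, W=2, tau=1.0, keep=3):
    g = lambda x: 1.0 / (1.0 + (x / T) ** 2)        # Perona-Malik function
    r = N // 2
    fp = np.pad(f, r + W, mode="reflect")
    rows, cols = f.shape

    def patch_coeffs(y, x):
        p = fp[y - r:y + r + 1, x - r:x + r + 1].ravel()
        cs = np.concatenate(pywt.wavedec(p, "db2", level=1, mode="periodization"))
        idx = np.argsort(np.abs(cs))[::-1][:keep]   # keep the largest coefficients
        out = np.zeros_like(cs)
        out[idx] = cs[idx]
        return out

    out = f.copy()
    for y in range(rows):
        for x in range(cols):
            yi, xi = y + r + W, x + r + W
            ci = patch_coeffs(yi, xi)
            acc = wsum = 0.0
            for dy in range(-W, W + 1):
                for dx in range(-W, W + 1):
                    if dy == 0 and dx == 0:
                        continue
                    cj = patch_coeffs(yi + dy, xi + dx)
                    w = np.exp(-np.sum((ci - cj) ** 2) / h ** 2)
                    diff = fp[yi + dy, xi + dx] - fp[yi, xi]
                    acc += w * g(diff) * diff       # weighted diffusion term
                    wsum += w
            out[y, x] = f[y, x] + tau * acc / max(wsum, 1e-12)
    return out
```

With keep equal to the patch length (no compression) the sketch reduces to plain PAD, and with N = 1, W = 1 it reduces to the local AD scheme (2), mirroring the remark above.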
We considered two tests: Test (1), using a test image (figure 2) corrupted by Gaussian noise with different noise levels, ranging from 10% to 50% in steps of 10%, and Test (2), using several standard test images (figure 2) with the same noise level (20% Gaussian noise). All the images had 320 × 320 pixels. We assessed the denoising quality with a widely used figure of merit in image processing, the peak signal to noise ratio (PSNR) between the original image f_true and the denoised image f: PSNR = 10 log₁₀(max(f_true)² / MSE), with MSE = (1/NP) Σ_i (f_true,i − f_i)², where NP is the total number of pixels in the image. We solved the denoising problem for 20 noise realisations for each noise level and image. For all the methods, the iterative process was automatically stopped when PSNR_{k+1} < PSNR_k. We averaged the resulting PSNRs and calculated the corresponding standard deviations. As mentioned previously, the parameters T and h can be calculated from the NCH by setting the threshold at a certain percentage. The PSNR was calculated for different possible combinations of percentages (for NLM-based methods only h was calculated). The parameters that resulted in the highest PSNR were the ones used to generate the results presented here. The time step was set to τ = 1 for the PAD and AD methods. All methods were run on a Linux PC with an Intel Xeon E5-2665 CPU @ 2.4 GHz using Matlab MEX-files. Wavelet types We used Test 1 (left image in figure 2 with different noise levels) to identify a suitable wavelet type for the patch compression. The wavelet types tested were (Salomon and Motta 2009): Haar, Vaidyanathan, Daubechies of order 2, 4 and 6, Battle-Lemarié of order 1, 3 and 5, and Coiflet of order 1, 3 and 5. The patch and search window sizes of the PAD-WT method were set to N = 3 and W = 5, respectively. The largest ℓ = ⌈(N × N)/4⌉ wavelet coefficients are kept, where ⌈·⌉ denotes the ceiling function. For an N = 3 patch, ℓ = 3, which is considered to be the maximum compression for this patch size. Table 1 shows the averaged PSNRs (and standard deviations) obtained using the proposed method with different wavelet types to remove noise from an image corrupted with 5 different noise levels. The best results were obtained using Daubechies wavelets of order 4. Therefore, Daubechies wavelets of order 4 were used to compress the patches in the following analysis. Comparison between denoising methods First, we used Test 1 to compare the iterative NLM method with PAD and their local versions. As mentioned previously, the methods are considered local when N = 1 and W = 1. We compared the performance of the previous methods to their equivalents with patch compression, using PCA or WT. We calculated the averaged PSNRs and standard deviations of images obtained using the NLM and PAD methods, with and without patch compression, for window sizes W ∈ {1, 3, 5, 7, 15} and patches of length N ∈ {3, 7}. We retained a total of ℓ = ⌈(N × N)/4⌉ wavelet coefficients for the NLM(PAD)-WT method. For the NLM(PAD)-PCA method, we kept 1/2 of the principal components. Table 2 shows the denoising results for Test 1 obtained using equation (8) with N = 1 and W = 1 (local filtering), NLM, NLM-PCA and NLM-WT methods. Table 3 shows the results obtained using AD, PAD, PAD-PCA and PAD-WT, and the respective elapsed time for one iteration. Results show that NLM-based methods perform better than local filtering. This test also shows that the PAD method performs better than the conventional NLM or AD. Similarly, the PAD-WT shows superior performance compared to the NLM-WT method.
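In code form, the figure of merit and the stopping rule just described read as follows (images as float arrays; the step() function named in the comment is a placeholder for any of the iterative methods above):

```python
# Peak signal-to-noise ratio between the ground truth and a denoised image.
import numpy as np

def psnr(f_true: np.ndarray, f: np.ndarray) -> float:
    mse = np.mean((f_true - f) ** 2)              # MSE over all NP pixels
    return 10.0 * np.log10(f_true.max() ** 2 / mse)

# The stopping rule "iterate while the PSNR keeps increasing" then reads:
#   while psnr(f_true, f_next) > psnr(f_true, f):
#       f, f_next = f_next, step(f_next)          # step() = one denoising iteration
```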
Better results were obtained using PAD(NLM)-WT compared to PAD(NLM)-PCA. The PAD-WT method provides results close to those of PAD without patch compression for lower levels of noise and gives better results for higher noise levels (>30%), with the advantage of being faster. These results show that the size of the search window affects the performance of the PAD(NLM) method. This test suggests that for the PAD-WT method the optimal window and patch sizes are W = 7 and N = 3, respectively. The results obtained using W = 5 are almost as good, with the advantage of being approximately two times faster to compute. For a window of size W = 10 and patch N = 7, as used by Buades et al (2005b), the method is slower and the PSNR values are not higher. In this study, we did not intend to perform an extensive study on the optimal number of wavelet coefficients ℓ that results in the highest PSNR value. Nevertheless, we tested other values and there were no notable changes in the results (not shown). Tasdizen (2008) found that, for a patch of size N = 7, choosing d = 6 yields the highest PSNR. The performance of the NLM(PAD)-PCA for d = 6 and d = ⌈t × (N × N)⌉, where t ∈ {3/8, 1/4}, was also evaluated. The PSNR values (not shown) do not vary significantly, and hence the conclusions regarding the performance of the denoising methods remain the same. Nevertheless, for consistency, d was kept fixed since ℓ was also kept constant. Figure 3 shows the test image (Test 1) contaminated with different levels of noise and the best denoising results obtained using the PAD, PAD-PCA and PAD-WT methods. The proposed PAD-WT method is qualitatively very similar to the PAD and performs much better than the PAD-PCA method. However, note that the image background can be recovered using PAD-PCA if different T and h parameters are used, but at the expense of decreased PSNR values. Both PAD and PAD-WT demonstrate high denoising performance for noise levels up to 20%. For higher noise levels, low-contrast features cannot be recovered. However, for noise levels up to 40% the quality of the denoised images is still quite high. Iteration times are faster when wavelet patch compression is used compared to the other PAD-based methods (table 3). Test 2 was used to further evaluate the performance of the PAD-WT method. Denoising results for standard test images using PAD, PAD-PCA and PAD-WT are shown in table 4 and figure 4. For each method we used the window and patch sizes that gave the best Test 1 results for a noise level of 20%. The PAD-WT method returns slightly lower PSNRs than the PAD method for the Barbara and cameraman images, but higher values for the CT and MRI images. This denoising study suggests that the proposed method is suitable for medical imaging applications. One of the advantages of using the AD method is its computational speed. However, NLM-based methods can easily be parallelised and implemented to exploit GPU (graphics processing unit) acceleration (Cuomo et al 2014) to greatly reduce the execution times. Alternatively, to reduce the computational times, a semi-implicit scheme can be used, instead of the explicit discretisation used here, so that τ can be relatively large without causing numerical instability, which is particularly advantageous for reducing the number of iterations (Correia et al 2011). Conclusion In this study, we proposed a modification to the AD method, motivated by the effectiveness and simplicity of the NLM method for image denoising.
The proposed PAD method uses a patch-based approach instead of a local neighbourhood strategy. Additionally, the patches are wavelet compressed to speed up the method. We used a denoising problem to compare our PAD method with the AD method, the iterative NLM and its corresponding local model. The results obtained show that combining AD with NLM, i.e. the PAD method, provides better denoising results than these methods alone. Moreover, the results show that the proposed PAD-WT method has superior denoising performance compared with the analogous method with PCA patch compression. In addition, this method is faster and achieves comparable results or, for high noise levels, even better results than those obtained when patch compression is not used. Medical images are often corrupted by noise and artifacts intrinsic to the image acquisition process. Our study suggests that the proposed method is particularly suitable for medical imaging applications. In the second part of this paper, we use the PAD-WT as a regularisation method in fDOT image reconstruction. A split operator method is used for solving the fDOT inverse problem; this is a two-step method, in which a reconstruction step alternates with a regularisation step, i.e. the PAD-WT method.
5,112.8
2016-02-21T00:00:00.000
[ "Mathematics" ]
Clinical Significance of Serum p53 and Epidermal Growth Factor Receptor in Patients with Acute Leukemia Introduction The pathogenesis of leukemia is not yet completely clear; the conversion of proto-oncogenes, tumor suppressor gene aberration, and apoptosis inhibition may play an important role in the pathogenesis of the disease (Xiaoming and Weiping, 2009). p53 is a tumor suppressor gene encoding a nuclear phosphoprotein that plays an important role in controlling normal cell proliferation (Konikova et al., 1999). The suppression of p53 protein results in interruption of DNA repair mechanisms in dividing malignant cells, thereby increasing DNA damage and activating p53-independent mechanisms of apoptosis (Alachkar et al., 2012); p53 inactivation is thus a key factor in human tumorigenesis and chemotherapy resistance. The traditionally described mechanisms of p53 inactivation in AML include TP53 mutations and abrogation of the p53 pathway (Prokocimer and Peller, 2012). The epidermal growth factor receptor (EGFR) family belongs to the type I receptor tyrosine kinases. Overexpression or mutation of the EGFR/ErbB1 gene has been detected in a large number of human solid tumors. According to some previous reports, this gene is not expressed in hematological malignancies. However, two recent clinical case reports showed that erlotinib caused complete remission of AML-M1 in patients who had both AML-M1 and non-small-cell lung cancer (Sun et al., 2012). The rationale of this study was to analyze the pretreatment serum p53 and EGFR levels using ELISA in patients with acute leukemia and to investigate their correlations with hematological data. The roles of these variables in characterizing different subtypes of acute leukemia were also analyzed. Study subjects The present study was conducted in cooperation with the National Cancer Institute (Cairo University) and the Laboratory Research Unit (Gastroenterology Surgical Center, Faculty of Medicine, Mansoura University). A total of 46 patients with hematologically diagnosed acute leukemia were included: 32 patients diagnosed with acute myeloid leukemia (19 men and 13 women; mean age 32.61 years, range 14-53 years) and 14 patients with acute lymphoid leukemia (9 men and 5 women; mean age 27.38 years, range 18-41 years).
Patients with AML and ALL were classified into subtypes according to the French-American-British (FAB) classification, which is based on the type of cell from which the leukemia developed and how mature the cells are; it relies largely on how the leukemia cells look under the microscope after routine staining. The subtypes of AML involved in this study were M1 (26%), M2 (19.5%), M3 (15.2%), and M4 (8.7%), while the subtypes of ALL were ALL1 (15.2%) and ALL2 (15.2%). The control group included 24 healthy individuals (18 men and 6 women; mean age 33 years, range 24-42). The control individuals were selected without a clinical history of any chronic diseases and without symptoms or signs of acute or chronic leukemia. Peripheral blood samples were obtained from the patients and from the healthy subjects in the control group, and sera were promptly separated and stored at -20 °C until use. The study was approved by the local Research and Ethics Committees of Mansoura and Cairo Universities. Informed consent was obtained from the child's parent or guardian before inclusion in the study. Analysis of hematological data Peripheral blood samples were obtained from all studied groups (healthy individuals, acute myeloid leukemia, and acute lymphoid leukemia) to analyze hematological parameters such as hemoglobin content (Hb), red blood cell (RBC), white blood cell (WBC), and platelet counts, according to routinely used laboratory tests. Serum p53 and EGFR analysis using ELISA An in-house ELISA method was optimized to obtain the optimum reaction conditions. Polystyrene microtiter plates were coated with 50 µl/well of each serum sample diluted 1:1000 in carbonate/bicarbonate buffer (pH 9.6). The plates were incubated overnight at room temperature, washed three times using 0.05% (v/v) PBS-T20 (pH 7.2), and then incubated for 1 h at room temperature with 200 µl/well of 0.2% (w/v) non-fat milk in carbonate/bicarbonate buffer (pH 9.6). After washing, 50 µl/well of mouse monoclonal antibody Bp53-12 (Sigma), diluted 1:100 in PBS-T20, or monoclonal anti-EGFR, clone 29.1 (Sigma), diluted 1:1000, was added and incubated at 37 °C for 2 h. After washing, 50 µl/well of anti-mouse IgG alkaline phosphatase conjugate (Sigma), diluted 1:250 in PBS-T20, was added and incubated at 37 °C for 1 h. Excess conjugate was removed by extensive washing, and the amount of coupled conjugate was determined by incubation with 50 µl/well of p-nitrophenyl phosphate (Sigma) for 30 min at 37 °C. The reaction was stopped using 25 µl/well of 3 M NaOH, and the absorbance was read at 405 nm using a microplate autoreader (Bio-Tek Instruments, WI, USA). The cut-off level of the ELISA, above or below which the tested samples were considered positive or negative, was calculated as the mean concentration of 24 serum samples from healthy individuals + 2 SD. Statistical analysis Results were expressed as mean ± SD and were analyzed using the χ²-test, the Mann-Whitney U-test, Fisher's exact test, and Spearman correlation, as appropriate. The Mann-Whitney U-test was used to compare different groups for continuous variables, including the serum levels of mutant p53 and EGFR. The correlations between serum levels of mutant p53, EGFR, and the hematological data of patients were assessed by Spearman correlation. p ≤ 0.05 was considered significant. These statistical procedures were performed using SPSS software, version 11 for Windows (SPSS Inc., USA).
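The cut-off rule and group comparison just described translate directly into a few lines of code. The sketch below uses randomly generated placeholder readings, not the study's data; group sizes match the cohort described above.

```python
# Cut-off = mean(healthy controls) + 2*SD, positivity = fraction of patients
# above the cut-off, and a Mann-Whitney U test between patients and controls.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(2)
controls = rng.normal(0.20, 0.05, 24)   # 24 healthy-control readings (hypothetical)
patients = rng.normal(0.45, 0.15, 32)   # 32 AML readings (hypothetical)

cutoff = controls.mean() + 2 * controls.std(ddof=1)
positivity = (patients > cutoff).mean() * 100
u, p = mannwhitneyu(patients, controls, alternative="two-sided")
print(f"cut-off = {cutoff:.3f}, positivity = {positivity:.1f}%, p = {p:.4g}")
```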
Receiver operating characteristic (ROC) curves, area under the curve (AUC) calculations, and one-way analysis of variance (ANOVA) to compare the different subtypes of patients with acute leukemia were performed using MedCalc software, version 12 for Windows (MedCalc, Belgium).

Hematological data of the studied groups
The hematological data of all studied groups (HI, AML, and ALL) are listed in Table 1. Hb content, RBC count, and platelet count of patients with AML or ALL were significantly lower than in healthy individuals (p<0.0001 for all, except platelet count in the ALL group, p<0.05), while WBC count was significantly higher than in healthy individuals (p<0.0001 in both the AML and ALL groups).

Serum levels of p53 and EGFR
As shown in Table 2, the results demonstrated a significant difference between the serum levels of mutant p53 and EGFR in patients with AML compared with controls (p<0.0001 for both p53 and EGFR). Serum EGFR and p53 levels were also increased significantly in patients with ALL compared with the control group (p<0.0001 for EGFR, p<0.05 for p53). Our results showed that the positivities of p53 and EGFR in patients with AML were 60% and 78.12%, respectively, while the positivities of these variables in patients with ALL were 8.69% and 61.53%. The serum level of p53 in patients with ALL was significantly lower (p<0.01) than in patients with AML, but there was no significant difference in EGFR levels between the two groups.

Correlation amongst serum p53, EGFR and hematological parameters
As shown in Table 3, statistical analysis showed a strong positive correlation between serum p53 and EGFR (r=0.85, p<0.0001); p53 and EGFR thus appear to be dependent variables. Both p53 and EGFR were negatively correlated with Hb content [r=-0.57 (p<0.0001) and r=-0.42 (p<0.01), respectively] and RBC count [r=-0.5 (p<0.001) and r=-0.43 (p<0.01), respectively], while there was no correlation between these two variables and either WBC or platelet count.

Serum p53 and EGFR levels in different subtypes of AML and ALL
The data showed no significant difference in p53 or EGFR between the ALL subtypes (ALL1 vs ALL2), but these variables were able to discriminate subtype M4 from each of M1, M2, and M3 (p=0.03 for p53; p=0.028 for EGFR). On the other hand, neither p53 nor EGFR differentiated among the M1, M2, and M3 subtypes (Figure 2).

Discussion
p53 biosignatures contain useful information for cancer evaluation and prognostication (Anensen et al., 2012). In the present work, there were significant increases in the serum levels of p53 in patients with AL and AML (p<0.0001; positivity 52% and 60%, respectively) and in ALL (p<0.05; positivity 8.69%) compared with the control group. The serum p53 levels in patients with AML were also significantly higher than in the ALL group (p<0.01). These results indicate that the expression of p53 protein may act through different mechanisms in the pathogenesis of these two types of acute leukemia, and that it may be a helpful marker to differentiate between them. Several studies reported p53 gene mutations in only 5-10% of patients with AML (Diccianni et al., 1994; Hsiao et al., 1994; Wattel et al., 1994; Zhu et al., 1999). In contrast, the results of Sahu and Jena (2011) showed that 91% of patients with AML were p53-immunopositive by immunocytochemistry.
Also, measurement of p53 protein expression by flow cytometry showed a higher percentage of p53 expression in cells of AML patients at the time of diagnosis compared with controls (Konikova et al., 1999). Furthermore, the results of Park et al. (2000) revealed that overexpression of p53 protein was found in 38% of patients with AML, while 25% of patients with ALL were p53-immunopositive by immunohistochemistry. Significant increases of serum p53 protein in different human cancers have been reported by several authors (Segawa et al., 1997; Suwa et al., 1997; Shim et al., 1998; Sobti and Parashar, 1998; Morita et al., 2000; Charuruks et al., 2001; Chow et al., 2001). In addition, our previous reports revealed increased levels of serum p53 protein by ELISA in different gastrointestinal tumors (Attallah et al., 2003), hepatocellular carcinoma (Abdel-aziz et al., 2005), and colorectal cancer (Abdel-aziz et al., 2009). Epidermal growth factor (EGF) and its receptor (EGFR) form one of the most important ligand/receptor systems of mammalian cells. EGFR possesses intrinsic tyrosine kinase activity, and its overexpression is associated with malignant transformation (Schlessinger and Ullrich, 1992; Rajkumar, 2001). Previous studies reported that EGF/EGFR binding plays an important role in the carcinogenesis of several human tumors, because EGF stimulates proliferation of malignant cells through its receptor, EGFR (Yamazaki et al., 1998). In the present study, there was a significant increase in EGFR level in both the AL and AML patient groups compared with the control group (p<0.0001; positivity 73.91% and 78.12%, respectively). Furthermore, EGFR levels in ALL patients were significantly increased (p<0.0001) with a high positivity (61.5%) compared with the control group, while there was no significant difference in EGFR levels between the AML and ALL patient groups. The present work showed that the optimized home-ELISA technique allows serological quantitative analysis of these markers (p53 and EGFR), giving different sensitivities with good specificities (sensitivity/specificity: 52%/100% for p53; 73.91%/95.8% for EGFR). The AUC for each marker was calculated from its ROC curve; the AUCs for p53 and EGFR were 0.8 and 0.93, respectively (illustrated computationally below). These results indicate the good validity of p53 and EGFR for discriminating seropositive from seronegative samples of AL patients, and indicate that our optimized ELISA method is a reliable diagnostic technique for differentiating positive from negative cases. Furthermore, our results showed a significant positive correlation between p53 and EGFR; these markers thus appear to be dependent variables. On the other hand, there were significant negative correlations between these variables and some hematological data of AL patients, namely hemoglobin content and red blood cell count, while there were no correlations with white blood cell and platelet counts. The most important rationale of this study was to analyze the serum levels of these variables in the different subtypes of both AML and ALL patients. Our results showed that both serum p53 and EGFR levels in the M4 subtype were higher than in M1, M2, and M3 (p=0.03 for p53; p=0.028 for EGFR), while there were no significant differences among the M1, M2, and M3 subtypes. These results show that serological analysis of these markers plays a significant role in the characterization of AML subtypes.
In contrast, the serum levels of these markers failed to discriminate between the two subtypes of ALL (ALL1 vs ALL2). In conclusion, our optimized ELISA technique is a valid and reliable assay for the determination of serum p53 and EGFR. These markers are helpful serological markers for the diagnosis of both AML and ALL and can discriminate between different types of AL. Furthermore, they can differentiate among the different subtypes of AML and thus aid disease characterization. Our results encourage us and others to investigate the efficacy of these markers for monitoring patients with AL during and after treatment.
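As a hedged illustration of the ROC/AUC and sensitivity/specificity computations reported above (the study used MedCalc; the sketch below uses scikit-learn, and the labels and scores arrays are hypothetical placeholders, not the study's data):

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

# Hypothetical data: 1 = acute leukemia patient, 0 = healthy control;
# scores stand in for ELISA-derived serum marker concentrations.
labels = np.array([1, 1, 1, 1, 0, 0, 0, 0])
scores = np.array([0.9, 0.7, 0.3, 0.8, 0.2, 0.1, 0.4, 0.15])

fpr, tpr, thresholds = roc_curve(labels, scores)
print(f"AUC = {auc(fpr, tpr):.2f}")

# Sensitivity/specificity at a given cut-off (e.g., mean + 2 SD of controls):
controls = scores[labels == 0]
cutoff = controls.mean() + 2 * controls.std(ddof=1)
pred = scores >= cutoff
sens = (pred & (labels == 1)).sum() / (labels == 1).sum()
spec = (~pred & (labels == 0)).sum() / (labels == 0).sum()
print(f"cut-off = {cutoff:.2f}, sensitivity = {sens:.2f}, specificity = {spec:.2f}")
```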
Changes in Sound Localization Performance of Single-Sided Deaf Listeners after Visual Feedback Training in Azimuth

Abstract: Chronic single-sided deaf (CSSD) listeners lack the binaural difference cues needed to localize sounds in the horizontal plane. Hence, for directional hearing they have to rely on different types of monaural cues: the loudness perceived in their hearing ear, which is affected in a systematic way by the acoustic head shadow; spectral cues provided by the low-pass filtering characteristic of the head; and high-frequency spectral-shape cues from the pinna of their hearing ear. Presumably, these cues are differentially weighted against prior assumptions about the properties of sound sources in the environment. The rules guiding this weighting process are not well understood. In this preliminary study, we trained three CSSD listeners, with visual feedback, to localize a fixed-intensity, high-pass filtered sound source at ten locations in the horizontal plane. After training, we compared their localization performance for sounds of different intensities, presented in the two-dimensional frontal hemifield, to their pre-training results. We show that the training rapidly readjusted the contributions of the monaural cues and internal priors, a readjustment that appeared to be driven by the multisensory information provided during training. We compare the results with the strategies found for the acute monaural hearing condition of normal-hearing listeners, described in an earlier study [1].

Introduction
The healthy auditory system applies three types of acoustic cues to localize a sound [2]. For sound sources in the horizontal plane (azimuth angle), the system uses interaural differences in arrival times (ITDs; on the order of up to ±600 µs; a quick numerical check follows below) and in intensity (ILDs; up to about ±20 dB). The latter arise from the head-shadow effect (HSE), which causes frequency-dependent sound attenuations: the higher the frequency, the stronger the shadowing by the head, so the left-right difference can provide a unique frequency-dependent cue for source azimuth. Below approximately 1.5 kHz, the HSE is too small to be reliably detected by the brain (<1 dB), but at these lower frequencies the interaural time/phase differences become a reliable azimuth cue. Note that for all locations in the midsagittal plane the ITDs and ILDs are zero, and they therefore cannot specify the elevation direction of a sound source. The elevation angle can be extracted from the complex broadband spectral-shape cues that result from direction-dependent reflections and refraction of high-frequency (>4 kHz) sound waves within the pinna cavities, which can be characterized by direction-dependent head-related transfer functions (HRTFs; [2,3,4,5,6,7]). The auditory system needs to map these implicit acoustic localization cues to veridical two-dimensional sound-source directions in azimuth and elevation to achieve a coherent and accurate percept of sound location. The cues change over the course of one's life span, which suggests that the auditory system should be able to recalibrate and reweight them as the head and ears slowly grow and change. In earlier studies from our lab [8,9], we have shown that the capacity to learn new cues is not limited to early childhood. For example, the auditory system can adapt to chronic [10] and acutely imposed changes of the pinnae [8,11,12,9].
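As a quick plausibility check on the ±600 µs ITD range quoted above, the classic Woodworth spherical-head approximation (a standard textbook model, not something used in this study) gives values of the same order:

```python
import numpy as np

def woodworth_itd(azimuth_deg, head_radius=0.0875, c=343.0):
    """Woodworth spherical-head ITD approximation for frontal azimuths.

    ITD = (a / c) * (sin(theta) + theta), theta in radians.
    head_radius: assumed average head radius in metres (an assumption here).
    c: speed of sound in m/s.
    """
    theta = np.radians(azimuth_deg)
    return (head_radius / c) * (np.sin(theta) + theta)

for az in (0, 30, 60, 90):
    print(f"azimuth {az:2d} deg -> ITD = {woodworth_itd(az) * 1e6:6.0f} us")
# At 90 deg this yields roughly 650 us, consistent with the up-to-±600 us
# order of magnitude mentioned in the text.
```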
We recently showed that normal-hearing human listeners rapidly learn to remap the acoustic spectral cues for elevation during a short training session at a limited number of locations, both with and without visual feedback [13]. In a follow-up study [1], we subsequently demonstrated that the auditory system can also swiftly reweight its internal priors regarding the binaural difference cues and monaural head-shadow cues to improve localization performance in azimuth, in response to acutely imposed monaural hearing (ear-canal plug and muff over one ear). While there is consensus that rapid adaptation is possible in the auditory system [14,15,16,8,17,18,19], studies have so far reported different results on the reweighting of the different monaural and binaural localization cues and prior information sources used to estimate a sound's direction [20,21,22,23]. In this paper, we set out to study the short-term adaptive behavior of chronic single-sided deaf (CSSD) listeners, and to compare the results with acute conductive unilateral plugged (ACUP) listening in normal-hearing subjects, as described in the previous study [1]. In the latter group, the manipulation perturbed the highly robust binaural difference cues for sound-source azimuth, which led to an immediate and dramatic degradation of sound-localization performance in the horizontal plane, with a large localization bias towards the hearing ear. Interestingly, the deficit in binaural hearing also impaired localization in the vertical plane, i.e., in the up-down and front-back directions, as binaural spectral integration for the elevation angle is known to be mediated by perceived azimuth [12,24], which in the case of (acute) binaural impairment is erroneous. After learning with visual feedback, localization performance in azimuth improved, which could be partly attributed to a recalibration of the perturbed binaural differences, but elevation performance deteriorated. Possibly, learning to also use the (monaural) spectral cues to estimate azimuth may have interfered with the capacity to use the same cues for elevation. Alternatively, the listeners could have adopted a stronger weight for a near-zero elevation prior in response to the feedback learning. A similar problem might be present in single-sided deaf listeners, albeit manifested differently, as CSSD listeners cannot rely on any binaural difference cues to localize sounds in the horizontal plane. Furthermore, as CSSD listeners may have had ample time to adapt to their monaural listening condition, they may have learned to employ different localization and cue-weighting strategies than acutely plugged normal-hearing listeners, who lack such long-term monaural experience. As illustrated in Figure 1, we hypothesize that CSSD listeners could potentially make use of three monaural acoustic cues for directional hearing in the horizontal plane (see Appendix for further details): (i) the overall acoustic head shadow, which changes the perceived (broadband) sound level at the hearing ear in a systematic (sinusoidal) way with source azimuth (e.g., [25]; Eq. A1);
(ii) low-pass spectral filtering by the head, which causes an azimuth-dependent spectral head shadow in which higher frequencies are attenuated more than lower frequencies, creating an azimuth-dependent effective bandwidth at the hearing ear; and (iii) monaural spectral-shape cues from the pinna of the hearing ear, in which higher-order spectral-shape features, like notch width, steepness, and depth, could provide unique azimuth-dependent information.

Figure 1: Three potential monaural head-shadow cues that are available to single-sided deaf listeners. Each cue varies with source azimuth in a different way. (A) The overall proximal sound level (integrated across all frequencies) varies with azimuth and absolute sound intensity (here indicated for 50, 60 and 70 dB; Eq. A1). X indicates the deaf side (right ear). (B) The head's main spectral effect is approximated by a low-pass filter. The cut-off frequency (vertical dashed lines) for a flat spectral source (at fixed intensity) varies with source azimuth, α (Eq. A5). (C) The spectral fine structure of the HRTF (at zero elevation) also varies with source azimuth, as high frequencies are attenuated more than low frequencies. This leads to shape changes (widening and deepening) of high-frequency notches and peaks.

Note, however, that each of these cues is in principle ambiguous, as infinitely many sound-source spectrum and azimuth combinations can yield the same acoustic input at the hearing ear (the sketch at the end of this section illustrates this for cue (i)). As such, monaural sound localization is inherently ill-posed. To deal with this problem, the listener should therefore combine the acoustic sensory cues with additional sources of prior information, e.g., regarding the expected absolute source intensity, potential locations, and potential spectral profiles. To investigate their short-term adaptation behavior, and to compare their results with those obtained from acute conductive unilateral plugged subjects, three CSSD listeners were trained, with visual feedback about the true sound location, to localize a sound of fixed spectral content and intensity at only a limited number of locations in the horizontal plane. Listeners generated a fast head-orienting saccade to the perceived sound location, as well as a fast corrective head movement to the visual feedback stimulus. After training, they were tested for their localization behavior in the entire frontal hemifield. We discuss the preliminary results from these CSSD listeners by comparing their adaptive behaviour with the results from acutely monauralized listeners.

Participants
Three single-sided deaf listeners (M1-M3; ages M1: 25, M2: 53, and M3: 59; 1 female) participated in the free-field sound-localization experiments. All were naive regarding the purpose of the study. M1 and M3 were deaf in their left ear, M2 in the right ear. The non-affected ear of all three listeners had normal hearing, within 20 dB HL (see Table 1). Subjects were given a brief practice session to get acquainted with the setup and localization paradigm, and to reach stable localization performance for standard broadband Gaussian white noise stimuli.

Ethics statement
The local Ethics Committee of the Faculty of Social Sciences of the Radboud University (ECSW, 2016) approved the experimental procedures, as they concerned non-invasive observational experiments with healthy adult human subjects. Prior to their participation in the experiments, all subjects gave their full written consent.
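To make cue (i) concrete, below is a minimal sketch of the sinusoidal head-shadow model referenced above (Eq. A1, with the HSE = 10 dB value used later in the regression analysis); the appendix's exact parameterization may differ:

```python
import numpy as np

HSE_DB = 10.0  # head-shadow magnitude assumed in the paper's regression analysis

def proximal_level(i_snd_db, azimuth_deg):
    """Approximate broadband sound level at the hearing ear (Eq. A1 style):
    I_prox = I_snd + HSE * sin(azimuth), azimuth positive towards the hearing side.
    """
    return i_snd_db + HSE_DB * np.sin(np.radians(azimuth_deg))

# Reproduce the flavour of Figure 1A: proximal level vs azimuth for three intensities.
azimuths = np.arange(-90, 91, 30)
for i_snd in (50, 60, 70):
    print(i_snd, "dB:", np.round(proximal_level(i_snd, azimuths), 1))
# Note the ambiguity: e.g., a 50 dB source on the hearing side can produce the
# same proximal level as a 70 dB source on the deaf side.
```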
Experimental setup
During the experiments, subjects sat comfortably in a chair in the centre of a completely dark, sound-attenuated room (length × width × height: 3 × 3 × 3 m). The walls of the room were covered with black foam that prevented echoes for frequencies exceeding 500 Hz. The background noise level in the room was about 30 dB SPL [26]. Target locations and head-movement responses were transformed to double-polar coordinates [27]. In this system, azimuth, α, is defined as the angle between the sound source (or response location), the center of the head, and the midsagittal plane, and elevation, ε, is defined as the angle between the sound source, the center of the head, and the horizontal plane. The origin of the coordinate system corresponds to the straight-ahead speaker location. Head movements were recorded with the magnetic search-coil induction technique [28]. To that end, the participant wore a lightweight (150 g) "helmet" consisting of two perpendicular 4 cm wide straps that could be adjusted to fit around the participant's head without interfering with the ears. On top of this helmet, a small coil was attached. From the left side of the helmet, a 40 cm long, thin aluminum rod protruded forward with a dim (0.15 Cd/m²) red LED attached to its end, which could be positioned in front of the listener's eyes and served as an eye-fixed head pointer for the perceived sound locations. Two orthogonal pairs of 2.45 × 2.45 m coils were attached to the edges of the room to generate the horizontal (60 kHz) and vertical (80 kHz) magnetic fields. The head-coil signals were amplified and demodulated (Remmel Labs, Ashland, MA), after being low-pass filtered at 150 Hz (model 3343; Krohn-Hite, Brockton, MA), and were stored on hard disk at a sampling rate of 500 Hz per channel for off-line analysis.

Auditory stimuli
Acoustic stimuli were digitally generated using Tucker-Davis Technologies (TDT; Alachua, FL) System III hardware, with a TDT DA1 16-bit digital-to-analog converter (50 kHz sampling rate). A TDT PA4 programmable attenuator controlled the sound level, after which the stimuli were passed to the TDT HB6 buffer and finally to one of the speakers in the experimental room. All acoustic stimuli were derived from a standard Gaussian white noise (GWN) stimulus with 0.5 ms sine-squared onset and offset ramps. This broadband GWN control stimulus had a flat characteristic between 0.2 and 20 kHz and a duration of 150 ms. Three types of stimuli were presented during the experiments: broadband (BB), low-pass (LP), and high-pass (HP), containing the frequencies from 0.2 to 20 kHz, all frequencies up to 3.0 kHz, and the frequencies above 3.0 kHz, respectively (Figure 2). In the adaptation experiment, only the HP stimuli were used, because focusing on the HP stimuli excluded the ITD contribution to azimuth sound localization. Absolute free-field sound levels were measured at the position of the listener's head with a calibrated sound amplifier and microphone (Bruel and Kjaer, Norcross, GA).

Experimental paradigms
Calibration. Each experimental session started with a calibration paradigm to establish the mapping parameters of the search-coil signals to known target locations. Head-position data for the calibration procedure were obtained by instructing the listener to make an accurate head movement while redirecting the dim LED in front of the eyes from the central fixation LED to each of 58 peripheral LEDs, each of which was illuminated as soon as the fixation point was extinguished.
The 58 fixation points and the raw head-position signals thus obtained were used to train two three-layer neural networks (one for azimuth, one for elevation) that served to calibrate the head-position data, using the Bayesian-regularization implementation of the back-propagation algorithm (MatLab, Neural Networks Toolbox) to avoid overfitting [29]. In each sound-localization experiment, the listener started a trial by fixating the central LED (azimuth and elevation both zero; Figure 3, [1]). After a pseudo-random period between 1.5 and 2.0 s, the fixation LED was extinguished, and an auditory stimulus was presented 400 ms later. The listener was asked to redirect the head by pointing the dim LED at the end of the aluminum rod to the perceived location of the sound stimulus as fast and as accurately as possible.

Control session. The sound-localization experiments were carried out over two experimental days. The localization control experiment was performed on the first day. This experiment contained 300 trials with broadband, low-pass, and high-pass stimuli, presented at randomly selected locations ranging from [-60, +60] deg in azimuth and from [-40, +40] deg in elevation (see Figure 3). To prevent successful use of the HSE [25], the stimuli varied in intensity: sound levels of the HP stimuli varied between 45 dB and 70 dB in 5 dB increments, and sound levels of the LP and BB stimuli were either 50 dB or 65 dB (HP: 6 sound levels × 30 locations = 180 trials; LP and BB: each 2 sound levels × 30 locations, together 120 trials). The control experiment served to establish the subject's localization abilities and to assess the effect of sound level on the monaural listeners' localization performance prior to the adaptation experiment. That is, we chose a sound level for which they had a considerable bias toward the normal-hearing ear. The pre-adaptation, training, and post-adaptation experiments were performed on a second recording day.

Training. In the training experiment, subjects localized HP stimuli at 60 dB, presented at 10 fixed locations in the azimuth direction (+60, +48, +36, +24, +12, −12, −24, −36, −48, −60 deg) at zero elevation (Figure 3). After the sound was presented and the subject had made the localization response, a green LED in the center of the speaker was illuminated for 1500 ms. The subject had to make a subsequent head-orienting response to the location of the LED; this procedure ensured that the subject had access to signals related to programming a corrective response immediately after the sound-localization estimate. The training experiment consisted of 400 trials in which the ten locations were presented repeatedly in pseudo-random order (40 presentations per location).

Test sessions. The pre- and post-adaptation experiments contained the same 180 trials, consisting of three types of stimuli: HP50, HP60, and HP70 sounds. Stimuli were presented at pseudo-randomly selected locations in the full 2D frontal hemifield, ranging from [-60, +60] deg in azimuth and from [−40, +50] deg in elevation.

Data analysis
We analyzed the calibrated responses from each participant, separately for the different stimulus types, by determining the optimal linear fits for the stimulus-response relationships of the azimuth and elevation components,

R_α = a + b·T_α and R_ε = c + d·T_ε  (Eq. 1)

by minimizing the least-squares error (using the Scikit-learn library). Here, R_α and R_ε are the azimuth and elevation response components, and T_α and T_ε are the actual azimuth and elevation coordinates of the target.
Fit parameters a and c are the response biases (offsets, in deg), whereas b and d are the response gains (slopes, dimensionless) for the azimuth and elevation response components, respectively. Note that an ideal localizer should yield gains of 1.0 and offsets of 0.0 deg. We also calculated Pearson's linear correlation coefficient, r, the coefficient of determination, r², the mean absolute residual error (standard deviation around the fitted line), and the mean absolute localization error of each fit. Linear regressions for listener M2 were performed on the inverted azimuth coordinates of the stimulus-response relations, in order to align the deaf side to the left (positive bias) for all listeners.

Multiple linear regression. To test to what extent the monaural listener makes use of the ambiguous head-shadow effect (HSE) and the true source location (presumably through distorted remaining binaural cues and spectral cues, see above) to localize sound sources, we analyzed our data with a multiple linear regression. We evaluated the relative contributions of sound level and stimulus azimuth to the subject's azimuth localization response as

R̂_α = p·Î_prox + q·T̂_α  (Eq. 2)

Here, R̂_α, Î_prox, and T̂_α are the dimensionless z-scores of the response, proximal sound level, and target values, respectively, where ẑ = (z − µ_z)/σ_z, with µ_z the mean and σ_z the standard deviation of variable z. In this way, the contributions of sound level and sound location can be directly compared, although they are expressed in different units and may cover very different numerical ranges. The partial correlation coefficients, p and q, quantify the relative contributions of sound level and target azimuth, respectively, to the measured response. An ideal localizer would yield p = 0 and q = 1, indicating that the localization response is not affected by variations in perceived sound level. On the other hand, if p = 1 and q = 0, the responses are entirely determined by the head-shadow effect. The proximal sound level, Î_prox, was calculated as the perceived intensity at the free ear, using the approximation

Î_prox(T_α) = I_snd + HSE·sin(T_α)  (Eq. 3)

Here, I_snd is the actual free-field sound level (in dBA) at the position of the head, and the sine function approximates the head-shadow effect and ear-canal amplification for a broadband sound (we took HSE = 10 dB; see [27]). For the elevation responses, we extended the multiple regression analysis as follows [25]:

R̂_ε = p·Î_prox + q·T̂_α + s·T̂_ε  (Eq. 4)

Here, the elevation response was considered to potentially depend on the proximal sound level, the true target azimuth, and the true target elevation angle. For an ideal localizer, the partial correlations should yield [p, q, s] = [0, 0, 1]. A minimal sketch of these fits is given below.

Azimuth
Controls. Listeners were first exposed to control experiments in which they generated goal-directed head saccades to ten different sounds. Figure 4 shows the results for listener M2 in azimuth for the different stimuli. Responses were highly inaccurate for all three stimulus types, as gains and biases deviated substantially from the optimal values of 1.0 and 0.0 deg, respectively. The positive slopes of the regression lines were, however, significantly different from zero, and varied from one stimulus to the next. This indicates that the stimuli still appeared to contain some valid localization cues. Importantly, the localization bias changed in a systematic way with sound level for all three spectral stimulus types.
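As a hedged sketch of the analyses just described (the Eq. 1 linear fit and the z-scored regression of Eqs. 2-3), with made-up target azimuths and sound levels standing in for the real data:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)
HSE_DB = 10.0

# Made-up targets/levels (illustrative only, not the study's data).
T_az = rng.uniform(-60, 60, 200)                    # target azimuth (deg)
I_snd = rng.choice([50.0, 60.0, 70.0], 200)         # free-field level (dB)
I_prox = I_snd + HSE_DB * np.sin(np.radians(T_az))  # Eq. 3 approximation

# Fake responses driven partly by true azimuth, partly by the head shadow.
R_az = 0.5 * T_az + 2.0 * (I_prox - I_prox.mean()) + rng.normal(0, 5, 200)

# Eq. 1: linear stimulus-response fit -> gain b, bias a, r^2.
fit1 = LinearRegression().fit(T_az.reshape(-1, 1), R_az)
print(f"gain b = {fit1.coef_[0]:.2f}, bias a = {fit1.intercept_:.1f} deg, "
      f"r2 = {fit1.score(T_az.reshape(-1, 1), R_az):.2f}")

# Eqs. 2-3: z-scored multiple regression -> partial contributions p, q.
z = lambda v: (v - v.mean()) / v.std()
X = np.column_stack([z(I_prox), z(T_az)])
fit2 = LinearRegression(fit_intercept=False).fit(X, z(R_az))
p, q = fit2.coef_
print(f"p (sound level) = {p:.2f}, q (target azimuth) = {q:.2f}")
# Ideal localizer: p = 0, q = 1; responses driven purely by the HSE: p = 1, q = 0.
```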
As this listener is deaf in the left ear, soft sounds were perceived mainly towards the deaf side (negative bias), while the louder sounds shifted toward the hearing side. This result differs from the acute conductive unilateral plugged listeners, who displayed a large hearing-ear bias for all sounds in this experiment.

Pre-training. On the second day of the experiments, listeners performed the localization task for three high-pass filtered stimuli at different levels (HP50, HP60 and HP70). Figure 5 shows the pre-training regression results in azimuth for listener M2. The data indicate poor localization performance, as both gain and bias were far from the optimal values. However, it is quite clear that the bias increased for higher sound levels: the HP50 stimuli yielded smaller response biases (-9.3 deg) than the higher-intensity stimuli, HP60 (18.2 deg) and HP70 (47.2 deg). Also, the response gains were significantly larger than zero.

Training. In the training experiments, listeners were exposed to the 400 training trials in which they had to respond with a saccadic head movement to a HP60 stimulus, randomly selected out of ten locations in the azimuth plane. Immediately after the first localization response, an LED was presented at the center of the speaker to provide visual feedback about the actual source location. The subjects were instructed to make a corrective head movement towards the LED. This experiment was done to check whether visual feedback would lead to improvements in localization performance. Figure 6 shows the regression results of listener M2 during the course of the training session for three windows of 50 trials: at the beginning of the training session (trials 1-51), after the first phase of the training (trials 101-151), and near the end of the training (trials 351-401); a sketch of this windowed analysis appears at the end of the Results. Looking at the gain and bias obtained for the three windows, it is clear that both values improved as training advanced: the gain increased from b = 0.5 to 0.7, while at the same time the bias was reduced from a = 12.1 to 1.2 deg. In parallel, the response precision increased from r² = 0.6 to 0.8.

Figure 6: Localization data of M2 for the ten training targets (HP60 stimuli) presented in randomized order with visual feedback in the azimuth plane (elevation zero) at the start (trials 1-51), after 100 training trials (trials 101-151), and towards the end of the session (trials 351-401). Note the systematic increase of the response gain, and the reduction in response variability (increased r²) and bias during the session (cf. Figure 5B).

Post-training. Immediately after the training session, subjects performed the same experiment as in the pre-training session, without visual feedback, to investigate whether their localization performance differed from the pre-training results. Figure 7 shows the regression results for M2. Comparing the stimulus-response plots for this listener with the pre-training performance (Figure 5) reveals that the training had clearly affected localization performance, not only for the limited set of ten trained locations, but also for non-trained locations and for the different stimulus levels. The gain for stimulus HP60 had increased from b = 0.4 to 0.6, the bias had decreased by nearly 25 deg towards the deaf side (from pre: a = +18.2 deg to post: a = −6.5 deg), while at the same time localization precision improved as well (r² increased from 0.43 to 0.61). The behavior for the lower (HP50) and higher (HP70) intensity stimuli, however, appeared to be different.
For these sounds, the response gains remained low, at 0.2-0.3, but the biases were more strongly expressed: the soft sounds were heard further into the deaf side (bias decreased from a = −9 to −40 deg), whereas the louder sounds remained well into the hearing side (bias changed from a = +47 to +31 deg).

Figure 7: Stimulus-response plots for the azimuth components for M2 immediately after adaptation. Comparison of these data with Figure 5 shows that training changed localization performance considerably for the non-trained azimuth-elevation locations and stimulus levels: the bias for the 60 dB sound is now close to zero, with an increased gain, whereas the softer sounds were localized far into the deaf hemifield, and the loudest sound remained far into the hearing side, both with the same low gain.

To illustrate the training effect for the three listeners, we summarized the overall statistical results for the pre- and post-training sessions in Figure 8 for the HP50, HP60, and HP70 stimuli. The most consistent improvements were obtained for the trained HP60 stimuli: increased gains and precisions, lowered biases and MAEs. The results for the softer sounds showed no change in gain, r², and MAE (data points near the diagonal), and a significant increase in the negative bias, as sounds were consistently localized more into the deaf side. A slight localization improvement was observed for the higher-intensity sounds, as the MAE was significantly reduced because of the reduced bias. These response patterns, albeit preliminary, seem to differ markedly from the results obtained with the acute unilateral plugged hearing condition described in [1].

Figure 8: Averages across listeners are shown as insets: grey = pre-adaptation mean with SD, green = post-adaptation mean with SD. For the HP60 sounds, the post-adaptation results are more accurate (higher gains and smaller bias) and more precise (less variability, higher r²). The HP70 stimuli (right) yielded smaller overall errors because of the reduced positive bias; gains and precisions, however, did not change. The HP50 stimuli only yielded larger negative biases (into the deaf side), except for M1, who had a high response gain for this sound.

To describe better how subjects had changed their localization performance, we also performed a multiple regression analysis (Eqs. 2 and 3) on the pre- and post-adaptation data to quantify the contributions of the HSE and the true azimuth location to the responses. Figure 9B shows that the HSE (indicated by proximal sound level) made a large (negative) contribution (around −0.70) to the responses, but did not change significantly with visual feedback training. In contrast, the partial correlation coefficients for the true azimuth locations increased significantly from 0.78 ± 0.09 to 0.90 ± 0.11 for the three listeners (Figure 9C).

Elevation
To assess the potential training effect on the elevation performance of the SSD listeners, we performed multiple linear regression according to Eq. 4. Figure 10 shows the results of this analysis. The partial correlation coefficients of the HSE for the elevation responses did not change significantly with visual feedback training. The data, however, indicate that the partial correlation coefficients of the true azimuth locations increased for all three subjects, while the elevation contribution remained unchanged.
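The windowed training analysis referenced above (per-block gain, bias and r² over 50-trial windows, as in Figure 6) can be sketched as follows; the simulated training run is purely illustrative:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def window_fit(targets, responses, start, stop):
    """Fit gain/bias/r2 on a block of training trials [start, stop)."""
    T = np.asarray(targets[start:stop], dtype=float).reshape(-1, 1)
    R = np.asarray(responses[start:stop], dtype=float)
    fit = LinearRegression().fit(T, R)
    return fit.coef_[0], fit.intercept_, fit.score(T, R)

# Illustrative: simulated training run improving over 400 trials.
rng = np.random.default_rng(3)
T_all = rng.choice([-60, -48, -36, -24, -12, 12, 24, 36, 48, 60], 400)
gain_t = np.linspace(0.5, 0.7, 400)    # gain creeping up with training
bias_t = np.linspace(12.0, 1.0, 400)   # bias shrinking with training
R_all = gain_t * T_all + bias_t + rng.normal(0, 8, 400)

for start, stop in [(0, 50), (100, 150), (350, 400)]:
    b, a, r2 = window_fit(T_all, R_all, start, stop)
    print(f"trials {start + 1}-{stop}: gain={b:.2f}, bias={a:.1f} deg, r2={r2:.2f}")
```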
Discussion
In addition to acoustic information, the brain also uses non-acoustic information to localize sounds, such as prior knowledge, attention, or memory of specific non-acoustic details related to the sound source. For example, it is highly unlikely that the sound of a car originated from above or below. A normal binaural listener can localize sounds accurately on the basis of the measured acoustic information; such a listener will therefore put a smaller weight on the non-acoustic information than on the acoustic information. Non-acoustic information is even more important for CSSD listeners. However, the mechanisms underlying the adaptive processes in chronic single-sided deafness are still unclear. To study the mechanisms underlying the integration of the different acoustic cues and the effect of adaptation in chronic monaural listeners, we therefore investigated adaptation of the azimuth localization system in CSSD listeners; our experiment set out to establish how the auditory system copes with such ongoing changes. The subjects were exposed to a training session with a fixed-intensity, high-pass sound source presented at a limited number of locations in the horizontal plane, to which they had to generate head-orienting responses. By providing visual feedback during the training session, we investigated the ability of the listeners to cope with sound sources of various levels and spectra. Each localization response was followed by an LED indicating the true sound location, so that the subjects could correct their head-movement errors trial after trial. As shown in Figure 6, the CSSD listener (M2) improved her localization performance towards the end of the training for the HP60 stimuli. Our results also showed that the adaptation generalized to other target locations and to the 70 dB intensity, indicating that the adaptation was not a simple cognitive trick; in the acute conductive unilateral plugged listeners, by comparison, adaptation generalized to all sound intensities. Unlike in acute conductive unilateral plugged listeners, the adaptation in the horizontal direction in CSSD listeners did not affect the elevation responses. This may indicate that in CSSD listeners spectral pinna cues underlie the improved performance in the horizontal plane. Perhaps CSSD listeners have developed a mechanism during their lives by which an increased use of pinna cues from the hearing ear for the azimuth direction does not interfere with the estimation process for elevation. All three CSSD subjects slightly increased the HSE contribution during training. In the pre-adaptation sound-localization behavior of acute conductive unilateral plugged listeners, the azimuth direction is mainly determined by ITDs, monaural intensity and filter cues from the HSE, and potentially weakened ILDs.
For CSSD listeners, however, our data suggest that sound level plays a strong role, with a weak contribution of low-pass spectral filtering, as there is no contribution from surviving ILDs at all. The post-training localization data indicate that the CSSD subjects put a larger weight on the good-ear HRTF, which led to improved azimuth performance, while the elevation estimate is modulated by the azimuth percept (Figure 11). For the acute conductive unilateral plugged subjects, by contrast, elevation localization after training appeared to be largely determined by the prior.

Figure 11: Potential available localization cues for SSD listeners. (A) Because of the absence of any contralateral input from the deaf ear, only three monaural cues remain to estimate source azimuth (sound level, the good-ear HRTF, and the low-pass head filter). The pre-adaptation data suggest a strong contribution of I_prox and a weaker contribution from azimuth (from ipsilateral spectral or LP-filter cues). The elevation percept could be based on azimuth and HRTF cues, in combination with prior assumptions. (B) After training, the azimuth-related weights increased for azimuth estimation, possibly through increased weights of the HRTF and the low-pass head filter, whereas the elevation percept is based on a stronger azimuth percept. In listener M1, the prior weight had also increased.
Crude glycerol in the diets of juveniles of the Amazon catfish (female Pseudoplatystoma punctifer x male Leiarius marmoratus)

This research aimed to determine the best inclusion level of crude glycerol in the diet of the Amazon catfish (pintado) through zootechnical performance, body composition, metabolic profile and histopathology. The experiment was conducted at the Laboratory of Morphophysiology and Biochemistry of Neotropical Fishes of the Federal University of Tocantins. A total of 150 pintado juveniles, with initial weight of 6.83 ± 1.11 g and length of 10.06 ± 0.57 cm, were used in a completely randomised design with 3 replications (10 animals in each). They were fed five diets containing increasing levels of glycerol (0 g kg-1, 50 g kg-1, 75 g kg-1, 100 g kg-1, and 125 g kg-1) for 90 days (30 days of adaptation and 60 experimental days). The indexes evaluated did not differ statistically from each other, except for the specific growth rate, which showed a moderate linear behavior, and muscular glycogen, which at the 125 g kg-1 level presented a lower concentration than the control diet (0 g kg-1). Regarding histology, the crude glycerin did not cause significant hepatic or renal changes in this species, since the alterations found in the two tissues were considered lesions that either did not compromise the functioning of the organ or were reversible. Finally, the results indicate that juveniles of the Amazon pintado are able to metabolize crude glycerin up to the 100 g kg-1 level.

INTRODUCTION
The accentuated population growth entails some implications, such as the increasing demand for the basic inputs inherent to survival. This is why the scientific community has focused its attention on issues that promote the perpetuation of humanity and the sustainability of the planet, such as the development of renewable energy sources and the increase of food production. Thus, the production of biodiesel has been highlighted as an alternative to fossil fuels, which are considered the main source of greenhouse gas emissions such as carbon dioxide. In the national (Brazilian) scenario, biodiesel started to stand out with greater intensity from 2010, after the ratification of the mandatory use of biodiesel blended with fossil diesel, according to current national legislation. However, biodiesel production generates a significant amount of a by-product known as crude glycerol, which has a high polluting potential when not disposed of correctly. A possible solution to this problem is the use of this glycerol as an alternative feed for farm animals, because besides the advantages linked to its bioavailability, glycerol has a low cost. Fish farming has been one of the research areas that have tested this by-product as an alternative feed. Even though it is a sector that contributes to world food production, it faces some obstacles that make it difficult to expand, such as the need for dietary ingredients that are available only in small quantities. As a result, production costs increase, since feed represents between 70 and 90% of the production costs of captive fish. Studies investigating the use of crude glycerin in fish diets are still at an early stage.
Hence, in view of the current scenario of growth in aquaculture and biodiesel production, the search for information, in association with animal experimentation, on the use of this by-product as an energy ingredient in fish diets is essential to evaluate the substitution of conventionally used dietary energy sources and to provide a foundation for later studies with glycerol in fish nutrition. The present work was conducted with the aim of determining the best level of inclusion of glycerol in the diet of the hybrid Amazon catfish "pintado" by evaluating zootechnical performance, body composition, metabolic profile and histopathology.

II. MATERIAL AND METHODS
2.1 Experimental design
The experiment was conducted in the School of Veterinary Medicine and Animal Science of the Federal University of Tocantins (UFT), Araguaína Campus, TO, at the Laboratory of Morphophysiology and Biochemistry of Neotropical Fishes, from January to April 2017, following the standards written in the Law of Procedures for the Scientific Use of Animals of the Federal University of Tocantins (process number 23101.005896/2016-56). A total of 150 pintado juveniles, with initial weight of 6.44 ± 0.89 g and length of 10.06 ± 0.57 cm, were distributed in fiber boxes with 1000-liter capacity and constant water flow. Five treatments were tested, with three repetitions (fiber boxes) and 10 animals per experimental unit, in a completely randomised design. The diets were formulated to be isoproteic and isoenergetic, and all nutritional requirements were met according to Almeida (2014). The treatments consisted of five experimental diets: four with inclusion levels (50 g kg-1, 75 g kg-1, 100 g kg-1, and 125 g kg-1) of crude glycerol in partial substitution of maize, and a control treatment as reference (no inclusion of glycerol) (Table 1). The ingredients were pelleted in a meat grinder and dried in an oven with air circulation and renewal at 55 °C. The fish were fed twice a day (around 08:00 and 17:00) to apparent satiety for a period of 90 days, of which 30 days were for adaptation. Siphoning of the boxes was performed every day (15:00), and water quality parameters such as pH, oxygen, temperature and ammonia were measured weekly. After the experimental period, fish were fasted for 24 hours to empty the gastrointestinal tract. Five animals from each experimental unit were selected for biometrics, length (measured with a caliper) and weight (high-precision scale), to obtain zootechnical performance data as proposed by Fracalossi and Cyrino (2013). After this, the animals were desensitized on ice and the body composition analysis was performed. The indices were used to verify whether the tested feed interfered negatively or positively with the performance and health condition of the animals.

Zootechnical performance
Survival (SOB%) was calculated through the equation:
SOB (%) = nf / ni x 100  (1)
in which: nf = final number of animals; ni = initial number of animals.
The specific growth rate (SGR) shows the daily growth of the animals, obtained as a percentage. For this calculation, the following expression was used:
SGR (%/day) = [(ln pf − ln pi) / t] x 100  (2)
in which: ln pf = natural logarithm of the final weight; ln pi = natural logarithm of the initial weight; t = time (days).
The hepatosomatic index (HI) is the ratio between the total liver weight and the total fish weight.
This index is obtained according to the following formula:
HI = [liver weight (g) / fish weight (g)] x 100  (3)
Weight gain (WG) is the final weight of the animal minus the initial weight:
WG = final weight (g) − initial weight (g)  (4)
The condition factor (CF) is a parameter that indirectly measures the physiological state of the animal in relation to stored energy, such as hepatic glycogen and body fat. For its determination, the following formula was used:
CF = [weight (g) / total length (cm)³] x 100  (5)
Apparent feed conversion (AFC) is the amount of feed needed for the animal to gain 1 kg of live weight:
AFC = consumed diet (g) / (final weight − initial weight)  (6)
Feed efficiency (FE) is the average weight gain per fish in the group divided by the average feed intake per fish; it thus measures the efficiency with which the animal converted the consumed diet into live weight:
FE = [mass gain (g) / amount of diet ingested (g)] x 100  (7)
A combined sketch of these calculations is given after this section.

Body composition
Analysis of fish body composition was performed according to the standard methodology described by INCT/Detmann et al. (2012). The analyses covered the five treatments tested (0 g kg-1, 50 g kg-1, 75 g kg-1, 100 g kg-1, and 125 g kg-1 of crude glycerin) with 5 replicates (one for each fish used).

Hematologic parameters
For the hematological analyses, fifteen individuals from each treatment were randomly selected. Blood collection was performed by caudal puncture using syringes and needles bathed in ethylenediaminetetraacetic acid (EDTA), and the animals were then desensitized on ice. Blood samples were used immediately for the determination of hematocrit (Htc; microhematocrit technique, according to Wintrobe, 1929), hemoglobin (Hb; cyanmethemoglobin method, according to Drabkin, 1948), and red cell count (RBC) in a Neubauer chamber using citrate-formaldehyde as diluent.

Analyses of biochemical parameters
The blood was centrifuged at 3000 rpm for 5 minutes to obtain blood plasma. Total protein, cholesterol, triacylglycerol, aspartate aminotransferase (AST) and alanine aminotransferase (ALT) were analyzed with Labtest kits and read in a spectrophotometer. For quantification of the blood glucose concentration, a One Touch Ultra2 portable reader (reading range 20 to 600 mg/dL) was used with the disposable strips suitable for the apparatus. About 1 to 5 μl of blood was placed on the strip; the monitor switched on automatically, and after 5 seconds the blood glucose concentration was given in mg/dL. For the determination of hepatic and muscular glycogen, this research used the technique described by Bidinotto et al. (1997). In addition to the referred methodology, hepatic glycogen was also analyzed using the ImageJ software to quantify the total area occupied by it.

2.6 Histopathology of the liver and the kidney
Nine animals were randomly selected from each treatment and desensitized on ice to remove hepatic and renal tissue samples. The samples were washed with 0.9% saline solution and fixed for 24 hours in Bouin solution. Subsequently, the material was washed for 24 hours in running water and stored in containers containing 70% alcohol. The samples were then dehydrated in successive alcohol baths (70%, 80%, 90%, 95%, and 100%) and clarified in xylol (FERNANDEZ et al., 2011).
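Collecting Eqs. 1-7 above, here is a minimal sketch of the performance indices. The input values are illustrative only, not the experiment's data, and the cubed length in CF follows the usual Fulton formulation assumed in the reconstruction of Eq. 5:

```python
import math

def survival(nf, ni):                       # Eq. 1
    return nf / ni * 100

def sgr(w_final, w_initial, days):          # Eq. 2, % per day
    return (math.log(w_final) - math.log(w_initial)) / days * 100

def hepatosomatic_index(liver_g, fish_g):   # Eq. 3
    return liver_g / fish_g * 100

def weight_gain(w_final, w_initial):        # Eq. 4
    return w_final - w_initial

def condition_factor(weight_g, length_cm):  # Eq. 5 (Fulton's K, cubed length assumed)
    return weight_g / length_cm**3 * 100

def feed_conversion(diet_g, gain_g):        # Eq. 6
    return diet_g / gain_g

def feed_efficiency(gain_g, diet_g):        # Eq. 7
    return gain_g / diet_g * 100

# Illustrative numbers only:
print(f"SGR = {sgr(60.0, 6.44, 60):.2f} %/day")
print(f"CF  = {condition_factor(60.0, 17.0):.2f}")
print(f"AFC = {feed_conversion(90.0, weight_gain(60.0, 6.44)):.2f}")
```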
After the dehydration and clarification processes were completed, the samples were embedded in paraffin, and 3 μm histological sections were cut with a manual microtome and stained with hematoxylin and eosin (HE) for later analysis under light microscopy. All sections were analyzed using images obtained on a LEICA DM500 microscope connected to a computer running the LAZ 2.0 program. To identify alterations in the liver and the kidney, five fields of each slide were observed at 100× magnification. The histopathological analyses of both tissues were evaluated by two semi-quantitative methods: the Mean Value of Changes (MVC) and the Histological Alterations Index (HAI). The MVC expresses the incidence of lesions, according to Schwaiger et al. (1997): a numerical value was assigned to each animal according to the scale grade 1 (absence of histopathological alteration), grade 2 (occurrence of localized lesions), and grade 3 (lesions widely distributed through the organ). To evaluate the degree of liver and kidney alterations, the Histological Alteration Index (HAI) was used according to Poleksic and Mitrovic-Tutundžic (1994), in which each alteration is classified in progressive degrees related to the impairment of tissue function: stage I, for changes that do not compromise the functioning of the organ; stage II, for more severe alterations that compromise organ functioning but are reversible; and stage III, for the most serious alterations that irreversibly compromise the functioning of the organ. The HAI value was calculated for each animal according to the formula HAI = (1 × ΣI) + (10 × ΣII) + (100 × ΣIII), in which ΣI, ΣII and ΣIII correspond to the numbers of stage I, II and III alterations, respectively. HAI values between 0 and 10 indicate normal tissue functioning; between 11 and 20, mild damage to the organ; between 21 and 50, moderate damage; from 51 to 99, severe damage; and greater than 100, irreversible tissue damage (a small worked example is given below).

Statistical analyses
The data were submitted to analysis of variance (ANOVA), and the averages were compared by Tukey's test (5% significance); data without normal distribution were submitted to non-parametric analysis (Kruskal-Wallis) using the Instat program v3.0 for Windows. The results are expressed as average ± standard deviation. The parameters of performance and body composition were also submitted to linear regression analysis.

Water analysis
Regarding the environmental variables quantified during the experiment, differences between treatments were not identified (p > 0.05). The average values of the water quality parameters recorded during the experimental period are displayed in Table 2.
Table 2: Water quality parameters during the experimental period (values are averages ± standard deviation).

Zootechnical performance
In the comparison of the zootechnical performance indexes, no statistical difference was observed between treatments for final length, final weight, survival, hepatosomatic index, condition factor, weight gain, feed conversion or feed efficiency. However, the specific growth rate of the animals was influenced by the levels of crude glycerin, presenting a moderate, increasing linear behavior compared with the control diet (Fig. 1). The zootechnical parameters of the pintado juveniles analyzed in the present study are shown in Table 3.
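As a worked illustration of the HAI formula and severity bands described above (the stage counts are hypothetical):

```python
def hai(stage_i, stage_ii, stage_iii):
    """Histological Alteration Index: HAI = 1*ΣI + 10*ΣII + 100*ΣIII."""
    return 1 * stage_i + 10 * stage_ii + 100 * stage_iii

def hai_severity(value):
    """Severity bands as defined by Poleksic and Mitrovic-Tutundžic (1994)."""
    if value <= 10:
        return "normal tissue functioning"
    if value <= 20:
        return "mild damage"
    if value <= 50:
        return "moderate damage"
    if value <= 99:
        return "severe damage"
    return "irreversible tissue damage"

# Hypothetical animal: 4 stage-I alterations, 1 stage-II, none of stage III.
value = hai(4, 1, 0)                     # = 14
print(value, "->", hai_severity(value))  # 14 -> mild damage
```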
Body composition
The body composition of the pintado juveniles was not influenced by the partial replacement of maize with crude glycerin, since no significant differences (p > 0.05) were found between treatments in any of the indices analyzed. The results of the body composition analysis of the pintado juveniles (moisture content, ashes and crude protein) are presented in Table 4.

Evaluation of the erythrogram
The increasing levels of crude glycerin in the diet of the pintados did not result in changes in the erythrocyte variables relative to the control (p > 0.05): hematocrit, hemoglobin, erythrocyte count and hematimetric indexes (mean corpuscular volume, mean corpuscular hemoglobin and mean corpuscular hemoglobin concentration), as presented in Table 5.

Biochemical analysis
The analysis of hepatic glycogen did not show any statistical difference between the treatments with inclusion of glycerol and the control, as shown in Tables 7 and 8. The muscle glycogen analyzed in this study had a higher concentration in the control diet (0 g kg-1 glycerol) than in the 125 g kg-1 treatment, as presented in Table 7. The values are expressed as average ± standard deviation; NS = not significant (p > 0.05) by the Tukey test.

Histopathological analysis
3.7.1 Liver
The alterations observed in the livers of the pintados fed the control diet and of those fed increasing levels of crude glycerin were mostly stage I lesions, considered alterations that do not compromise the functioning of the organ, plus one stage II alteration, considered more severe but also reversible. The alterations found in hepatocytes were: nucleus at the periphery of the cell, nuclear hypertrophy, cytoplasmic vacuolization, sinusoidal dilatation and biliary stagnation. The frequency of the hepatic changes and the classification of the severity and impairment of hepatic function found in the treatments are presented in Table 9. The most frequent alterations found in the livers of the pintados are presented in Fig. 2. The results for the mean value of alterations and the index of histopathological changes are displayed in Fig. 3. The Mean Value of Changes (MVC) obtained for the hepatic alterations did not present significant differences (p > 0.05) between the experimental diets (0 g kg-1, 50 g kg-1, 75 g kg-1, 100 g kg-1, and 125 g kg-1) and the control group (Fig. 3a). The alterations observed in the liver were focally distributed in the organ and did not exceed grade 2. The Histological Alteration Index (HAI) also showed no significant difference (Fig. 3b), which demonstrates that the changes in the hepatic tissue did not compromise liver functioning.

Kidney
The renal histopathological analysis performed in this study showed that the alterations observed in the kidneys of the pintado juveniles were limited to the renal tubules and were mostly stage I lesions (alterations that do not compromise the functioning of the organ) and stage II lesions (considered more severe, but still reversible). The alterations were nuclear hypertrophy, tubular lumen dilation and tubular lumen occlusion, and are displayed in Fig. 4.

Fig. 4: Histopathological changes in the pintado kidney. (A) Tubular lumen occlusion (black arrow), tubular lumen dilatation (white arrow) and nuclear hypertrophy (dashed black arrow). (B) Renal tubule with normal morphology (black arrow).
The frequency of renal changes and the degree of severity and impairment of renal function found in the treatments are shown in Table 10, and the results for the mean value of changes and the histological alteration index are presented in Fig. 5. As observed in the liver analysis, the Mean Value of Changes (MVC) obtained for the renal alterations did not present significant differences (p > 0.05) between the experimental diets (50, 75, 100 and 125 g kg-1 of crude glycerin) and the control (Fig. 5a). The alterations were focally distributed in the renal tissue and did not exceed grade 2. The Histological Alteration Index (HAI) also showed no significant difference (Fig. 5b), which demonstrates that the renal alterations were not serious enough to compromise the functioning of the organ.

IV. DISCUSSION
The water parameters analysed during the experiment were not influenced by the diets tested, and all of them remained within the standards recommended for the species. Li et al. (2010) showed that the use of up to 100 g kg-1 dietary glycerol for channel catfish (Ictalurus punctatus) did not cause changes in weight gain, feed efficiency, hepatosomatic index or survival; however, the inclusion of 150 and 200 g kg-1 of glycerol in the diet adversely affected growth performance. The length of the animals was relatively close between treatments, ranging from 15.86 cm in the treatment with 100 g kg-1 inclusion of glycerol to 18.23 cm in the treatment with 75 g kg-1 of the product. One factor that may have influenced the absence of statistical differences in final length and final weight relates to the developmental phase of the fish used in the current study. A previous study found that the survival of Nile tilapia (Oreochromis niloticus) fingerlings was higher in the treatment that did not contain glycerol; the highest mortality rates, according to the author, were at the two lowest levels tested, 75 g kg-1 followed by 25 g kg-1. Therefore, differences in species and developmental phase may influence the ability of the animal's metabolism to adapt to glycerol in the diet. According to the same author, the developmental phase of the animals used in that research may have been preponderant for the high mortality rate; however, other factors may have contributed to this occurrence, among them the temperature and the tested food source itself.

The hepatosomatic index can be altered given the great importance of energy metabolism in the fish liver, since hepatic deposition of lipids as well as glycogen is common. In the present study, however, this index did not indicate any alteration in the metabolic functions of the liver, even when comparing the different levels of crude glycerin tested. Likewise, as in the results found by Matos (2016), the feed conversion and feed efficiency analysed in this work were not influenced by the diets. Neu et al. (2012), although they did not verify statistical differences for AFE, observed that these values were considerably higher for tilapia in the larval phase, which means that this parameter can be influenced by the environmental conditions or the physiological phase of the species. Crude glycerin has also been studied in other monogastric species, and the results found in those studies are in agreement with those of the present research. Berenchtein et al.
(2010) concluded that glycerol can be used as an energy ingredient in the diet of growing and finishing pigs at levels of up to 90 g kg-1, as it did not influence performance, carcass characteristics or meat quality.

The MCH varies considerably between fish species, ranging from 30 to 100 pg (picograms) due to differences in the size of the circulating erythrocytes (WEISS et al., 2010). In this study, the MCH (mean corpuscular hemoglobin) values were higher than those referenced by Weiss et al. (2010); however, there was no statistical difference between treatments. The distinction between the MCV and MCH values found here is due to the peculiarities of fish erythrograms, since hematological parameters can be influenced by numerous factors, such as age, species, stress, temperature, photoperiod, nutritional status and the methodology used for analysis. Reported neutrophil counts (14.6 ± 8.30 for males and 11.8 ± 8.22 for females) were larger than those found for the pintado. Knowledge about the origin and development of thrombocytes and leukocytes in fish is considered scarce, although some ideas have been proposed since the beginning of the last century, and the data acquired through studies of hematology and/or hematopoietic organs are still inconclusive. Moreover, the blood cells of the organic defence present interspecific variation (TAVARES-DIAS et al., 2002), which may explain the variation in the values compared above. In addition, the differential leukocyte count has some barriers that make it difficult to compare results among different authors, even for studies using the same species. Among the problems faced are the divergence of terminology, mainly involving granulocytes, and the diversity of techniques for the quantification and identification of leukocytes (TAVARES-DIAS & MORAES, 2004). Nevertheless, the leukogram is considered an important tool for understanding infections and other processes of homeostatic imbalance (SILVA et al., 2012).

The biochemical parameters analysed in this study were not altered by the different levels of crude glycerin. The biochemical composition of the blood reliably portrays the balance between the ingress, egress and metabolization of nutrients in animal tissue; this balance is termed homeostasis, in which complex metabolic-hormonal mechanisms are involved (BOCKOR, 2010). The concentrations of total cholesterol were not affected by the inclusion of crude glycerin in the diet, corroborating the results of Balen (2017) for juveniles of curimbatá (Prochilodus lineatus) fed diets containing crude glycerin (0, 40, 80, 120, 160 and 200 g kg-1) and of Neu et al. (2013) for juveniles of Nile tilapia fed diets containing 0, 25, 50, 75 and 100 g kg-1 glycerol. The enzymes ALT (alanine aminotransferase) and AST (aspartate aminotransferase) also did not present significant differences between treatments, demonstrating that the diets did not alter their activities. These enzymes are considered important in the diagnosis of hepatic lesions, since the increase of their serum activity may be related to hepatocyte rupture resulting from processes such as cellular necrosis or aggression by toxic agents (HARZER et al., 2015). The total protein values found in this work did not differ much from those found by Costa et al. (2015) for Nile tilapia, which remained between 3.93 and 4.13 and also did not present significant differences. According to the same author, a diet containing glycerol protects against protein catabolism for energy purposes in Nile tilapia.
Corroborating the present study, Menton et al. (1986) also found no significant difference in the plasma protein concentration of trout fed diets containing 60 and 120 g kg-1 glycerol. Glucose acts as an energetic substrate stored in the form of hepatic and muscular glycogen (through glycogenesis) and can be mobilized to provide energetic support to the fish. Plasma glucose is highly variable not only among species but also intraspecifically, at different stages of life or under certain feeding regimes (HEMRE et al., 2002). The glucose results found in this research were not altered by the inclusion levels of crude glycerol in the diet. In contrast to the results found in this work for glucose concentration, Li et al. (2010) observed that the blood glucose level of channel catfish (Ictalurus punctatus) was influenced by the different levels of glycerol (0, 50, 100, 150 and 200 g kg-1): the values were significantly higher in fish fed 0 g kg-1 and 50 g kg-1 of glycerol than in fish fed the other diets, and at the levels of 100, 150 and 200 g kg-1 of glycerol, glucose generally decreased. For Nile tilapia in the growth/fattening phase, Moesch et al. (2016) found that the glycerol-containing treatments (200, 400, 600, 800 and 1000 g kg-1) had the lowest blood glucose levels, a fact that the authors associated with the reduction of starch (corn) in the diet. Carnivorous fishes use lipids efficiently as a source of energy due to the restricted ability of their metabolism to regulate glycemia (CASERAS et al., 2002; HEMRE et al., 2002), and triacylglycerols are the main form of body energy storage in these animals. The results for the concentration of triglycerides observed in this study are similar to those obtained by Costa et al. (2015) and Neu et al. (2013): both found that dietary glycerol levels did not influence the plasma triglyceride concentration in juvenile Nile tilapia. Costa et al. (2015) stated that there was no change in plasma triglyceride levels because there was no energy deficiency in the animals and, therefore, no need for the species to mobilize energy and consequently perform lipolysis on adipocytes.

Crude glycerin has also been tested in the diets of other monogastric species. Romano et al. (2014) studied the effects of glycerol on the metabolism of broilers; similarly to the present study, there was no significant difference in total cholesterol concentrations between the control group and the groups that received diets with different glycerin inclusion levels (25, 50, 75 and 100 g kg-1). Gallego et al. (2014), testing levels of 0, 35, 70, 105 and 140 g kg-1 of neutralized semi-purified glycerin in the diet, observed that plasma glucose, cholesterol and triglyceride levels were likewise not affected by the inclusion of glycerin. In ruminant animals, Maciel et al. (2016) analysed the performance and carcass characteristics of dairy cows fed diets containing crude glycerin and observed that it did not alter serum concentrations of glucose, triacylglycerol, total cholesterol, high-density lipoprotein cholesterol or creatinine. Ribeiro (2015), studying crude glycerin (0, 70, 140 and 210 g kg-1) in the diet of confined lambs, observed no significant difference for albumin, globulin, triglycerides, alanine aminotransferase (ALT), aspartate aminotransferase (AST) and gamma-glutamyltransferase.
However, serum concentrations of urea and glucose decreased linearly with increasing inclusion, and cholesterol presented an increasing linear behaviour; these variables were therefore influenced by glycerol. Glycogen levels present in the hepatic tissue are adaptable to the diet (SHEMU, 1997; HEMRE et al., 2002). Hepatic glycogen did not present a statistical difference between the treatments with inclusion of glycerol and the control. Menton et al. (1986), when examining glycerol (ranging from 10 to 120 g kg-1) in the diet of rainbow trout (Oncorhynchus mykiss) as a replacement for part of the wheat bran, observed that diets containing 60 and 120 g kg-1 of glycerol increased the level of plasma glucose, but the hepatic glycogen concentration was not altered. The concentration of muscle glycogen was higher in the control diet (0 g kg-1 glycerol) than in the treatment with 125 g kg-1 glycerol. It is well known that muscle glycogen is used only by the muscle itself, its metabolization providing energy for muscle contraction, whereas hepatic glycogen regulates glycemia and provides an energetic substrate for other tissues. Silva et al. (2012), testing two levels of glycerol supplementation (0 and 50 g kg-1) as a way of replenishing the muscle glycogen reserves of gilthead seabream (Sparus aurata), observed that fish fed 50 g kg-1 crude glycerol showed significantly higher glycogen deposition than the control (0 g kg-1) group.

In previous studies involving glycerol, it has been shown that glycerol may also influence lipogenic activity. Lin (1977), studying the addition of 200 g kg-1 glycerol to the diet of rats for three weeks, reported that it caused an increase in liver weight; on the other hand, when the same author studied 200 g kg-1 of glycerol in the diet of broilers fed for three weeks, he did not observe any alteration in liver weight. In studies on the feeding of non-ruminant animals, Lin (1977) observed that glycerol causes species-specific and organ-specific responses. Based on this hypothesis and on the literature, glycogen deposition and lipogenic activity can be influenced by dietary glycerol, which suggests that more studies are needed with this ingredient in order to uncover the metabolic effects of glycerol on energy reserves in fish.

Crude glycerin may contain methanol, and acute intoxication by this residue can lead to the accumulation of formic acid, resulting in metabolic acidosis (LAMMERS et al., 2008). Studies that investigate pathological alterations linked to glycerol are therefore of great importance. The liver is an important organ in the digestion and absorption of nutrients from food, and the monitoring of this organ is considered essential (RAŠKOVIĆ et al., 2011). Morphological changes in the liver can be triggered by chemicals, drugs and even unbalanced nutrition, which can result in adaptations, injury and even cell death. This organ is highly susceptible to changes in the nutritional status of fish, and diet quality interferes directly with its functional histomorphological structure (HONORATO et al., 2014). The alterations observed in the liver of the pintados fed the control diet and of those fed increasing levels of crude glycerin were considered to be mostly alterations that did not compromise organ function. Research on the histopathology of animals fed crude glycerin is still incipient.
However, even though only a few studies have been performed, their results corroborate those found in the present study. Moesch et al. (2016), studying the replacement of corn bran by crude glycerol in diets for O. niloticus fingerlings at concentrations of 0, 200, 400, 600, 800 and 1000 g kg-1, concluded that there were no differences in the area of hepatocytes and that the possible toxic compounds present in the crude glycerol composition did not affect the hepatocyte area. According to Lammers et al. (2008), growing pigs fed up to 100 g kg-1 glycerol in the diet did not suffer hepatic damage, since the frequency of histological lesions was not influenced by the dietary treatment. Also, in studies with ruminants, Leão et al. (2012) did not identify hepatic alterations in samples from cull cows and heifers fed up to 240 g kg-1 inclusion of glycerol. As in the liver histopathological analysis of this study, the renal lesions observed were not serious enough to compromise the functioning of the organ. No studies were found that associate the renal histology of fish with diets containing crude glycerin. However,
7,249.2
2018-09-01T00:00:00.000
[ "Biology" ]
A Novel Remote Sensing Index for Extracting Impervious Surface Distribution from Landsat 8 OLI Imagery: The area of urban impervious surfaces is one of the most important indicators for determining the level of urbanisation and the quality of the environment and is rapidly increasing with the acceleration of urbanisation in developing countries. This paper proposes a novel remote sensing index based on the coastal band and normalised difference vegetation index for extracting impervious surface distribution from Landsat 8 multispectral remote sensing imagery. The index was validated using three images covering urban areas of China and was compared with five other typical index methods for the extraction of impervious surface distribution, namely, the normalised difference built-up index, index-based built-up index, normalised difference impervious surface index, normalised difference impervious index, and combinational built-up index. The results showed that the novel index provided higher accuracy and effectively distinguished impervious surfaces from bare soil, and the average values of the recall, precision, and F1 score for the three images were 95%, 91%, and 93%, respectively. The novel index provides better applicability in the extraction of urban impervious surface distribution from Landsat 8 multispectral remote sensing imagery.

Introduction
Impervious surfaces are artificial surfaces that water cannot infiltrate to reach the soil, such as parking lots, streets, and highways [1]. Changes in land use and land cover caused by the expansion of impervious surfaces are likely to influence the regional climate [2]. Jacobson et al. and Huszar et al. pointed out that impervious surface area and land surface temperature are positively correlated, and that the expansion of impervious surfaces results in an increase in land surface temperature, which may cause an uneven distribution of urban heat and trigger the urban heat island effect [3,4]. Accurate and fast extraction methods for impervious surface distribution are therefore important, as they are helpful for detecting regional environmental changes in urban areas and achieving sustainable urban development. These methods can be divided into two categories, namely, field surveys and remote sensing [1]. Field surveys can provide more detailed information on impervious surface distribution but are time-consuming, laborious, and difficult to apply to the assessment of large areas. In contrast, widely used remote sensing methods can extract impervious surface distribution quickly; sources of commonly used remote sensing images include the QuickBird [5], WorldView-2 [6], and Landsat satellites [7], among others. When extracting impervious surface distribution in a large urban area, Landsat imagery is frequently used. The case images consist of rows × 480 columns with a pixel size of 30 m, and their (5,4,3) false colour composites are shown in Figure 1. Typical ground objects in these case images include vegetation, bare soil, water bodies, and impervious surfaces, and their composition is representative and complex. In the (5,4,3) false colour composite image with a 2% linear stretch, vegetation is red in summer, as in Cases 1 and 2, and easily distinguished from impervious surfaces because of their different spectral features; bare soil is exposed in winter, as in Case 3, and is easily confused with impervious surfaces because of their similar spectral features.
Proposed Method
According to the vegetation-impervious surface-soil (VIS) model [24], land cover consists mainly of vegetation (V), impervious surfaces (I) and bare soil (S), with the exception of water bodies. In remote sensing imagery, the largest difference in land cover is that between water bodies and land, and land can be further divided into impervious surfaces and permeable surfaces (vegetation and bare soil). Hence, the workflow can be divided into two steps, namely, masking water bodies and extracting impervious surfaces from land. An impervious surface is an area without vegetation coverage and thus has a lower NDVI value [25]. Studies by Wang et al. and Mu et al. showed that impervious surfaces usually have higher values in the blue band [10,26]. Besides, Chen pointed out that the coastal band in Landsat 8 imagery can enhance information on impervious surfaces more significantly than the blue band, and that the addition of the coastal band in the supervised classification of land use is beneficial for further distinguishing impervious surfaces from other ground objects [27]. In this study, a RISI was proposed for extracting impervious surface distribution on the basis of the coastal band and NDVI of a digital number (DN) image and is defined by Equation (1):

RISI = B1 / NDVI1, (1)

where B1 and NDVI1 denote the coastal band and NDVI, respectively, after a 0-1 transformation. The workflow comprises the following four steps: (a) water bodies are first masked; (b) the coastal band undergoes a 0-1 transformation to obtain B1; (c) the NDVI is calculated, and then a 0-1 transformation is performed to obtain NDVI1; and (d) the RISI is calculated and then segmented using a threshold determined by the Otsu method [28].
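A minimal Python sketch of steps (a)-(d) is given below, assuming the DN bands are available as NumPy float arrays and that a boolean water mask has already been derived from the WI; Equation (1) is taken as the ratio of the coastal band to the NDVI described in the Discussion, and all function and variable names are illustrative rather than from the original paper.

```python
# Minimal sketch of the RISI workflow (steps (a)-(d)); bands are NumPy float
# arrays, water_mask is a boolean array already derived from the WI.
import numpy as np
from skimage.filters import threshold_otsu

def scale01(x, eps=1e-12):
    """0-1 transformation of Equation (3): (x - min) / (max - min)."""
    return (x - x.min()) / (x.max() - x.min() + eps)

def risi_map(coastal, red, nir, water_mask):
    ndvi = (nir - red) / (nir + red + 1e-12)     # Equation (4)
    b1, ndvi1 = scale01(coastal), scale01(ndvi)  # steps (b) and (c)
    index = b1 / (ndvi1 + 1e-12)                 # Equation (1): RISI = B1/NDVI1
    land = ~water_mask                           # step (a): water already masked
    thr = threshold_otsu(index[land])            # step (d): Otsu threshold
    impervious = (index > thr) & land
    return index, impervious
```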
Water Body Masking
The water body index (WI) is calculated from the green band (Green) and the shortwave-infrared band (SWIR1) according to Equation (2) [29]. The value corresponding to the right valley of the WI histogram is manually fine-tuned as the segmentation threshold to mask water bodies.

0-1 Transformation of the Coastal Band
The coastal band undergoes a 0-1 transformation to obtain B1 according to Equation (3):

T1 = (T0 − min) / (max − min), (3)

where T0 and T1 denote the band before and after the 0-1 transformation, respectively, and max and min are the maximum and minimum values of the band, respectively.

Calculation of the NDVI
The NDVI is calculated using Equation (4) [30]:

NDVI = (NIR − Red) / (NIR + Red), (4)

where Red denotes the red band and NIR denotes the near-infrared band. Then, the NDVI undergoes a 0-1 transformation to obtain NDVI1 in an analogous way to Equation (3).

Calculation of the RISI
The RISI is calculated using Equation (1), and then its threshold is determined by the Otsu method and used to segment the image to obtain the impervious surface region. The Otsu method is an adaptive threshold determination method that maximises the between-class variance of objects and background and is widely used in image segmentation [28].

Comparison of Methods
The NDBI, IBI, NDISI, NDII, and CBI methods were used for comparison. The segmentation thresholds of the NDBI and NDISI were both set to 0, which is the default threshold according to the authors who proposed these indices, whereas the thresholds of the other three indices were determined here by the Otsu method, because the authors who proposed them determined the thresholds manually rather than setting a fixed threshold. In addition, in all the methods, water bodies were masked first so as to ensure the comparisons coincided. (3) NDISI: Xu proposed the NDISI for extracting impervious surface distribution [9], where TIR denotes the thermal band, which we replaced with the TIR1 band (10.60-11.19 µm) in Landsat 8 imagery, and MNDWI is calculated as follows [32]:

MNDWI = (Green − SWIR1) / (Green + SWIR1).

(4) NDII: Wang et al. devised the NDII to extract impervious surface distribution [10]:

NDII = (VIS − TIR) / (VIS + TIR),

where VIS represents one of the red, green and blue bands. Wang pointed out that the accuracy of extraction is highest when VIS represents the red band [10], and hence we also used the red band to calculate the NDII. (5) CBI: Sun et al. developed the CBI to extract impervious surface distribution [11], where NDWI is calculated as follows [33]:

NDWI = (Green − NIR) / (Green + NIR).
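As a sketch, the auxiliary indices used by the comparison methods can be computed as below, assuming the standard published forms of the MNDWI and NDWI reproduced above and the commonly published form of the NDISI; the helper names are illustrative, and the bands are assumed to be scaled to a common range before use.

```python
import numpy as np

def normalized_diff(a, b, eps=1e-12):
    """Generic normalised difference index (a - b) / (a + b)."""
    return (a - b) / (a + b + eps)

def mndwi(green, swir1):
    """MNDWI [32]: (Green - SWIR1) / (Green + SWIR1)."""
    return normalized_diff(green, swir1)

def ndwi(green, nir):
    """NDWI [33]: (Green - NIR) / (Green + NIR)."""
    return normalized_diff(green, nir)

def ndisi(tir, green, swir1, nir):
    """NDISI in its commonly published form (an assumption here):
    (TIR - (MNDWI + NIR + SWIR1)/3) / (TIR + (MNDWI + NIR + SWIR1)/3)."""
    m = (mndwi(green, swir1) + nir + swir1) / 3.0
    return normalized_diff(tir, m)
```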
Method for Accuracy Assessment
Accuracy is determined in terms of precision, recall, and the F1 score. According to their actual and predicted values, pixels are divided into true positives (TP), false positives (FP), true negatives (TN), and false negatives (FN), and then the precision (P), recall (R), and F1 score are calculated using Equations (13)-(15) [34]:

P = TP_N / (TP_N + FP_N), (13)

where TP_N and FP_N are the total numbers of pixels in TP and FP, respectively;

R = TP_N / (TP_N + FN_N), (14)

where FN_N is the total number of pixels in FN; and

F1 = 2 × P × R / (P + R). (15)

The closer the F1 score is to 1, the higher the accuracy of the extraction of impervious surface distribution.
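A compact NumPy sketch of Equations (13)-(15), assuming binary masks for the ground truth and the prediction, might look as follows.

```python
# Sketch of Equations (13)-(15) for binary ground-truth and prediction masks;
# assumes at least one positive pixel so the denominators are non-zero.
import numpy as np

def precision_recall_f1(truth: np.ndarray, pred: np.ndarray):
    tp = np.sum(pred & truth)      # TP_N
    fp = np.sum(pred & ~truth)     # FP_N
    fn = np.sum(~pred & truth)     # FN_N
    p = tp / (tp + fp)             # Equation (13)
    r = tp / (tp + fn)             # Equation (14)
    f1 = 2 * p * r / (p + r)       # Equation (15)
    return p, r, f1
```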
Spectral Features of Impervious Surfaces, Bare Soil, and Vegetation
Typical samples of impervious surfaces, bare soil, and vegetation were selected from each case image after water bodies were masked, and the mean spectral values for each type of sample were calculated to obtain their spectral curves (Figure 2). These show that impervious surfaces had spectral features similar to those of bare soil. The DN value of impervious surfaces in the coastal band was higher than those of vegetation and bare soil, and vegetation had the largest difference between the DN values in the red and near-infrared bands, followed by bare soil. In the TIR1 band, the DN value of impervious surfaces was the highest in Case 2, whereas in Cases 1 and 3 the DN value of bare soil was the highest.

Results of Extraction of Impervious Surface Distribution
The ground truth for impervious surface distribution was determined by visual interpretation in each case image and is shown in Figure 3. The impervious surface distribution extracted by the proposed method is shown in Table 1. Water bodies were masked well by the WI, and the RISI effectively distinguished impervious surfaces from bare soil and vegetation, especially the large amount of bare soil in Case 3. The impervious surface distributions extracted by the six methods are shown in Figures 4-6. The segmentation thresholds of the NDBI and NDISI were both set to 0, and the thresholds of the other methods were determined by the Otsu method. In general, the impervious surfaces extracted by the five reference methods were still mixed with other ground objects even though water bodies were masked, and in Case 3 bare soil was seriously mis-extracted.
Figure 4 shows that the NDBI resulted in serious omissions, while the other four reference methods provided higher accuracy. Figure 5 shows that the NDBI lost a large amount of impervious surfaces and the other reference methods also resulted in serious errors. Figure 6 shows that all five reference methods mis-extracted a large amount of bare soil, and the NDBI and IBI lost a large amount of impervious surfaces.

Accuracy
The recalls and precisions of the six extractions are shown in Figure 7, and the F1 scores are shown in Figure 8. The NDBI, IBI, NDISI, NDII, and CBI gave high recalls and low precisions, especially in Case 3, which had a large amount of bare soil and where the F1 scores were all less than 65%. The extraction accuracy of the proposed method was higher than that of the others in our three case images, and the average values of the recall, precision, and F1 score were 95%, 91%, and 93%, respectively.
Discussion
With the economic growth of developing countries, the rapid urbanisation of rural land and its conversion to urban land directly leads to an increase in the area of impervious surfaces, which may lead to the urban heat island effect. Impervious surfaces are artificial surfaces that water cannot infiltrate to reach the soil, such as parking lots, streets, and highways. They often have a low specific heat capacity and heat up fast, which often leads to higher temperatures in urban areas than in rural areas, an uneven distribution of heat, an abnormal hydrothermal cycle, and a harmful urban heat island effect [35]. It is important to build accurate and fast methods to extract impervious surface distribution, which is helpful for detecting regional environmental changes in urban areas and achieving sustainable urban development. The existing methods for the extraction of impervious surface distribution increase the difference between impervious surfaces and other ground objects and achieve high precision in some cases [7-11]. However, differences between sensors, regions, and seasons often lead to inconsistencies in the spectral features of ground objects in images. Many methods have a poor ability to distinguish impervious surfaces from bare soil because of their spectral similarity [36], and it is also not easy to distinguish withered vegetation in winter from impervious surfaces. Figure 9 shows the differences between typical samples of impervious surfaces, bare soil, and vegetation in terms of the coastal band, blue band, green band, red band, ratio vegetation index (RVI) [37], NDVI, and difference vegetation index (DVI) [38], where all features underwent a 0-1 transformation. Of these three vegetation indices, the NDVI gave the greatest difference between impervious surfaces and other ground objects. Besides, the values for impervious surfaces in the coastal band were higher than those for vegetation and bare soil, and the contrast between impervious surfaces and bare soil gradually decreased across the coastal, blue, green, and red bands.
Therefore, the ratio of the coastal band to the NDVI can maximise the information on impervious surfaces, increase the difference between impervious surfaces and other ground objects, and thus effectively distinguish impervious surfaces from bare soil, which is the main advantage of our method in comparison with the other five methods.

Extraction of Impervious Surface Distribution by Different Methods
The five methods used for comparison are applicable to different conditions and provide higher accuracy in other cases [7-11]. They exhibited differences in performance after water bodies were masked in our three cases, and the average F1 scores of the extractions by the six methods, from the lowest to the highest value, were in the following order: the NDBI, NDISI, IBI, NDII, CBI, and our proposed method.
The average F1 score of the extractions by the NDBI after water bodies were masked was 32%. This index utilises the feature that the values for impervious surfaces in the SWIR1 band of Landsat 5 TM imagery are higher than those in the NIR band. However, the spectral features of impervious surfaces in Landsat 8 OLI imagery are different from those in Landsat 5 TM imagery because of differences in the sensor, wavelength range, and quantisation bit depth. Figure 2 shows that the values for impervious surfaces in the SWIR1 band were comparable to those in the NIR band in Cases 1 and 3, whereas the values in the SWIR1 band were lower than those in the NIR band in Case 2. Conversely, the bare soil in Case 3 gave higher values in the SWIR1 band than in the NIR band, which resulted in more instances of error.

The average F1 score of the extractions by the NDISI after water bodies were masked was 44%. The author who devised the NDISI pointed out that the values for impervious surfaces are higher than those of other ground objects in the thermal infrared band. This index led to instances of error in Cases 2 and 3, caused by the fact that the values for impervious surfaces in the Landsat 8 thermal infrared band were not always higher than those for other ground objects (Figure 2). Thermal radiation from impervious surfaces is more obvious in summer but may be lower than that from other ground objects in winter [39,40]. Therefore, the use of the NDISI for the extraction of impervious surface distribution in summer may provide high accuracy [9], but its accuracy may be low if it is employed in winter.

The average F1 score of the extractions by the IBI after water bodies were masked was 47%. This index combines three indices, namely, the NDBI, MNDWI, and SAVI. In our cases, the NDBI values for impervious surfaces were not always higher than those for other ground objects, such as the bare soil in Case 3 (Figure 10), which may have led to many instances of error.

The average F1 score of the extractions by the NDII after water bodies were masked was 57%. Xu et al. pointed out that this index is unreliable because the values for impervious surfaces in the thermal infrared band are significantly lower than those in the visible light band [36]. Besides, this index requires that bare soil be masked in advance [10], which is difficult in practice.
The average F1 score of the extractions by the CBI after water bodies were masked was 65%. This index combines the PC1, NDWI, and SAVI, which represent high-albedo surfaces, low-albedo surfaces, and vegetation, respectively, but its ability to mask bare soil needs to be further enhanced [36].

The average F1 score of the extractions by our proposed method was 93%. Our method not only identified impervious surfaces but also masked other ground objects such as bare soil and therefore provided higher accuracy than the other five methods.

Atmospheric Effect of the Proposed Method
The abovementioned impervious surface distributions were extracted on the basis of the DN values in the images. When electromagnetic waves are transmitted through the atmosphere, they are scattered by molecules and minute particles; the shorter the wavelength, the stronger the scattering, which causes radiation distortion in the coastal and blue bands of Landsat 8 images. Atmospheric correction of images is usually required in order to obtain more realistic surface reflectivity, but this is often a cumbersome process.
As a comparison, the case images were corrected according to the FLAASH model in ENVI, and reflectance values that were less than 0 or greater than 100 were modified according to Equation (16):

g(x, y) = 0 if f(x, y) < 0; g(x, y) = f(x, y) if 0 ≤ f(x, y) ≤ 100; g(x, y) = 100 if f(x, y) > 100, (16)

where f(x, y) and g(x, y) represent the images before and after modification, respectively. The impervious surface distributions extracted after atmospheric correction are shown in Figure 11. In comparison with Table 1, impervious surfaces were less widely distributed, and the average recall, precision, and F1 score were 89%, 77%, and 83%, respectively, representing decreases of 6%, 14%, and 10%, respectively.

Atmospheric correction did not provide additional information that could supplement the results based on the DN values, but instead resulted in more mis-extractions and more losses. Taking Case 3 as an example, Figure 12 shows the difference between the impervious surface distributions extracted before and after atmospheric correction, where (a) is a false colour composite of the (5,4,3) bands and (b) is an RGB composite of the impervious surface distribution extracted before atmospheric correction, the ground truth for the impervious surface distribution, and the impervious surface distribution extracted after atmospheric correction. Here, the white areas were extracted in both instances, the green areas were lost in both instances, the purple areas were mis-extracted in both instances, the yellow areas were lost after atmospheric correction, and the blue areas were mis-extracted only after atmospheric correction. Actually, atmospheric correction using an atmospheric correction model is not always necessary and may decrease the accuracy of the extraction of remote sensing information [41].
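Equation (16) is a simple clamp of the corrected reflectance values to the interval [0, 100]; in NumPy this can be expressed in a single call, for example as below (the array values are hypothetical).

```python
import numpy as np

f = np.array([[-5.0, 42.0], [101.5, 99.9]])  # example reflectance values
g = np.clip(f, 0, 100)                       # Equation (16): clamp to [0, 100]
print(g)                                     # [[  0.   42. ] [100.   99.9]]
```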
With respect to our three case images, atmospheric correction did not help to improve the accuracy of the extraction of impervious surface distribution.

Alternatives to the Coastal Band
There is no coastal band in other Landsat imagery, which may influence the applicability of the proposed method. Here, we compare and discuss the differences that occur if impervious surface distribution is extracted using the blue band instead of the coastal band. The differences between the values for typical samples of impervious surfaces and other ground objects for the ratio of the coastal band to the NDVI (coastal band/NDVI) and the ratio of the blue band to the NDVI (blue band/NDVI) were calculated, as shown in Figure 13. We can see that if the blue band/NDVI is used, the difference between impervious surfaces and other ground objects decreases slightly.

The coastal band was then replaced with the blue band, with the other parameters kept the same as those used in the previous workflow, to extract the impervious surface distribution from the above three case images. We also determined the accuracy and found that the average recall, precision, and F1 score were 93%, 87%, and 90%, respectively, representing decreases of 2%, 4%, and 3%, respectively. Therefore, it is feasible to use the blue band to extract impervious surface distribution from other Landsat remote sensing images without the coastal band, albeit with a certain decrease in accuracy. In contrast to validation using images for a single region and season [7-10], our study used three images obtained in summer and winter. The RISI exhibited higher accuracy and better applicability than the other methods used for comparison, and it avoids the additional feature selection and classifier selection processes used in classification methods.
Conclusions
This study proposed a novel index, namely, the RISI, for extracting impervious surface distribution from Landsat 8 imagery and validated it using three images covering urban areas of China in winter and summer. The results of the extraction were compared with those obtained using the NDBI, IBI, NDISI, NDII, and CBI. The main conclusions are as follows: (1) The RISI effectively distinguishes impervious surfaces from other ground objects, especially bare soil, and gives higher recalls, precisions, and F1 scores, of which the average values for the three case images were 95%, 91%, and 93%, respectively. The validation confirmed the robustness of the RISI. (2) The use of the coastal band after a 0-1 transformation can further increase the difference between impervious surfaces and bare soil. (3) The workflow has been shown to have good applicability in the extraction of impervious surface distribution from Landsat imagery. It is still necessary to perform validation using more images with complex compositions and distributions of ground objects to refine the workflow so that it suits more situations.
9,745.4
2019-06-28T00:00:00.000
[ "Mathematics", "Environmental Science" ]
Effect of Magnetically Treated Tap Water Quenchant on Hardenability of S45C Steel 2022 The objective of this work was to investigate the effects of magnetically treated tap water quenchant on the hardenability and quenching crack resistance of S45C steel. The magnetically treated water quenchant was prepared by circulating regular tap water through a 130 mT magnetic field. The S45C steel was austenitized at 860°C for 30 minutes. The hardenability of S45C steel quenched in magnetically treated tap water, measured on a transverse section, did not differ from that obtained with regular tap water quenchant. In measurements from the quenched end, the hardenability of S45C steel quenched in magnetically treated water was below that with tap water quenchant. On the other hand, the quenching crack resistance of S45C steel quenched in magnetically treated tap water was higher than that obtained with regular tap water. Moreover, the microstructures of specimens quenched in magnetically treated tap water quenchant differed from those with regular tap water quenchant. A fine martensite structure formed in specimens quenched in regular tap water quenchant, while coarse lath martensite formed in specimens quenched in magnetically treated tap water. Introduction Heat treatments of steel parts are typically needed in the manufacture of automotive components. Most engine components, all parts of gear boxes, axles, drive shafts and suspension parts, as well as steering components and injection systems, are frequently hardened and tempered, or carburized, or nitrocarburized 1 . Hardening of steel includes austenitizing and quenching steps. Austenitizing means heating to about 50°C above the upper and lower critical temperatures, leading to formation of an austenite phase when the elevated temperature is held for a proper time. The subsequent rapid cooling by immersion in a quenching medium is called quenching. The microstructure and properties of steel after quenching depend on the choice of quenching medium, such as water, salt solution, and oil, which differ in their cooling rates. Water is inexpensive, readily available, and, unless contaminated, easily disposed of without causing pollution or health hazards. One disadvantage of water is its rapid cooling rate that persists at lower temperatures, where distortion and cracking are more likely to occur 2 . Quenching in oil provides slower cooling rates than water quenching, which reduces the possibility of introducing distortions and cracks in the quenched piece 3 , while the environmental effects of oil waste pose limitations. Finding quenchant alternatives to oils is of interest in related research. Alternative quenchants have been widely investigated. Wu et al. 4 developed water quenching in an electric field, applying currents to samples placed in the water. They found that the hardness of samples quenched in water with an electric field was higher than that of samples quenched in plain water. The electric field disturbed the vapor films covering the hot samples in the first stage of cooling and increased the heat transfer rate. On the other hand, quenching in a magnetic field has been studied by adding magnetic particles of 10 nm diameter into the water quenchant, which also increased the hardness of the samples 5 . In addition, Akhbarizadeh et al. 6 investigated the effects of a magnetic field during deep cryogenic treatment at -195°C on the corrosion and wear properties of 1.2080 grade tool steel. The tool steel sample was attracted to a magnet bar during quenching.
They found that as the magnetic field was applied, the hardness and the corrosion resistance of the tool steel decreased, while the wear resistance increased. Moreover, Zhang et al. 7 investigated the effects of a high-intensity magnetic field on the austenite-to-ferrite transformation in 42CrMo low alloy steel at different cooling rates. Superconductive magnets were used to generate magnetic field intensities up to 15 T. They reported that the magnetic field accelerated the transformation of austenite to ferrite, increasing the amount of ferrite and pearlite after quenching, while bainite formed in the case without a magnetic field. Prior studies have not investigated the effects on the hardenability of steels of water quenchant circulated through a magnetic field. Effects of a magnetic field on the properties of water have been studied; it can change the electrical conductivity and the evaporation of water 8 . The electrical conductivity and the evaporation of water exposed to a magnetic field were higher than without the magnetic field. The viscosity of water decreased with increasing exposure time to a magnetic field 9 . When water is circulated through a magnetic field, its surface tension decreases and its viscosity increases. Cai et al. 10 stated that a magnetic field induced hydrogen bonding that led to larger water molecule agglomerates. As flow rate and magnetic field intensity were increased, evaporation and heat transfer of the water increased 11 . Hence, circulating water through a magnetic field can possibly alter its quenchant properties, effectively giving novel control of water quenchants. Therefore, water quenchant circulated through a static magnetic field was investigated in this study, and the feasibility of water quenchant circulation through a static magnetic field is evaluated. Preparation of the samples S45C steel was used for the test specimens of this work, and their chemical composition, shown in Table 1 in weight percentages, was determined by Applied Research Laboratories ARL3460 Optical Emission Spectrometry (OES). The dimensions of the as-received S45C were 32 mm diameter and 1000 mm length. The test specimens were machined to cylindrical bars with 32 mm diameter and 100 mm length. An S45C steel rod of 13 mm diameter was cut into 15 mm long pieces for quenching crack resistance tests. Preparation of magnetically treated water quenchant Rectangular NdFeB permanent magnets (100 mm long, 15 mm wide and 5 mm thick) were used to provide a static magnetic field, as shown in Figure 1a. Each magnetic piece had a north pole face and an opposing south pole face, as shown in Figure 1b. The magnetic flux density was measured at the contact surface of each piece by using a tesla-meter (PHYWE No. 13610-93, Germany); the magnetic field intensity was not uniform across the contact surface, and its maximum was 130 mT. A device to treat tap water magnetically was built based on the apparatus described by Gabrielli et al. 12 , as shown in Figure 1c. It consisted of seven pairs of permanent magnets with north and south poles facing each other, with the distance between opposing magnetic poles being 25 mm. A zinc-coated steel pipe of 1000 mm length and 25 mm outer diameter was inserted between the magnets, as shown in Figure 1c. Regular tap water (50 liters) was circulated through the pipe, passing between the magnets at a constant 5 liters/min flow rate, by a power head pump (Sonic AP1200).
In this configuration the magnetic field was perpendicular to the flow of the regular tap water. The regular tap water used in this experiment had a pH of about 7.7, total dissolved solids (TDS) of about 32-48 ppm, and an electrical conductivity of 74 μS/cm. The chemical composition of the regular tap water was examined by a Perkin Elmer Optima 8000 Inductively Coupled Plasma-Optical Emission Spectrometer (ICP-OES); the results were 10.5 mg/l Cl, 0.14 mg/l F, <5 mg/l SO4, 0.29 mg/l NO3, 22.7 mg/l CaCO3, and 0.344 mg/l Fe, as shown in Table 2. Hardenability Test The S45C specimens of 100 mm length and 32 mm diameter were austenitized at 860°C for 30 minutes and were then removed from the furnace and placed quickly (within 5 s) in a rectangular quenching chamber (25 cm wide, 25 cm long, and 25 cm tall), shown in Figure 2a and b. Two quenching techniques were applied in the hardenability tests. First, the entire specimen was immersed in 9 liters of quenchant for 5 min, as shown in Figure 2a. The quenched specimen was cut into two pieces at the middle along the length direction. The transverse sections of the two cut pieces were ground and polished. Rockwell C hardness measurements were made along the transverse sections from the circular boundary to the center. This method is based on the hardenability test of Chen et al. 13 Second, only half of a sample was immersed in the quenchant for 5 min, as shown in Figure 2b. The volume of quenchant in the quenching chamber was about 3.125 liters, for a fill height of 50 mm. The quenched specimens were then ground flat to a depth of 0.5 mm along the entire length of the bar, on two opposing sides. Rockwell C hardness measurements were made along the length of the bar from the quenched end, in two zones: the immersed lower half and the upper half above the quenchant level. Three replicate experiments were run for samples quenched in regular tap water (W); tap water circulated through a magnetic field of 130 mT, i.e., magnetically treated tap water (MW); and oil quenchant. The temperatures of the water quenchants with or without magnetic treatment were 25.0±0.5°C, and the oil quenchant was at 32.0±2.0°C before quenching. Quenching cracking resistance test For quenching crack resistance measurement based on the paper of Chen et al. 13 , specimens of 15 mm length and 13 mm diameter were heated to 860°C, held for 15 minutes, and quenched in the alternative quenchants. The austenitizing time of this test was shorter than that of the hardenability test due to the smaller specimen size. The temperatures of the water quenchants with or without magnetic treatment were 25.0±0.5°C, and the oil quenchant was at 32.0±2.0°C before quenching. Up to 50 specimens were prepared for each quenchant. After quenching, all the specimens were ground, polished, and etched. A metallographic inspection was then performed. Specimens with quenching cracks were counted and the typical appearance of the cracks was assessed. The effect of magnetic treatment on conductivity of tap water quenchant The magnetically treated tap water had a higher electrical conductivity than tap water without magnetic field treatment and circulation, as shown in Figure 3. The electrical conductivity of MW increased gradually from 74 μS/cm to 83 μS/cm during circulation through the static magnetic field over 144 hours. This result matches the investigations of Holysz et al. 8 and Szczes et al. 11 Toledo et al. 14 found that viscosity, surface tension and vaporization enthalpy increased when water was exposed to magnetic field treatment, and these are correlated with the intermolecular forces.
Wang et al. 15 found that the properties of tap water changed when it was circulated through a magnetic field. The magnetically treated tap water had increased evaporation, with decreased specific heat and boiling point. In addition, magnetic field strength has a marked influence on the physical properties of tap water. The electrical conductivity of regular tap water (W) showed no significant change over a similar time frame. The magnetically treated tap water (MW) in this experiment was circulated through the magnetic field for 144 hours. Effect of magnetically treated tap water quenchant on hardenability of S45C steel Figure 4 shows the HRC hardness measured on transverse sections at different distances from the circular boundary of the section, when an entire specimen was immersed in the quenchant. Chen et al. 13 reported that the hardness of #45 steel quenched in water was over 50 HRC at 2 mm depth from the surface. This work followed the experiment of Chen et al. 13 , which tested different sample sizes. However, the hardness varied by depth similarly in samples quenched in magnetically treated tap water and in regular tap water; the difference was less than the standard deviation. The specimen quenched in oil did not harden. For the hardenability test with only half of each specimen immersed in the quenchant (Figure 2b), the quenched specimen was ground to have flat surfaces at a depth of 0.5 mm from the original cylindrical surface, on two opposing sides. The surface hardness was measured from the quenched end. The surface hardness profiles of S45C steel quenched in regular tap water, magnetically treated tap water, and oil quenchant are shown in Figure 5. After half of the specimens were immersed in the quenchants, the average hardnesses at 31 mm from the quenched end of each specimen were 61.7±0.7 HRC, 56.7±1.6 HRC and 19.9±0.8 HRC for quenching in regular tap water, magnetically treated tap water, and oil quenchant, respectively. The average hardness decreased gradually with distance from the end of the specimen, from 31 mm to 45 mm, and then decreased sharply around the middle of the specimen (at 50 mm from the end) after quenching in regular tap water or in magnetically treated tap water, which were the two quenchants causing hardening. The hardness changed with distance from the quenched end of the specimen. The hardness profiles were similar to Jominy hardness test profiles, as presented by Nunura et al. 16 and Ghrib et al. 17 The hardness in the upper half of the sample, above the quenchant, decreased gradually with increasing distance (52-81 mm from the quenched end of the specimen). The hardness profile of S45C steel quenched in oil changed little. The hardenability of S45C quenched in magnetically treated tap water tended to be lower than that obtained with the regular water quenchant. It is possible that the evaporated amount of tap water circulated through a magnetic field was higher than without magnetic exposure, as reported by Holysz et al. 8 and Szczes et al. 11 Faster evaporation may create a vapor blanket that insulates and decreases heat transfer. The hardness profile of S45C steel in Figure 4 arose from immersing the specimen in the W and MW quenchants with heat transfer in the radial direction, while halfway-immersion quenching, with heat transfer in the longitudinal direction along with air cooling, resulted in the hardness profile in Figure 5. The hardness profile varied less in the radial direction than in the longitudinal direction. However, both hardenability test methods showed no hardening effect from the oil quenchant.
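The replicate-averaged hardness profiles plotted in Figures 4 and 5 can be summarized as mean ± sample standard deviation at each distance. A minimal sketch; the distances and HRC values below are placeholders, not the measured data.

    import numpy as np

    # Distances from the quenched end (mm) and three replicate HRC profiles
    # for one quenchant; placeholder numbers only
    distance_mm = np.array([31, 35, 40, 45, 50, 60, 81])
    replicates = np.array([
        [61.5, 60.9, 59.8, 58.0, 40.1, 25.3, 22.0],
        [62.3, 61.2, 60.1, 57.5, 41.0, 24.8, 21.5],
        [61.3, 60.5, 59.5, 58.2, 39.5, 25.0, 21.8],
    ])

    mean_hrc = replicates.mean(axis=0)
    sd_hrc = replicates.std(axis=0, ddof=1)  # sample SD over 3 replicates

    for d, m, s in zip(distance_mm, mean_hrc, sd_hrc):
        print(f"{d:3d} mm: {m:5.1f} ± {s:.1f} HRC")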
Quenching crack resistance Table 3 shows the quenching cracking ratios of S45C steel when quenched in the magnetically treated water, water, and oil quenchants. The quenching cracking ratio of S45C steel quenched in magnetically treated tap water (MW) quenchant was 12%, which is clearly less than with regular tap water (W), which gave a 26% quenching cracking ratio. Chen et al. 15 investigated the quenching cracking ratio of #45 steel quenched in water, and reported a 16% quenching cracking ratio. A 25% quenching cracking ratio has been reported by Canale and Totten 18 for AISI 1045 steel parts with a size of 10 × 10 × 55 mm and a 2 mm diameter hole through the sample at a distance of 5 mm from one of the ends. Material type, environment, and size/shape of the specimen interact in very complex ways affecting the risk of cracking. The S45C steel quenched in water has a higher risk of cracking mainly due to the large stresses incurred by the martensitic transformation 15 . On the other hand, wetting kinematics has a large effect via non-uniform cooling. Film boiling is unstable and highly variable, and water does not rewet steel in a uniform manner during still quenching, leading to non-uniform cooling 19 . Pang and Deng 9 found that the wetting angles of magnetized water on the surfaces of hydrophobic materials were lower than those of fresh water. The surface tension of magnetized water also decreased compared with fresh water. Therefore, it is possible that magnetically treated tap water gave more uniform cooling than tap water, resulting in less quenching cracking. Quenching cracking and distortion of steel parts limit the use of water quenchants. Figure 6a and b show the macro-crack morphologies (pointed out by the black arrows) of specimens quenched in regular tap water and in magnetically treated tap water, respectively. The straight crack propagated obviously in the radial direction from the surface towards the core (Figure 6a), caused by high quench severity or excessive cooling rates during the quenching 20 . The crack path of a specimen quenched in magnetically treated tap water (Figure 6b) was shorter than when quenched in regular tap water. In addition, the cracks formed around fine martensite structures in the specimen quenched in regular tap water (Figure 7a), and a coarser martensite structure was seen in the specimen quenched in magnetically treated tap water (Figure 7b). Three specimens without cracking (after being subjected to the quenching crack resistance test) were selected for hardness measurements. The hardnesses when quenched in regular tap water were 58.0-59.0 HRC. Conclusions The effects of magnetically treated tap water quenchant on the hardenability and quenching crack resistance of S45C steel were investigated in this work. The results show that the hardenability of S45C steel quenched in magnetically treated tap water was similar to that with regular tap water quenchant in the transverse section measurement. However, the hardenability of S45C steel quenched in magnetically treated water was lower than that with tap water quenchant around the quenched end of a rod-shaped sample quenched with only halfway immersion. Interestingly, the quenching crack resistance of S45C steel was higher when quenched in magnetically treated water than with tap water quenchant. The quenching crack ratio of S45C was 12% with magnetically treated tap water, but 26% with regular water.
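The 12% vs. 26% cracking ratios invite a quick significance check that is not part of the original analysis. A minimal sketch assuming exactly 50 specimens per quenchant (the text only says "up to 50"), using a two-proportion z-test:

    from statsmodels.stats.proportion import proportions_ztest

    # Assumed counts: 12% and 26% of 50 specimens each (MW vs. W)
    cracked = [6, 13]
    n_specimens = [50, 50]

    z, p = proportions_ztest(cracked, n_specimens)
    print(f"z = {z:.2f}, p = {p:.3f}")  # tests whether 12% vs 26% differ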
The steel quenched in oil had no cracks, as oil quenching also gave no hardening. In addition, the microstructures of specimens quenched in magnetically treated tap water were different from those with regular tap water quenchant. A fine martensite structure had formed in specimens quenched in regular tap water, while coarse lath martensite formed in specimens quenched in magnetically treated tap water quenchant. The magnetically treated tap water, with its increased electrical conductivity, could possibly be applied as a quenchant to reduce the quenching crack rate relative to regular tap water.
4,053
2022-01-01T00:00:00.000
[ "Materials Science" ]
Nonequilibrium Spin-Hall Detector with Alternating Current An oscillographic study of the Hall voltage with an unpolarized alternating current through a platinum sample revealed chiral features of the Hall effect, which clearly demonstrate the presence of the spin-Hall effect in metals with a noticeable spin-orbit interaction. It was confirmed that, as in the case of direct current, the possibility of a spin-Hall effect is associated with the presence of an imbalance of the spins and charges at the edges of the samples, which is realized using their asymmetric geometry. In particular, it was found that such chiral features of the nonequilibrium spin-Hall effect (NSHE) as independence from the direction of the injection current and from the direction of the constant magnetic field make it possible, in the case of alternating current, to obtain a double-frequency transverse voltage, which can be used as a platform for creating spintronics devices. Introduction Since the introduction of spin as an additional degree of freedom of the electron, a number of characteristic properties of the electron wave function, which follow from the relativistic Dirac equation, have been predicted and described. In 1929, Mott first showed that one of them could be the chiral asymmetry of scattering of electrons with different spin directions in a central force field under conditions of a relativistic spin-orbit (SO) coupling [1]. Forty years later, Dyakonov and Perel, based on this idea, predicted for nonmagnetic conductors the effect of curving of electron trajectories with opposite spin orientations, followed by their accumulation at opposite edges of the samples [2], which served as an impetus for active studies of the possibility of generating spin currents using the spin-Hall effect (SHE). Since that time, many SHE experiments have been carried out in the framework of the concept of the Mott impurity mechanism of asymmetric spin scattering [3]. Later, under the conditions of SO interaction, which removes the double spin degeneracy, spin-dependent behavior of electrons in the absence of scattering was predicted, due to the possibility of spin-dependent induction of a transverse electron velocity component in an external electric field [4,5,6]. However, for a long time, the detection of these effects by electric methods in a "pure experimental setup," using an unpolarized injection current, seemed impossible. To avoid this difficulty, researchers resort to preliminary polarization of the current injected into the samples using ferromagnets, whose ability to create spin polarization has been repeatedly confirmed, for example, by the manifestation of an anomalous spin-Hall effect in them. The spin-polarized current obtained in a ferromagnet, which is then introduced into the material under study, induces a charge and spin imbalance in the sample (in particular, in the SHE), which makes it possible to use electric measurement of spin currents [7,8] as an additive to the nonequilibrium state of charges and spins in the system as a whole [9,10]; this leaves a certain ambiguity in the interpretation of the results. We previously suggested a method for creating nonequilibrium in spins and, correspondingly, in charges, which made it possible to study the momentum-spin dynamics of electrons in metals by electric methods, without resorting to improper methods of polarizing the current introduced into the sample [11].
It consists in the use of samples with an asymmetric cross-sectional shape, the characteristic size of which, ~√S (S is the cross-sectional area), significantly exceeds the mean free path of carriers ℓ and the spin relaxation length ℓs. NSHE with Alternating Current As is known, in the absence of external sources of electric field, the distribution of spins and their corresponding charges in the sample depends on the shape of the sample, but with respect to a certain center of symmetry (center of gravity) it remains in equilibrium, so that the gradients of the electrochemical potential and the spin chemical potential in the sample remain equal to 0 in any direction for any shape of sample. However, the non-uniformity of the distribution of carriers over the volume with an asymmetric sample shape leads, as is well known, to such an effect as the Hall effect, where the voltage transverse with respect to the current is determined by the total rather than the specific (like the Hall constant) number of charges in the sample and scales with the volumetric macroscopic parameter (thickness) of the sample. According to the concept of SO interaction, upon application of an electric field, the nonequilibrium dynamics of spins in momentum space should manifest itself in the sample due to the drift addition to the carrier velocity, which induces an addition to the spin-orbit field (for example, within the framework of a Hamiltonian with Rashba spin-orbit coupling [8]). This, in turn, will lead to the appearance of equal-in-magnitude fluxes of spins of opposite directions along y, due to the deviation of spins up at kx > 0 and spins down at kx < 0 (k is the quasimomentum). The process will end with the establishment of equilibrium between the strength of the spin-orbit field and the gradient of the spin chemical potential arising between the spins of opposite directions (μ↑ − μ↓), and with the accumulation of charges with opposite spins at opposite edges of the sample in equal amounts, regardless of the cross-sectional geometry, if there are no spin relaxation processes. In this case, the appearance of a gradient of the spin chemical potential in the y direction will not be accompanied by a charge imbalance, and the gradient of the electrochemical potential in this direction remains zero. This means that the condition ℓs ≫ d (d is the sample thickness) makes it impossible to study the SHE by electrical methods. However, if the dimensions of the sample in the cross section significantly exceed the mean free path, d ≫ ℓ, so that the inequalities ℓ ≪ ℓs ≪ d hold, then a "spin-flip" region is inevitable, where the dynamics of the spins is stochastic, and spin currents of oriented spins will appear only where the spins are coherent. In Figure 1, the regions of coherent spin flow near the opposite edges of the sample, with unequal thicknesses dA ≠ dB in the transverse direction, differ and contain an unequal number of carriers (spins), NA ~ dA and NB ~ dB. The crossed-out arrows indicate spin flows that do not reach the edges of the sample. As a result, the total charge current due to spin dynamics under conditions of SO interaction will be nonzero and proportional to the spin conductivity σs. Thus, in an asymmetric sample, spin imbalance is accompanied by charge imbalance, which allows one to study the features of the spin-Hall effect by electric methods without the aid of ferromagnets.
Two distinctive features of such a nonequilibrium SHE at constant current, studied in detail in [12], should be the independence of the direction of the nonequilibrium gradients of the spin and charge chemical potentials from the direction of the current, as follows from the diagrams in Figure 1, and from the direction of the constant magnetic field: unlike the common Hall effect (CHE), which is described by a vector quantity, the interaction of oppositely oriented 'up' and 'down' spins with magnetic fields of opposite directions should be symmetrical, so that the spin conductivity should be a scalar. The indicated NSHE properties make it easy to separate the components of the voltage transverse to the current and field directions (see [12]) into a spin part, USHE, and a Lorentz part, ULHE, where the spin part is even and the Lorentz part is odd under reversal of the magnetic field; here js and σs are the spin current density and spin conductivity, respectively, d is the average distance between the sample edges of unequal thickness along y, and the signs of the second component correspond to the two opposite orientations of the magnetic field. In this report, we show how these features of the NSHE can be demonstrated visually using alternating current and used as an informative platform for elements of spintronics, such as spin detectors. Indeed, since, according to the diagram in Figure 1, the sign of the spin-charge imbalance does not depend on the direction of the current, the possibility of an oscillographic visualization of the effect obviously follows from this fact with an alternating current of, for example, sinusoidal form. Visual Implementation of the NSHE To implement the visualization procedure, we chose a heavy metal (Pt) as a metal with an expected strong spin-orbit interaction and, as a result, a sufficiently large spin-Hall effect for the reliability of its resolution, since the alternating current requires the use of a broadband non-selective recording mode. Samples were prepared in the form of pieces of rolled foil sized 6.5 × 2.5 mm2, with edge thicknesses differing by about 0.1 mm, which was not less than 300 electron mean free paths ℓ (ℓ ≈ 0.3 μm at 4.2 K). The resolution of the AC signal was ~10 Z V. The insets in Figure 2 show the oscillograms for different values of the AC sinusoidal modulation coefficient K(H), with an injection current amplitude of about 100 mA and a frequency of 22 Hz. The three most characteristic waveform series are shown: for K(H) ≫ 1 and small values of USHE(H) and ULHE(H) (series #1); for K(H) ≫ 1 and large values of ULHE(H) (series #3); and for K(H) ≈ 1 (series #2). In each series, oscillograms are presented for two opposite directions of the magnetic field (middle photos), which were realized by switching the direction of the current by changing its phase by π. At the top of each series are the initial waveforms at H = 0, associated with a slight nonorthogonality of the Hall contacts. The USHE curve from series #2, averaged over the two waveforms shown in the #2 inset for opposite field directions, after excluding ULHE(H) according to the rules of expression 1, is shown separately. It is seen that the phase of USHE does not depend on the phase of the modulating signal, as expected for the spin-Hall effect. The oscillograms were obtained with the same instrument resolution and the same scanning frequency. The bottom waveforms in the insets (modulating envelopes) demonstrate the dependence of the modulation amplitude on the magnetic field.
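Expression 1 underlies the data processing described here: the spin component of the transverse voltage is even in the magnetic field, while the Lorentz component is odd, so traces recorded at opposite field orientations can be split by symmetrization. A minimal sketch of that decomposition; the trace shapes and names are illustrative, not the paper's data.

    import numpy as np

    def decompose_hall(u_plus, u_minus):
        # Split the transverse voltage into a field-even (spin) part and a
        # field-odd (Lorentz) part, following the symmetry argument above.
        # u_plus, u_minus: voltage traces recorded at +H and -H (same time base)
        u_plus = np.asarray(u_plus, dtype=float)
        u_minus = np.asarray(u_minus, dtype=float)
        u_spin = 0.5 * (u_plus + u_minus)     # even in H: NSHE component
        u_lorentz = 0.5 * (u_plus - u_minus)  # odd in H: ordinary Hall part
        return u_spin, u_lorentz

    # Toy traces at 22 Hz: a rectified (double-frequency) spin part plus an
    # odd Lorentz part; amplitudes are arbitrary
    t = np.linspace(0.0, 1.0, 1000)
    spin_true = 1e-6 * np.abs(np.sin(2 * np.pi * 22 * t))
    lorentz_true = 5e-6 * np.sin(2 * np.pi * 22 * t)
    u_spin, u_lor = decompose_hall(spin_true + lorentz_true,
                                   spin_true - lorentz_true)
    print(np.allclose(u_spin, spin_true), np.allclose(u_lor, lorentz_true))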
For convenience of analysis, the scanning frequency was chosen such that in all photos the curves were represented within at least one period. To eliminate the influence of the field on the electron dynamics, small fields were used that satisfy the condition ωcτ ≪ 1, where τ = ℓ/vF is the momentum relaxation time and vF is the Fermi velocity. Starting with some values of the magnetic field, peaks appeared on the oscillograms whose order and sign, the same for opposite field directions, corresponded to the features of a function even with respect to the sign of the current and of the magnetic field. In addition, when the direction of the magnetic field was changed, the phase of the modulating signal changed by π, as for an alternating (odd in current and magnetic field) contribution ULHE. Thus, the observed features of the oscillograms completely corresponded to the manifestation of the NSHE: the spin-Hall effect "straightens" (rectifies) the current, as the series of oscillograms #2 particularly clearly demonstrates. Obviously, the possibility of such a visualization of the effect is limited to the range of the modulation factor K(H) ≈ 1. In the case of K(H) ≫ 1, the even-in-current character of the effect is detected after digital processing of the data by formula 1 and averaging of the data for opposite directions of the magnetic field. The resulting dependence of the AC response amplitude of the double-frequency NSHE over the entire measured range of magnetic fields has the form shown in Figure 3. It can be seen that the sign of the amplitude USHE(H) of this response changes to the opposite at a certain crossover point, which corresponds to a change of the AC phase of the signal by π. As in the case of Al, this is apparently due to a change to the hole carrier sign [12]. According to the measured data, the spin Hall angle tan θSHE for platinum is almost an order of magnitude smaller than the usual Hall angle. Conclusion In conclusion, with the help of alternating current, we visualized the nonequilibrium spin-Hall effect in asymmetric platinum plates in the helium temperature range. The observed oscillograms completely confirmed the properties of the spin-Hall effect, which we previously discovered using unpolarized direct current injection into samples of metals such as Al, W, and Pt. The chiral properties of the effect with alternating current make it possible to use them as an informative platform for simple spintronic devices that do not require high technologies for the production of the samples.
2,968
2020-05-29T00:00:00.000
[ "Physics" ]
Noise-Related Song Variation Affects Communication: Bananaquits Adjust Vocally to Playback of Elaborate or Simple Songs Birds communicate through acoustic variation in their songs for territorial defense and mate attraction. Noisy urban conditions often induce vocal changes that can alleviate masking problems, but that may also affect signal value. We investigated this potential for a functional compromise in a neotropical songbird: the bananaquit (Coereba flaveola). This species occurs in urban environments with variable traffic noise levels and was previously found to reduce song elaboration in concert with a noise-dependent reduction in song frequency bandwidth. Singing higher and in a narrower bandwidth may make their songs more audible in noisy conditions of low-frequency traffic. However, it was unknown whether the associated decrease in syllable diversity affected their communication. Here we show that bananaquits responded differently to experimental playback of elaborate vs. simple songs. The variation in syllable diversity did not affect general response strength, but the tested birds gave acoustically distinct song replies. Songs had fewer syllables and were lower in frequency and of wider bandwidth when individuals responded to elaborate songs compared to simple songs. This result suggests that noise-dependent vocal restrictions may change the signal value of songs and compromise their communicative function. It remains to be investigated whether there are consequences for individual fitness and how such effects may alter the diversity and density of the avian community in noisy cities. INTRODUCTION In recent decades, the noise levels in human-altered and natural habitats have substantially increased and affected the way birds sing (Rabin and Greene, 2002;Mennitt et al., 2015;Buxton et al., 2017). Anthropogenic noise can interfere with communication among birds because it can mask their songs through overlap in frequency and time (Brumm and Slabbekoorn, 2005;Barber et al., 2010;Parris and McCarthy, 2013). Several noise-dependent vocal changes have been reported in city birds (Brumm, 2004;Potvin and Mulder, 2013;Gil et al., 2014), which typically yield an increase in song detectability and improved efficiency of communication (Brumm and Slabbekoorn, 2005;Pohl et al., 2012). However, vocal changes may not only affect signal detectability but also signal value (Slabbekoorn and Ripmeester, 2008;Gross et al., 2010) and noise-dependent song variation may thereby involve a functional compromise (Slabbekoorn, 2013;Luther and Magnotti, 2014;Luther et al., 2016;Phillips and Derryberry, 2018). Although reports on noise-dependent song variation are widespread, tests of the potential for functional consequences for communication are still rare (see e.g., Mockford and Marshall, 2009;Ripmeester et al., 2010;Luther and Derryberry, 2012;Luther et al., 2016). There are several ways birds change their songs by which they could counteract masking by urban noise. Several species have been found to sing higher frequencies and/or narrower-banded songs in noisier environments (Slabbekoorn and Peet, 2003;Verzijden et al., 2010;Bermúdez-Cuamatzin et al., 2011;Montague et al., 2012;LaZerte et al., 2016).
As anthropogenic noise is typically biased to low-frequency bands, higher-frequency songs are more audible than lower-frequency songs (Brumm and Slabbekoorn, 2005;Nemeth and Brumm, 2010;Halfwerk et al., 2011) and concentrating all acoustic energy in a narrower band can also raise the signal-to-noise ratio (Hanna et al., 2011). Birds are also reported to sing at higher amplitudes if noise levels rise, and they can sing shorter or in alternating time periods when noise levels are fluctuating (Brumm, 2004;Gil et al., 2014;Gentry et al., 2017;Derryberry et al., 2017). Although such noise-dependent changes may be successful in masking avoidance, they may also restrict the potential for communication by undermining the signaling function of the songs (Slabbekoorn and Ripmeester, 2008;Gross et al., 2010;Slabbekoorn, 2013;Luther et al., 2016). Reduction in frequency band use, for example, may restrict the use of particular syllables and limit possible syllable variation, and consequently limit the song repertoire size of an individual (Montague et al., 2012;Fouda et al., 2018;Winandy et al., 2021). Song elaboration in birds may signal male size or other parental qualities (e.g., Kipper et al., 2006;Botero et al., 2009;Kagawa and Soma, 2013) and can be a good predictor of potential offspring survival and thus affect female preference (Hasselquist et al., 1996;Catchpole, 1997, 2000). Although some bird species may be able to counteract song structure restrictions on song complexity (see Moseley et al., 2019), noise-dependent reduction in song elaboration in general may negatively affect signal quality and undermine information transfer about sender quality. Potential signal value or communicative function of a song can be explored by controlled exposure to playbacks of recorded songs and by experimental manipulation of specific acoustic variation (e.g., Nelson, 1988;Slabbekoorn and ten Cate, 1998;Linhart et al., 2012). Playback of urban and rural song variation has, for example, revealed recognition of urban acoustic features in natural territories of great tits (Parus major) and European blackbirds (Turdus merula). Individual birds approach more closely, stay longer or respond vocally more quickly to playback of songs dependent on whether they are from birds from the same habitat type or similar background noise levels (Mockford and Marshall, 2009;Ripmeester et al., 2010). The potential impact of noise-dependent variation in spectral range has been tested in few studies in both male-female (Halfwerk et al., 2011;Huet des Aunay et al., 2014) and male-male communication (Luther and Magnotti, 2014;Luther et al., 2016;LaZerte et al., 2017;Phillips and Derryberry, 2018). The bananaquit (Coereba flaveola), an abundant bird species of neotropical cities, is a good system to study the potential signal value of song elaboration. We previously showed bananaquits exhibit noise-dependent variation in song elaboration: they sing elaborate songs, rich in syllable types and syllable transitions, in quiet territories, and simple and repetitive songs that are poor in syllable diversity in noisier territories (Winandy et al., 2021). They are relatively abundant across city habitats, used to human presence, and can be highly territorial to conspecific intruders (Hilty and Christie, 2018;personal observations). Consequently, bananaquits are very suitable for playback studies that demand close approach of researchers for behavioral observations and recordings.
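As the masking argument above suggests, shifting song energy upward and into a narrower band can raise the signal-to-noise ratio against low-frequency traffic noise. A toy numeric sketch, assuming an invented 1/f-shaped noise floor and a fixed total song energy; none of the numbers are measured values.

    import numpy as np

    def inband_snr_db(f_lo, f_hi, song_energy=1.0):
        # In-band signal-to-noise ratio for a song occupying [f_lo, f_hi] Hz
        # against a synthetic, low-frequency-biased noise spectrum
        f = np.linspace(f_lo, f_hi, 500)
        noise_psd = 1.0 / (1.0 + f / 1000.0)  # arbitrary illustrative shape
        noise_power = np.trapz(noise_psd, f)
        # Same total song energy, concentrated in the (possibly narrower) band
        return 10 * np.log10(song_energy / noise_power)

    print(inband_snr_db(2000, 8000))  # wider, lower band
    print(inband_snr_db(4000, 7000))  # higher and narrower: larger SNR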
In this study, we performed a playback exposure experiment and tested whether bananaquits responded differently to elaborate vs. simple songs. More elaborate songs were characterized by higher syllable diversity (i.e., more syllable types per song), but also by lower minimum and higher maximum frequencies (Winandy et al., 2021). In many species of birds, songs with an aggressive territorial function tend to be shorter and more repetitive than songs with a mate attraction function (Searcy and Anderson, 1986;Collins, 2004). Bananaquit territorial responses to playback may therefore be stronger to simpler songs and may also elicit a vocal response matching in song elaboration. More elaborate songs may require a wider frequency bandwidth. Consequently, we aimed at answering the following questions: (1) do simpler songs trigger stronger responses than the more elaborate songs? (2) do individuals match song elaboration? (3) do elaborate songs trigger wider frequency range songs from territory owners? This study could provide new insights into how noise pollution, through the simplification of urban songs, can alter the evolution of sexually selected signals. Ethics The animal study was reviewed and approved by the local "Committee of Ethics in the Use of Animals," Federal University of Bahia -UFBA, Brazil (no. 36/2016). Study Site and Species We conducted our playback experiment in 20 bananaquit (Coereba flaveola) territories in the city of Salvador, Bahia, Brazil (12°57'50.9"S, 38°30'21.0"W). We tested the birds during the Brazilian summer, between February and March of 2018. This species can sing and breed throughout the year (Hilty and Christie, 2018) and territorial responsiveness does not fade during summer. The territories were located in different habitat types and traffic noise regimes: in Atlantic Forest urban parks, urban gardens and areas close or next to main avenues, with variable stands of concrete buildings and trees. We performed the experiment only during relatively quiet moments of the day for each territory, between 05H00 and 07H00 in the morning. Prior to the experiment, we assessed the noise levels of the territories throughout the morning, i.e., 05H00 to 10H00, with a sound pressure level meter, and the results are reported in Winandy et al. (2021). Noise levels rose gradually to become above 55 dB(A) only after 07H00. By conducting our experiments before 07H00, before the rush hour, and by keeping the distance between playback speaker and responding bird less than 14 m, we avoided possible interference of traffic noise with playback song detection, which was not our target in this study. Bananaquits are nectarivorous songbirds that occur across the Neotropics from Mexico to Argentina and the Caribbean islands. They can be easily observed in several types of human-altered habitats, from highly urban to rural areas. They are territorial birds that sing for mate attraction and territorial defense throughout the day and year (Hilty and Christie, 2018). The singing is thought to be done primarily by males, although more research is needed about possible singing behavior in females (Riebel et al., 2019). The songs are composed of series of high-pitched syllables, which vary from complex sequences of diverse element types with high transition rates to highly repetitive series of less variable syllable types (Winandy et al., 2021).
Bananaquits have highly variable repertoires, and song elaboration may vary within and among individuals with behavioral context and local noise levels (Winandy et al., 2021). Sound Recording and Analysis Before exposing the birds to the playback, we recorded their pre-playback songs for 1 min. Usually, in 1 min of recording the bananaquits sang about 10 songs, but for some individuals we obtained fewer than five songs. We recorded the birds from a distance of 2-14 m, using a Tascam DR-44WL recorder connected to a Sennheiser (Wedemark, Germany) shotgun directional microphone (ME67 + K6). In total, we performed acoustic analyses on 11.1 ± 5.2 (mean ± SD) songs per individual. We used Raven Pro software, version 1.5 (Cornell Laboratory of Ornithology, Ithaca, NY, United States) for all processing of recordings and song measurements. Spectrogram settings were kept constant: FFT length: 512, window: Hann, overlap: 75%. All song recordings, pre-playback and response songs, were first cut into shorter song sequences, separated from recorded playback stimuli, before the analyses. In this way, the observer was always blind to the origin and nature of songs in the stimuli used for the playback experiment. We used cursor placement to extract three spectral song variables (cf. Verzijden et al., 2010;Winandy et al., 2021): minimum frequency, maximum frequency, and frequency bandwidth. The low-noise conditions during playback and the observer being blind to the stimulus type reduced the chance for observer bias or artifact effects in our spectral measurements (Verzijden et al., 2010;Brumm et al., 2017). Additionally, we counted the number of syllable types per song as a measure of song elaboration. Each playback stimulus consisted of three different songs of the same individual and song category (simple or elaborate). Songs from the same individual were only used for one stimulus and thus not in different song categories. The three songs were played back twice in the same sequence with a silent interval of 3 s between each of them (cf. Ripmeester et al., 2010). We created in this way 10 unique exemplars of each playback stimulus: 10 simple and 10 elaborate playback stimuli. Playback Design We played back the stimulus songs in bananaquit territories of actively singing birds without nearby competitors that could be agonistically interacting at the time of the experiment. These procedures were meant to reduce variation in behavioral responses related to different motivational states. We placed the 'JBL clip 2' loudspeaker at about 5-10 m from the focal male and the observer was positioned 5-10 m further away. We measured the amplitude of the playback with a Skill-Tec SKDEC-02 (São Paulo, São Paulo, Brazil) sound pressure level meter (A-weighted, fast response, range 30-130 dB, 1 s interval) and adjusted playback levels to a volume of 70 dB(A) at a distance of 1 m from the speaker. After the start of the playback of the first song stimulus series, simple or elaborate, we scored the behavior of the focal individual for 1 min. During the playback and for 2 min after it had ended, we also recorded the songs. After the 2 min interval, we played back a song stimulus from the opposite category and recorded songs and scored response behaviors for the same periods as before (Figure 2).
FIGURE 1 | Examples of two elaborate (I and II) and two simple (III, IV) song stimuli used in the playback experiment. FIGURE 2 | Time periods overview of the playback procedure in the field. A stimulus of three distinct elaborate or simple songs of the same individual was played twice after a 1 min pre-playback recording phase. After the start of the playback of the first song stimulus series, we scored behavior for 1 min. During the playback and for 2 min after it ended, we recorded the response songs. Following that, the second stimulus was played back to the same focal individual: three distinct songs twice, of the opposite stimulus category (simple or elaborate songs, depending on the order of exposure). The order of the played-back stimuli was randomized. We avoided testing direct neighbors that could have been exposed to previous playbacks. The following behaviors and song measurements were scored: number of flights over the loudspeaker, shortest distance of the focal male to the loudspeaker, number of songs, number of calls, and song and call rate. Statistical Analysis We conducted all statistical analyses in RStudio software (R Core Team), using the packages lme4 (Bates et al., 2015) and MuMIn (Barton, 2016). We performed generalized linear models (GLM) and Akaike's information criterion (AIC) model selection to find out whether the song variables and behavioral responses were affected by the stimulus type (simple vs. elaborate song playbacks) and/or by the order of the stimuli. All song measurements and behavioral responses were entered as response variables in the models. The stimulus category and playback order were entered as fixed factors in the full model and individual as a random factor. We computed the statistics for all possible models, which included: (1) single predictors (stimulus category, order), (2) their additive combinations (category + order), and (3) the null models (without effect of any predictor). The response variables number of syllable types, total number of syllables, and number of flights were entered as interval variables in Poisson generalized models with a log-link function. We selected the best models based on the AICc values, considering ΔAICc > 2 a criterion for a substantial difference between models (Burnham and Anderson, 2002). The model selection was made using the dredge model selection function (package MuMIn) (Barton, 2016). We calculated the marginal (R²m) and conditional (R²c) R² values to evaluate how much the fixed effects (R²m) or the entire model (R²c) explained the variance of the response variables (Nakagawa and Schielzeth, 2013). Finally, we performed post-hoc Tukey's tests for each response variable for which we obtained a minimal model selection. This analysis informed which pairs of playback conditions were significantly different in song or behavioral responses. As we did in our previous correlational study, we investigated again the possible trade-off between the signal frequency reduction and the restriction in song elaboration. Therefore, we fitted linear models to test the relationship between the spectral and elaboration variables with two different datasets: one that included only the spontaneous songs sung before the start of the playback experiment and another with all songs, both the spontaneous and playback-triggered songs.
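The AICc-based selection over candidate models described above can be outlined as follows. The study used R (lme4/MuMIn); this simplified Python sketch fits fixed-effects Poisson GLMs with statsmodels on made-up data, computes AICc by hand, and omits the random individual effect, so it only illustrates the workflow.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    def aicc(result, n):
        # Small-sample corrected AIC: AIC + 2k(k+1)/(n - k - 1)
        k = result.df_model + 1  # parameters including the intercept
        return result.aic + 2 * k * (k + 1) / (n - k - 1)

    # Placeholder data; the real analysis used the field measurements
    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "n_syllables": rng.poisson(12, 40),
        "category": np.tile(["simple", "elaborate"], 20),
        "order": np.repeat(["first", "second"], 20),
    })

    formulas = ["n_syllables ~ 1",  # null model
                "n_syllables ~ category",
                "n_syllables ~ order",
                "n_syllables ~ category + order"]
    for f in formulas:
        res = smf.glm(f, data=df, family=sm.families.Poisson()).fit()
        print(f"{f:35s} AICc = {aicc(res, len(df)):.1f}")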
RESULTS There was no effect of the stimulus type (elaborate vs. simple) on behavioral response strength and vocalization rate. The number of flights, the approach to the speaker, and the song and call rates were all not affected by stimulus category or by the playback order (Table 1 and Figure 3). However, individuals responded in acoustically distinct ways to each playback type. Their songs had fewer syllables and were lower in frequency and wider in frequency bandwidth when they responded to the elaborate song stimuli compared to when they responded to the simple song stimuli (Figure 4). The model selection for song variables showed that the number of syllables per song was significantly affected by the playback stimulus (Table 2). The birds sang fewer syllables per song after being exposed to the elaborate song stimulus than before the playback experiment, 12.42 syllables on average before and 11.40 on average after the elaborate playback stimulus (Table 3). The spectral variables minimum frequency, maximum frequency and frequency bandwidth (Hz) were explained by both the song stimulus category and the order of the stimulus playback (Table 2). Regarding the order, when the elaborate stimulus was played first, the differences in the spectral responses between treatments were more pronounced (Figure 4). Birds significantly lowered the minimum frequency of their songs after being exposed to the elaborate playback (Figure 4 and Table 3). Moreover, they sang songs of significantly wider frequency bandwidth when responding to the elaborate stimulus, followed by a bandwidth decrease when exposed to the simple playback as the second stimulus (Figure 4 and Table 3). Finally, the correlation between song elaboration and song frequency previously found for bananaquit songs was not found for the songs from the playback experiment in the current study. The number of syllable types per song and the spectral variables, minimum song frequency and frequency bandwidth, were not correlated. The correlation did not occur when we only included the spontaneous songs from the pre-playback phase [linear model for low frequency: R² = −0.06, F(1, 16) = 0.01, N = 17, P = 0.89; linear model for frequency bandwidth: R² = −0.06, F(1, 16) = 0.03, N = 17, P = 0.86], nor when all songs from the playback experiment were included, i.e., both spontaneous and playback-triggered songs. DISCUSSION We performed a playback exposure experiment to test whether bananaquits responded differently to elaborate vs. simple songs. We found the following answers to our questions: (1) playback of simpler songs did not trigger stronger (or weaker) behavioral responses than playback of more elaborate songs; (2) individuals did not match song elaboration to the stimulus categories, and even decreased syllable numbers in their songs in response to more elaborate songs; however (3) songs triggered by elaborate song playback had a lower minimum frequency and wider frequency range compared to songs sung before the playback. The frequency bandwidths of songs sung after the elaborate song playback were also significantly wider than those of songs sung after simple song playback. Song Elaboration Is Meaningful Our current results reveal that noise-induced changes in song elaboration concern meaningful changes to territorial birds in neotropical bananaquits. Variation in responsiveness related to variation in song elaboration is in line with other studies in the literature.
In simulated territorial intrusions, for example, dark-eyed juncos (Junco hyemalis) responded more strongly to structurally more elaborate songs, spending longer periods closer to the playback speaker (Reichard et al., 2011). In chaffinches (Fringilla coelebs), both males and females responded more strongly to more elaborate songs, i.e., signals with a higher number of different trill phrases (Leitão et al., 2006), suggesting this song parameter plays a role in both male-male competition and mate attraction. As we found an impact of song elaboration on response song variation and not on response strength, a signaling function of this song feature may be widespread but vary in content among species. The impact of song elaboration on response song variation in our study on bananaquits concerned syllable number and spectral variation. We found no matching in elaboration, as more elaborate stimuli led to less elaborate response songs. We did not expect this, as less elaborate and more stereotypic songs can be associated with male-male interactions, while more elaborate and diverse songs can be more important for female choice (Hasselquist et al., 1996;Searcy and Beecher, 2009;Kagawa and Soma, 2013). However, we did find spectral matching in the minimum song frequency and an increase in the frequency bandwidth when individuals responded to the elaborate playback. Similar changes in song frequency use have been found to be meaningful in other species in various ways. Frequency song matching, for example, can be an aggressive signal between rival birds during a dispute (Searcy and Beecher, 2009), as reported for Kentucky warblers (Oporornis formosus, Morton and Young, 1986) and black-capped chickadees (Poecile atricapillus, Horn et al., 1992;Otter et al., 2002). Relative frequency variation among communicating birds (i.e., frequency mismatch) may be important, as shown for willow warblers (Phylloscopus trochilus, Linhart and Fuchs, 2015). Wider frequency bandwidths can also indicate higher aggressiveness, as white-crowned sparrows (Zonotrichia leucophrys nuttalli) respond less strongly to songs of restricted bandwidth (Luther et al., 2016;Phillips and Derryberry, 2018). Although we still have limited insight into the content of the message, we suggest that, according to the literature, the spectral variation and matching in bananaquit songs may also be meaningful. The modified spectral response, in the absence of a change in strength of other behavioral responses, could also reflect that song elaboration plays a role in moderating territorial disputes (Slabbekoorn and ten Cate, 1996;Searcy and Nowicki, 2000;Otter et al., 2002). FIGURE 4 | Spectral variation in the songs sung in response to each stimulus category. Each line connects song measures of one individual in three different periods of the playback procedure. As playback order had an effect, we provide the data in two separate sets of graphs. Individuals that were exposed first to the elaborate songs followed by the simple songs are depicted in the graphs on the left. The responses of individuals that were exposed first to the simple songs followed by the elaborate songs are depicted in the graphs on the right. * indicates statistically significant differences between the measures in two of the playback periods (*P < 0.05 and **P < 0.01).
Graded variation in agonistic signals can convey increasing and decreasing levels of threat, before this becomes actually apparent in more overt changes in behavioral displays or approach tendencies (Searcy and Beecher, 2009). The fact that the order in which the stimuli were played influenced the escalation behavior of bananaquits in our study confirms such a possibility and warrants further exploration through playback experiments simulating dynamic changes in song elaboration (cf. Hof and Podos, 2013). Elaboration vs. Bandwidth as a Signal There was an interesting discrepancy between the correlational analyses of the spectral and elaboration parameters in our previous (Winandy et al., 2021) and the current study. In the previous observational study, we found frequency bandwidth and minimum frequency to be determined by noise level, and a lower and narrower frequency range was correlated with less elaborate song. In the current experimental study, however, we found a change in bandwidth dependent on song elaboration, but song frequency range was not correlated with song elaboration. We believe that this discrepancy requires further exploration of the potential role of ambient noise in signaling bananaquits. There are two contextual differences in the recording sets that could explain the inconsistency of the correlation: the noise level during recordings and whether the song was sung in response to playback. In the previous study, we recorded the birds in quiet and also in noisy conditions, while in the current study, we only recorded the birds in relatively quiet moments of the day. As we found the correlation among the song parameters only in the first study, in which noisy conditions were present, we believe that the traffic noise could be causally linked to the presence of that significant correlation. This is another indication that noisy conditions may play a role in song syllable use restriction through noise-dependent bandwidth availability. In the previous study we also only recorded spontaneous songs, while in the current study, we recorded both spontaneous and playback-induced songs. However, in the current study, we found no correlation before nor after the playback. We therefore argue that motivational state is not a likely explanation for the lack of correlation between song elaboration and frequency bandwidth in the current study. The Audibility-Signal Efficiency Trade-Off The combination of results of the previous observational study (Winandy et al., 2021) and the current playback study allows a new perspective on the signal audibility/efficiency trade-off (Slabbekoorn and Ripmeester, 2008;Gross et al., 2010;Slabbekoorn, 2013).
The Audibility-Signal Efficiency Trade-Off
The combination of the results of the previous observational study (Winandy et al., 2021) and the current playback study allows a new perspective on the signal audibility/efficiency trade-off (Slabbekoorn and Ripmeester, 2008; Gross et al., 2010; Slabbekoorn, 2013). On the one hand, noise-dependent changes in frequency use may improve signal audibility, as (1) avoiding low frequencies leaves a larger part of the song unaffected by masking from low-frequency traffic noise (Nemeth and Brumm, 2010; Halfwerk et al., 2011); and (2) concentrating sound energy in a spectrally narrower bandwidth will also improve the signal-to-noise ratio (Hanna et al., 2011). On the other hand, as song elaboration is meaningful to the birds themselves (current study), the correlation between frequency bandwidth and song elaboration under noisy conditions (previous study, Winandy et al., 2021) can be interpreted as evidence for a restriction on signal efficiency by noise-dependent song bandwidth contraction. When this signal audibility/efficiency trade-off is relaxed under relatively quiet conditions, the correlation between song frequency bandwidth and song elaboration apparently also fades. Few studies have addressed the consequences of this trade-off between signal audibility and signal efficiency under noisy conditions. We show here for bananaquits that the noise-dependent variation in frequency use concerns biologically relevant signal variation, but for general conclusions the trade-off remains to be tested in more species. We especially need to gain insight into whether vocal changes that improve audibility actually yield any benefit to the signaler. We do know, for example, from a few earlier playback studies that spectral changes potentially driven by masking traffic noise affect response levels and are therefore biologically relevant (Mockford and Marshall, 2009; Ripmeester et al., 2010; Luther and Derryberry, 2012). However, we have only begun to discover the potential for reduced responsiveness to urban song features, as modified by anthropogenic noise conditions, in a mate choice context (Halfwerk et al., 2011; Huet des Aunay et al., 2014) as well as in the territorial context of male-male communication (Luther and Magnotti, 2014; LaZerte et al., 2017; Phillips and Derryberry, 2018).
CONCLUSION
In the present study, we showed that bananaquits recognize variation in song elaboration, as they responded with syllable adjustments and spectrally distinct songs to the variation in song elaboration in our playback stimuli. As song elaboration was shown in an earlier study to be restricted by noise-dependent song frequency bandwidth, the current results confirm that song adjustments could increase audibility through masking avoidance but at the same time affect the signaling function. This provides another example of how the rise in anthropogenic noise levels in avian habitats may affect not only what birds sing, but also what they communicate. We still have little insight into the fitness consequences of masking avoidance and of noise-induced adjustments in signaling content. We therefore believe that more studies are warranted into the human impact on the ecology and evolution of singing birds in environments acoustically altered by noisy human activities worldwide.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
ETHICS STATEMENT
The animal study was reviewed and approved by the local 'Committee of Ethics in the Use of Animals', Federal University of Bahia-UFBA, Brazil (no. 36/2016).
Carbon Nanofibers versus Silver Nanoparticles: Time-Dependent Cytotoxicity, Proliferation, and Gene Expression
Carbon nanofibers (CNFs) are one-dimensional nanomaterials with excellent physical and broad-spectrum antimicrobial properties, characterized by a low risk of antimicrobial resistance. Silver nanoparticles (AgNPs) are antimicrobial metallic nanomaterials already used in a broad range of industrial applications. In the present study these two nanomaterials were characterized by Raman spectroscopy, transmission electron microscopy, zeta potential, and dynamic light scattering, and their biological properties were compared in terms of cytotoxicity, proliferation, and gene expression in human keratinocyte HaCaT cells. The results showed that both AgNPs and CNFs present similar time-dependent cytotoxicity (EC50 of 608.1 µg/mL for CNFs and 581.9 µg/mL for AgNPs at 24 h) and similar proliferative HaCaT cell activity. However, the two nanomaterials showed very different results in the expression of thirteen genes (superoxide dismutase 1 (SOD1), catalase (CAT), matrix metallopeptidase 1 (MMP1), transforming growth factor beta 1 (TGFB1), glutathione peroxidase 1 (GPX1), fibronectin 1 (FN1), hyaluronan synthase 2 (HAS2), laminin subunit beta 1 (LAMB1), lumican (LUM), cadherin 1 (CDH1), collagen type IV alpha 1 (COL4A1), fibrillin (FBN), and versican (VCAN)) when treated with the lowest non-cytotoxic concentrations in HaCaT cells for 24 h. The AgNPs were capable of up-regulating only two genes (SOD1 and MMP1), while the CNFs were very effective in up-regulating eight genes (FN1, MMP1, CAT, CDH1, COL4A1, FBN, GPX1, and TGFB1) involved in the defense mechanisms against oxidative stress and in maintaining and repairing tissues by regulating cell adhesion, migration, proliferation, differentiation, growth, morphogenesis, and tissue development. These results demonstrate the unique potential of CNF nanomaterials in biomedical applications such as tissue engineering and wound healing.
Introduction
Nanotechnology is an emerging field of functional materials with at least one dimension on the scale of nanometers and a broad range of advanced applications such as medical imaging and nanomedicine [1][2][3][4]. Carbon nanofibers (CNFs) are one-dimensional, highly hydrophobic and non-polar filamentous hollow carbon-based nanomaterials (CBNs) that are cost-effective, have good electrical, thermal, and mechanical properties [5,6], and show great promise in biomedical applications [7,8]. CNFs can be used to produce conductive composites [9] for biomedical approaches that require electrical stimulation [10] and are produced at a lower cost and higher purity than other CBNs such as carbon nanotubes (CNTs) [11]. While carbon nanostructures in the form of multiwalled carbon nanotubes (MWCNTs), CNFs, and carbon nanoparticles have shown size-dependent cytotoxicity in vitro in lung tumor cells [12], cytotoxicity tests have revealed a concentration- and time-dependent loss of lung fibroblasts, showing that CNFs are less dangerous than single-walled carbon nanotubes (SWCNTs) [13]. CNFs with diameters of 10 µm and 100 nm did not show toxicological activity in mouse keratinocytes, in contrast with 10 nm diameter MWCNTs and 1 nm diameter SWCNTs, which reduced cell viability in a time-dependent manner up to 48 h [14].
CNFs have also shown potent antibacterial properties against the clinically relevant multidrug-resistant bacterium methicillin-resistant Staphylococcus epidermidis [15] and have been used to enhance the antiviral properties of composite materials [8]. CNFs have been combined with biopolymers to produce non-cytotoxic composites with improved physical and biological properties [16][17][18] in terms of mechanical, thermal, wettability, cell adhesion, and proliferation properties. This type of CBN possesses photocatalytic properties that can enhance its antibacterial activity when it is irradiated with light-emitting diodes [19]. Silver nanoparticles (AgNPs) have been studied in greater depth than CNFs. They are also cost-effective and possess excellent antimicrobial properties [20][21][22][23][24]. In fact, AgNPs are already broadly used in wound dressings for healing processes and treating burns in biomedicine, as well as in the food and textile industries, in paints, household products, catheters, implants, and cosmetics, and in combination with many types of materials to prevent infection [25][26][27][28][29][30]. These nanoparticles have great potential for use in dermatology and wound healing because of their prolonged capacity to release silver ions, showing a concentration-dependent toxic effect in HaCaT cells [31]. Topical delivery of AgNPs promotes wound healing because they exert positive effects through their antimicrobial activity, reduction of wound inflammation, and modulation of fibrogenic cytokines [32]. Varying AgNP morphologies have been reported to have different toxic effects on microorganisms and HaCaT keratinocytes and to affect skin deposition [33]. Their chemopreventive efficacy has been demonstrated in HaCaT cells with a significant reduction in cyclobutane pyrimidine dimer formation after DNA damage induced by UVB irradiation [34], which provides great potential for preventing skin carcinogenesis. AgNPs' UVB-protective efficacy in human keratinocytes depends on their size [35]. Thus, pre-treating HaCaT cells with small AgNPs (10-40 nm) was effective in protecting skin cells from UVB-radiation-induced DNA damage and from UV-radiation-induced apoptosis. However, no protection was obtained by using 60 and 100 nm AgNPs. AgNPs are being increasingly used in the healthcare sector and consumer products, and many commercial products now contain these nanoparticles for topical application to human skin. However, despite their growing number of applications, comprehensive biological characterization still requires further research because of the many controversial results published on their safety [29]. For example, AgNPs at different concentrations reduced the cell viability and metabolism as well as the proliferative and migratory potential of primary normal human epidermal keratinocytes (NHEKs) [36]. NHEKs have been shown to be more susceptible to the application of AgNPs than normal human dermal fibroblasts (NHDFs). A comparative study was made of the effects of AgNPs and ionic silver (Ag+) in terms of cell viability, inflammatory response, and DNA damage in NHDFs and NHEKs [37]. This study showed that Ag+ is more toxic than AgNPs in both NHDFs and NHEKs. However, microorganisms are known to be capable of developing resistance mechanisms against silver [38,39], and the current excessive use of AgNPs as antibacterial compounds in many areas is increasing their potential risk to humans and the environment [40].
In this regard, alternative broad-spectrum antimicrobial carbon-based nanomaterials such as CNFs are characterized by their low risk of inducing microbial resistance [41], which shows their promise in providing long-lasting solutions in biomedicine. In the present study we analyzed the effects of AgNPs and CNFs on human epidermal HaCaT keratinocyte cells in terms of time-dependent cytotoxicity and their possible biomedical applications when used at low non-cytotoxic concentrations as proliferative agents. We also analyzed their capacity to modify the expression of thirteen genes (superoxide dismutase 1 (SOD1), catalase (CAT), matrix metallopeptidase 1 (MMP1), transforming growth factor beta 1 (TGFB1), glutathione peroxidase 1 (GPX1), fibronectin 1 (FN1), hyaluronan synthase 2 (HAS2), laminin subunit beta 1 (LAMB1), lumican (LUM), cadherin 1 (CDH1), collagen type IV alpha 1 (COL4A1), fibrillin (FBN), and versican (VCAN)) associated with oxidative stress, the extracellular matrix, and protein synthesis for the maintenance and repair of different tissues. The expression of these genes is of interest for biomedical applications such as tissue engineering and wound healing.
Materials
Silver nanopowder (<150 nm particle size, product code 484059, 99% trace metals basis) was purchased from Sigma-Aldrich (Zwijndrecht, Switzerland). Carbon nanofibers (CNFs) were provided by Graphenano (Yecla, Spain). These CNFs were previously characterized by high-performance electron microscopy with elemental analysis (EDS), which showed that they were irregular one-dimensional hollow filaments with a wide range of diameters (22.7 ± 11.9 nm) and lengths (737.8 ± 522.4 nm) and the expected carbon-to-oxygen atom ratio (C/O ratio of 37.4) [5]. Fetal bovine serum (FBS), DMEM low glucose, penicillin-streptomycin (P/S), L-glutamine, and epidermal growth factor (EGF) were obtained from Life Technologies (Gibco, Karlsruhe, Germany). An RNA purification kit was obtained from Norgen Biotek Corp (Ontario, Canada), and a PrimeScript™ RT Reagent Kit (Perfect Real Time) from Takara Bio Inc (Otsu, Japan).
Material Characterization
Transmission electron microscopy images were obtained on a JEOL 2100 electron microscope with a LaB6 thermionic gun at 200 kV. The samples were dispersed in aqueous solution by ultrasound and dropped onto a grid-type sample holder composed of Cu and C. Raman spectroscopy was performed on an NRS-3100 spectrometer (JASCO) coupled to a silicon CCD detector and an argon-ion laser (Melles Griot, 514.5 nm, 200 mW). The zeta potential (ζ), dynamic light scattering (DLS), and polydispersity index (PdI) values were obtained with a Zetasizer NanoZS (Malvern, UK). The ζ values were obtained in deionized water (pH = 3, 5, 7, 10, and 12), with the pH varied using HNO3 (Synth, 70%) and NH4OH (Synth, 24%). The DLS technique was used to evaluate the hydrodynamic particle size of the two materials in water and in the Dulbecco's modified Eagle medium (DMEM) used for the biological characterization.
Culture Maintenance
The immortalized human keratinocyte cell line HaCaT was cultured in DMEM low glucose, supplemented with 10% FBS, 2% L-glutamine, and 1% P/S, in a humidified atmosphere at 5% CO2 and 37 °C. The cell medium was changed three times per week, and cells were trypsinized and resuspended in medium at low density when the culture reached 80% confluence.
Preparation of Nanomaterial Stock Solutions
Nanomaterial stock solutions were prepared in sterile DMEM low glucose supplemented with P/S and L-glutamine, but not FBS, and were sonicated for 2 h to obtain a completely homogeneous suspension of the different compounds in the medium. A vial of medium was exposed to the same conditions to serve not only as the control group but also for the subsequent dilutions of the nanomaterials. The stock solutions were used immediately after sonication.
Cytotoxicity Assay
Human keratinocytes were seeded onto 96-well plates at a density of 1 × 10⁴ cells/well and grown in an incubator with a humidified atmosphere (5% CO2 and 37 °C). After 24 h the medium was replaced with 100 µL of the corresponding concentrations, ranging from 0 to 800 µg/mL of each compound. The compounds and concentration batteries were tested in sextuplicate for 3, 12, and 24 h to evaluate cytotoxicity endpoints at different times. Six replicate samples of each concentration were measured, plus an untreated control group (also without FBS). The concentrations selected to calculate the EC50 of both compounds were 20, 40, 80, 150, 300, 500, and 800 µg/mL at 12 and 24 h. Cytotoxicity was evaluated by the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium (MTT) assay. Cells with MTT reagent were incubated for 5 h under the same conditions; the formazan crystals were then solubilized with DMSO, and cell viability was calculated from the absorbance values at 550 nm measured in a Varioskan microplate reader (ThermoScientific, Markham, ON, Canada). The same experiment was carried out in parallel without the MTT reagent to remove false positives due to cell pigmentation by AgNPs and CNFs, so that the background color could be subtracted from the final absorbance values.
Proliferation Assay
To safeguard against cytotoxicity at longer exposure times, two non-toxic, ten-fold-diluted concentrations were selected according to the cytotoxicity results at 24 h. Cells were seeded in a 96-well culture plate at a density of 5 × 10³ cells/well. The stock solution was prepared following the same procedure as indicated in Materials and Methods but with 0.5% FBS instead of 0%. Cells were cultured for 72 or 96 h in a humidified atmosphere (5% CO2 and 37 °C). A proliferative positive control treated with epidermal growth factor (EGF) at 15 ng/mL was included. Cell proliferation was measured by the MTT assay, as in the cytotoxicity assay. Sextuplicates were run for the different conditions and exposure periods.
Gene Expression
Gene expression analysis was performed in triplicate using two non-toxic concentrations based on the cytotoxicity results at 24 h. Cells were seeded in a 6-well culture plate at a density of 1.5 × 10⁶ cells/well. After incubation for 24 h with the different nanomaterials, the supernatant was aspirated and the cells were washed twice with PBS 1× for RNA extraction, cDNA synthesis, and RT-qPCR. Data were analyzed with QuantStudio™ Design & Analysis Software (ThermoFisher, Markham, ON, Canada). The primers for the target genes (Table A1 in Appendix A) and the reference gene (β-actin/ACTB) were designed with the Primer-BLAST software [42]. Data normalization was based on the expression of the reference gene.
Statistical Analysis
The statistical analysis was performed by ANOVA followed by Tukey's multiple-comparison post-hoc analysis. Probit analysis was used to determine the median effective concentration (EC50) values. GraphPad Prism 6 software was used for the statistical analysis at a significance level of at least p < 0.05.
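Although the EC50 values in this work were obtained by Probit analysis in GraphPad Prism, the dose-response fitting step can be sketched in Python; the following minimal example uses a log-logistic (Hill) fit with SciPy instead of Probit, and the viability values are illustrative placeholders, not the measured data:

```python
# Minimal sketch of EC50 estimation from viability-vs-concentration data.
# The study used Probit analysis (GraphPad Prism); a log-logistic (Hill)
# fit is shown here as a common alternative. All data points are
# illustrative placeholders, not values from the experiments.
import numpy as np
from scipy.optimize import curve_fit

conc = np.array([20, 40, 80, 150, 300, 500, 800], dtype=float)   # µg/mL
viability = np.array([98, 95, 90, 82, 65, 55, 40], dtype=float)  # % of control

def hill(c, ec50, slope):
    # Viability falls from 100% toward 0% with increasing concentration.
    return 100.0 / (1.0 + (c / ec50) ** slope)

(ec50, slope), _ = curve_fit(hill, conc, viability, p0=[300.0, 1.0])
print(f"estimated EC50 ≈ {ec50:.1f} µg/mL (Hill slope {slope:.2f})")
```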
Material Characterization
The AgNPs and CNFs were characterized by transmission electron microscopy, Raman spectroscopy, zeta potential, and DLS. The morphologies of the AgNPs and CNFs used are shown in Figure 1 at two magnifications. The DLS measurements performed to evaluate the hydrodynamic particle size of the two nanomaterials showed larger size values (1693 and 1142 nm) in the DMEM used for the biological characterization than in water (461.3 and 811.2 nm) for both AgNPs and CNFs (Table 1). The two types of nanomaterial showed different zeta potential (ζ) as a function of pH in water solution (Figure 2). The Raman spectra of the two chemically different nanomaterials (metallic nanomaterial versus carbon-based nanomaterial) are shown in Figure 3. The most representative Raman peaks of the AgNPs (1380 and 1570 cm−1) and of the CNFs (D, G, and 2D bands) are indicated in Figure 3.
Biological Properties
The biological properties of the AgNPs and CNFs in terms of time-dependent cytotoxicity, proliferation, and gene expression in human keratinocyte cells are described in the following subsections.
Cytotoxicity Assay
The cytotoxicity of different concentrations, ranging from 0 (control) to 800 µg/mL, of AgNPs and CNFs in HaCaT cells was studied at different exposure times (3, 12, and 24 h). The results showed that none of the AgNP and CNF concentrations was cytotoxic for the HaCaT cell line at 3 h of exposure (Figure 4; results represented as a % of the control group; data shown as the mean ± standard deviation of six replicates; the ANOVA results of the different AgNP or CNF concentrations with respect to the control are indicated in the plot; n.s.: not significant). However, at the longer exposure time of 12 h, lower AgNP and CNF concentrations than those tolerated at 3 h were required to avoid cytotoxic effects (Figure 5). The results show a negative correlation between concentration and cell viability, indicating that the toxicity of these compounds is dose-dependent. The cytotoxicity assay results at 24 h of exposure in HaCaT cells also showed a non-cytotoxic concentration of ≤150 µg/mL for CNFs and AgNPs (Figure 6).
Figure 6. Cytotoxicity assay in human keratinocyte (HaCaT) cells after 24 h of exposure to AgNPs (a) or CNFs (b) at different concentrations ranging from 0 (control) to 800 µg/mL. Cytotoxicity was evaluated by the MTT assay. Results are represented as a % of the control group. Data are given as the mean ± standard deviation of six replicates. The ANOVA results of the different AgNP or CNF concentrations with respect to the control are indicated in the plot. *** p < 0.001; **** p < 0.0001; n.s.: not significant. The limit of cell viability (70%) for the compounds to be considered non-cytotoxic is indicated.
From these results at 24 h of exposure, the mean effective concentrations (EC50) were determined for AgNPs and CNFs (Table 2).
Proliferation Assay
The proliferative activity of AgNPs and CNFs in the keratinocyte cell line was studied using two non-cytotoxic concentrations (10 and 20 µg/mL), based on the previous results of the cytotoxicity assay at 24 h (Figure 6), to avoid toxic effects when increasing the exposure time to 72 and 96 h (Figure 7). The results showed that 72 h was not long enough to induce cell proliferation. However, at 96 h, AgNPs at both concentrations (20 and 10 µg/mL), and CNFs only at 20 µg/mL, showed a statistically significant increase in cell growth.
Gene Expression
The effects of AgNPs and CNFs on the expression of thirteen genes are shown in Figure 8a (genes SOD1, CAT, MMP1, TGFB1, GPX1, FN1, and HAS2) and Figure 8b (genes LAMB1, LUM, CDH1, COL4A1, FBN, and VCAN) at two non-cytotoxic concentrations (20 and 40 µg/mL) in human keratinocyte cells after 24 h.
These results show that exposure to CNFs at 40 µg/mL produces gene overexpression in most of the studied genes (CAT, MMP1, TGFB1, GPX1, CDH1, COL4A1, and FBN), while AgNPs were only able to induce expression changes in two genes (SOD1 and MMP1).
Discussion
The TEM images show that these two nanomaterials present very different morphologies (Figure 1). CNFs are filamentous materials of micrometric length and an average nanometric diameter of 21.73 ± 9.59 nm, and their morphology is apparently similar to that of carbon nanotubes (CNTs). However, unlike CNTs, CNFs present disordered graphitic layers [17]. These AgNPs present smaller sizes and vary in form from spherical to ellipsoidal, with an average individual particle size of 110.10 ± 33.85 nm, but they appear in the form of several agglomerates. Compared with the individual particle sizes in the TEM images, the DLS results (Table 1) show larger size values for both CNFs and AgNPs. As expected, the particle size values depend on whether the nanofluid is prepared in water solution or DMEM [43]. The PdI values of the samples measured by this technique are also shown in Table 1 to provide a particle-aggregation parameter. Both AgNPs and CNFs show PdI values slightly higher in DMEM than in water, which could be attributed to higher CNF and AgNP aggregation in this medium and could explain their greater DLS size with respect to that measured in water. However, the PdI increased more for AgNPs than for CNFs, in good agreement with the greater DLS size increase found for AgNPs in DMEM. These large differences in particle aggregation in DMEM must be related to the very different morphology of each type of nanomaterial and their different surface charge or zeta potential (ζ), shown in Figure 2. Raman spectroscopy provides valuable nanostructural information on CBNs and metal nanoparticles [44][45][46]. The Raman spectrum of the CNFs showed the three typical bands (D, G, and 2D) at ~1350, 1595, and 2690 cm−1, respectively, and an ID/IG ratio of 1.17, typical of this type of CBN, which presents a higher degree of disorder than CNTs (Figure 3) [47]. The AgNP Raman spectrum showed two main vibrational modes with maxima at about 1380 and 1570 cm−1, as expected [46]. CNF and AgNP cytotoxicity was studied at concentrations ranging from 0 (control) to 800 µg/mL in HaCaT cells for different exposure times (3, 12, and 24 h). None of the concentrations of either nanomaterial was cytotoxic for the HaCaT cell line at 3 h of exposure (see Figure 4). As expected, both CNFs and AgNPs showed a negative correlation with cell viability, indicating a dose-dependent toxicity of these compounds (Figure 5). Considering a limit of reduction to 70% of cell viability with respect to the control, both compounds showed similar cytotoxicity in human keratinocyte HaCaT cells, with a non-cytotoxic concentration of ≤150 µg/mL after 12 h of exposure. The cytotoxicity assay at 24 h of exposure also showed similar results for both compounds (see Figure 6), again with a non-cytotoxic concentration of ≤150 µg/mL. CNFs have shown higher cytotoxicity than single-wall carbon nanotubes [13]. However, in this study CNFs were shown to be much less cytotoxic in human keratinocytes than multi-layer graphene oxide [48].
The mean effective concentrations (EC50) of AgNPs and CNFs at 24 h of exposure also show similar values (Table 2). It is important to emphasize that CNFs are CBNs with much lower toxicity than other CBNs such as multi-layer graphene oxide (EC50 of 4.087 µg/mL at 24 h) and few-layer graphene oxide (EC50 of 62.8 µg/mL at 24 h) in human keratinocyte HaCaT cells. Our cytotoxicity results for the AgNPs with spherical to ellipsoidal shapes are in good agreement with previous results on AgNPs in the form of plates (ζ-potential −37.5 mV) and spheres (ζ-potential −30.4 mV) in human HaCaT keratinocytes in vitro, which showed IC50 values of 78.65 µg/mL (95% CI 63.88, 96.83) and 1004 µg/mL (95% CI 286.8, 3516) at 24 h, respectively [33]. AgNPs are more promising than silver cations because they are less cytotoxic (e.g., the IC50 of silver nitrate is 7.85 µg/mL (95% CI 1.49, 14.69)). Both nanocompounds showed a slight but statistically significant proliferative activity at 96 h of exposure. A shorter exposure time (72 h) was not long enough to induce cell proliferation, as has been found for other nanomaterials such as GO [48]. CNFs at 10 µg/mL were not at a high enough concentration to induce any proliferative effect. The effect of the two different nanomaterials on the expression of thirteen genes (Table A1 in Appendix A) involved in the activation or inhibition of different metabolic routes, such as oxidative stress, the extracellular matrix, and the synthesis of proteins related to the maintenance and repair of different tissues, was analyzed in human keratinocyte cells. CNFs were able to up-regulate eight genes (CAT, MMP1, TGFB1, GPX1, FN1, CDH1, COL4A1, and FBN): four genes (MMP1, TGFB1, FN1, and CDH1) at a concentration of 20 µg/mL and seven genes (CAT, MMP1, TGFB1, GPX1, CDH1, COL4A1, and FBN) at 40 µg/mL. Exposure of HaCaT cells to CNFs increased the expression of FN1, which regulates cell adhesion and migration [49], and TGFB1, involved in cell proliferation, differentiation, and growth [50,51]. These results are in agreement with those reported previously on the enhancement of the proliferative activity and cell adhesion of canine adipose-derived mesenchymal stem cells with the addition of CNFs to poly(3-hydroxybutyrate-co-3-hydroxyvalerate) [18]. The expression of the catalase (CAT) and glutathione peroxidase 1 (GPX1) genes, which encode enzymes involved in the neutralization of hydrogen peroxide with an antioxidant effect, was also up-regulated in HaCaT cells after 24 h of exposure to CNFs. The activation of these two genes has been reported to be associated with defense mechanisms against stressors in human skin cells during photoaging, as protective oxidative activity against UVA radiation [52][53][54][55]. The CNFs also increased the expression of genes involved in the synthesis of glycoproteins such as cadherin 1 (CDH1) and fibrillin (FBN), which are essential in the morphogenesis and development of normal tissue by connecting cells with each other [56,57]. The COL4A1 gene, which is abundant in the dermis, and the MMP1 gene, which is involved in the breakdown of the extracellular matrix in normal physiological processes, were also up-regulated after exposure of keratinocytes to CNFs for 24 h. The up-regulation of four of these eight genes (FN1, TGFB1, CAT, and CDH1) has also been observed with other CBNs such as multilayer GO [48].
However, the increase in the expression of the GPX1 gene was not observed with multilayer GO, probably due to the low non-cytotoxic concentration used (0.05 µg/mL) in that study, because, as found in the present study, this increase only appeared at a much higher CNF concentration (40 µg/mL). Only one (MMP1) of these eight genes was up-regulated by exposing HaCaT cells to AgNPs, and only at the highest non-cytotoxic concentration (40 µg/mL). Nonetheless, AgNPs were able to increase the expression of SOD1, which encodes an isozyme that destroys free superoxide radicals in the body by binding copper and zinc ions [58]. AgNP toxicity has also been evaluated in NIH 3T3 mouse embryo fibroblasts, where it produced cell damage via the generation of ROS [26]. ROS may contribute to tissue damage and participate in cellular events such as signal transduction, proliferative response, and protein redox regulation, and can modulate the expression of numerous genes [26,59,60]. A study of the cytotoxic mechanisms of AgNPs in keratinocytes showed them to be related to oxidative damage and inflammation, as shown by increased concentrations of reactive oxygen species (ROS), malondialdehyde (MDA), interleukin-1 alpha, interleukin-6, and interleukin-8 [61]. A nuclear magnetic resonance metabolomics study of a human HaCaT epidermal keratinocyte line exposed for 48 h to 30 nm citrate-stabilized spherical AgNPs (10 and 40 µg/mL) showed up-regulated glutathione-based antioxidant protection, increased glutaminolysis, down-regulated tricarboxylic acid (TCA) cycle activity, energy depletion, and cell membrane modification [62]. However, the damage produced by the AgNPs used in the current study was not similar to that produced by smaller AgNPs, since AgNP cytotoxicity is influenced by variations in size, shape, and surface electric charge [63]. Cellular uptake and generation of ROS were also found in murine RAW264.7 macrophages after exposure to CNFs, but not after exposure to SWCNTs, another type of CBN [13]. CBN toxicity has been reported to depend on dimension and composition [14].
Conclusions
The results of this work can be summarized as follows: (i) AgNPs are smaller and present a very different morphology to the filamentous CNF carbon-based materials; (ii) AgNPs had a more negative zeta potential (ζ) from pH 5 to 12 than CNFs and similar time-dependent cytotoxicity (EC50 of 608.1 µg/mL for CNFs and 581.9 µg/mL for AgNPs at 24 h); (iii) both nanomaterials showed similar proliferative activity at 20 µg/mL after 96 h in the HaCaT cells; (iv) this study provides the first comparison of time-dependent cytotoxicity, proliferation, and gene expression in human keratinocyte HaCaT cells between CNFs and AgNPs; (v) AgNPs were capable of up-regulating only two genes (SOD1 and MMP1) out of the thirteen genes analyzed, whereas CNFs were able to up-regulate eight genes (FN1, MMP1, CAT, CDH1, COL4A1, FBN, GPX1, and TGFB1), which confer many important properties required for biomedical applications, such as defense mechanisms against oxidative stress and tissue maintenance and repair. These results thus show great promise, as they open up the possibility of using antimicrobial CNF nanomaterials in a broad range of biomedical applications, including tissue engineering and wound healing. Conflicts of Interest: The authors declare no conflict of interest. Table A1. Gene symbol, gene name, oligo sequences, and function of specific genes used in RT-qPCR measurements.
SOD1: the protein encoded by this gene binds copper and zinc ions and is one of two isozymes responsible for destroying free superoxide radicals in the body; oligo sequence 5′-TCCACCTTTGCCCAAGTCA-3′.
Intelligent Locking System using Deep Learning for Autonomous Vehicle in Internet of Things
Nowadays, we use modern locking-system applications to lock and unlock our vehicles. The most common methods are using a key to unlock the car from outside, pressing the unlock button inside the car to unlock the door, and, in many vehicles, using a keyless-entry remote control. However, none of these locking systems is user friendly in impaired situations, for example when the user's hands are full, the key is lost or not brought along, or in special cases such as disabled drivers. Hence, we propose a new way to unlock the vehicle by using face recognition. Face recognition is one of the key components of future intelligent vehicle applications in the Autonomous Vehicle (AV) and is crucial for the next generation of AV to promote user convenience. This paper proposes a locking system for AV using a deep learning approach that adopts the face recognition technique. It aims to design and implement the face recognition procedure using an image dataset consisting of training, validation, and test folders. The methodology used in this paper is the Convolutional Neural Network (CNN), which we programmed in Python on Google Colab. We created two different folders to test whether the methodology is capable of recognizing different faces. Finally, after training on the dataset, testing was conducted, and the results show that the trained model was successfully implemented. The model predicts accurate outputs and gives significant performance. The dataset consists of every face angle from the front, right (30-45 degrees), and left (30-45 degrees). Keywords—Face recognition; deep learning; internet of things; convolution neural networks
Everything else encountered in daily life contains sensors. Sensors or devices in an IoT system collect and send data [3] from the surrounding environment on the state of the device's operation. As a data transmission channel, a gateway or link is necessary; the gateway's job is to make communication and data sharing between devices easier. As a connectivity layer for physical objects, IoT works as a bridge for all devices, transferring their data in a common language to connect with various sensors or devices. Basically, sensors act as data suppliers to the IoT platform, and the data are gathered from multiple sources. Moreover, the raw data need to be analyzed [3] before useful information can be extracted. In the end, processes can be automated and efficiency enhanced and improved further by integrating the data with other devices. Furthermore, IoT platform components such as storage, actuation, sensing, enhanced services, and communication technologies are very important to gather and analyze data [4] from smart infrastructure. On the other hand, the IoT is changing our way of life and transforming how we interact with technology [4], and it is driving the world to become a better place. Human lifestyles have been impacted in certain ways [4] by how people behave with all their gadgets (things), in step with the growing IoT revolution. This revolution, as described in [5], provides and guarantees the capacity for seamless interaction when transmitting and sharing data across a network without needing any human-to-computer communication. The cloud, which functions as a platform for collecting, storing, managing, and analyzing real-time data, is a major component of IoT.
Analytics will play a role in transforming analogue data from billions of devices into meaningful information that can subsequently be utilized for thorough analysis once the cloud handles the data. Data from devices and sensors are converted into a format that is simple to read and process. Artificial intelligence (AI) is becoming more prevalent in IoT applications and deployments [6]. John McCarthy was the first to present AI, in 1956, and he held that AI involves developing a machine that can truly mimic human intellect [6,7]. Simply put, AI is designed so that a machine can replicate human behavior and perform tasks from the easiest to the hardest. The idea of AI is to replicate human cognitive processes. Developers and researchers expect AI to go as far as imitating human processes, including perception, reasoning, and learning [6]. In a number of situations, AI systems outperform humans by a large margin [8]. AI has defeated humans in numerous computer games, including through a world-champion chess program and against the top professional poker players in the world [9]. There are two types of AI: weak and strong [10]. A system that performs only a single task is known as weak AI, while strong AI refers to more sophisticated and difficult systems. Examples of weak AI are video games and personal assistants, the most famous personal assistant being Apple's Siri [11]. Computer games are another example of weak AI. On the other hand, there are many examples of strong AI nowadays, such as hospital operating rooms and self-driving automobiles. These technologies are capable of solving problems without human intervention [12] because they have been trained beforehand to deal with the circumstances. Previous AI standards are becoming obsolete as technology develops [12,13]. Machines that perform fundamental calculations or read text by applying optical character recognition were formerly regarded as containing AI; these operations are now considered standard computer functions. AI is continuously being enhanced [13] to benefit a wide range of businesses. Mathematics, psychology, linguistics, and computer science are examples of the multidisciplinary methods used to wire machines [14]. We can even lock and unlock Autonomous Vehicles (AV) with our own face, utilizing a deep learning approach with facial recognition, thanks to the advancement of AI models [15] that connect with IoT. Furthermore, the AV idea is now at the forefront of the automotive industry's future security [16]. With the progress of technology, AV have the potential to reduce accidents, increase accessibility to transportation, especially for elderly persons, provide stress-free parking, and provide high-end security, among other benefits [16,17]. However, technological advancements can also have downsides for AV users. Sensors, for example, may malfunction, giving a hacker an opening to steal personal data from an AV user [17]. In Section II, we go into AV in further detail. Fig. 1 shows how the data gathered from AV devices or sensors go through an analysis process before being transferred to the cloud. Basically, the data are gathered at the edge after being received from the AV; these data need to go through pre-processing and decision-making in the edge node.
Then the data are transferred to the cloud by the edge node after being analyzed locally by the IoT sensor [18]. The aim of this process is to support less time-sensitive decision-making and offline global processing. Based on the diagram, road accidents can be prevented by using obstacle recognition. These time-sensitive choices can avoid crashes in a shorter amount of time, as illustrated in the edge node above [18]. To improve the driving experience, the cloud provides a platform for analyzing data about traffic, roads, and driving habits. The edge node, as illustrated above, shows that the AI models will be actively updated in line with consumer needs, regulations, policies, and applicable laws. Moreover, at this node the amount of data sent to the cloud is smaller than the data generated in the AVs, because the data are pre-processed, filtered, and cleaned before proceeding to the cloud; by using this method, cost and bandwidth can be reduced. When it comes to locking security in AV, the most common method is using a key to unlock the car from outside, pressing the unlock button inside the car to unlock the door, or, in many vehicles, using a keyless-entry remote control. However, none of these locking systems is user friendly in impaired situations, for example when the user's hands are full, the key is lost or not brought along, or in special cases such as disabled drivers. Hence, we propose a new way to unlock the vehicle using face recognition. Face recognition is one of the key components of future intelligent vehicle applications in the Autonomous Vehicle (AV) and is crucial for the next generation of AV to promote user convenience. This paper proposes a locking system for AV using a deep learning approach that adopts the face recognition technique. In the remainder of this paper, Section II contains a brief history of AV, including the superiority of AV, the challenges of AV, and solutions to those challenges; Section III discusses the components of machine learning, covering ten sub-components; Section IV discusses the methodology used in this research, from data collection to implementation; Section V presents the results and discussion; Section VI concludes the research; Section VII discusses future work; and Section VIII contains the acknowledgements. A study from [16] mentioned that AV were introduced in the 1980s and that research on AV was funded by the Defense Advanced Research Projects Agency (DARPA) [16,19]. Thanks to AV, the innovation of transport systems, combining sensors and software for control, can not only reduce time, money, and environmental impact but also improve safety, increase capacity, and minimize traffic congestion [20]. Through the advancement of technologies in these modern days, AV, also known as driverless vehicles, have evolved the ability to sense their surroundings, perform significant functions, and operate by themselves without interference by humans. Automation is generally divided into six levels, from level 0 to level 5, and each level represents the operation-control capability, with level 0 having the least automated control and level 5 the most.
At the lowest level, level 0, the system has no control of the operation, and the whole process of driving must be done by the human. At level 1, control has been improved in terms of the steering and braking of the vehicle with the support of an Advanced Driver Assistance System (ADAS). As the level of automation increases, the vehicle becomes more advanced. As stated in [16], level 2 systems are capable of controlling the steering and braking using the ADAS; however, drivers need to stay focused and pay attention to the environment along the journey. Level 3 is more advanced: the driver gives full control to the vehicle through the Automated Driving System (ADS). This system is capable of controlling all parts of the driving task under certain conditions; however, the human driver takes control of the vehicle when requested by the ADS and executes the necessary tasks in the remaining conditions. The ADS plays an important role in the AV's system: at level 4, the system is capable of controlling and performing all tasks without any human intervention, including supervision by the human. The last and most advanced level is level 5, where the AV is capable not only of performing all driving tasks but also of communicating with other devices [16], including traffic lights, signage, and the road environment; performing this function requires 5G connectivity. Together with that, vehicle speed is also one of the important elements of the AV. To keep the AV at a safe speed and distance, Adaptive Cruise Control (ACC) is used. This system uses sensors to obtain distance information and has the vehicle perform tasks when the sensors send a signal, such as braking when a vehicle ahead or an imminent hazard is sensed or predicted. These sensors give the information to the actuators in the vehicle, which then carry out control actions such as braking, acceleration, and steering [16]. Furthermore, a high-level AV is able to control the automated speed in response to signals from traffic lights and non-vehicular activities.
A. Superiority of AV
Statistics state that vehicle crashes usually happen because of human error; indeed, 90% of fatal vehicle accidents are due to human failure [21]. Hence, AV technologies have the potential to reduce the death statistics attributable to human error. Thus, driverless cars are a future technology needed to scale down the deaths and injuries from car collisions. Crashes often stem from interruptions of the driver's focus [21]. On the other hand, the House Energy and Commerce Committee website claims that traffic deaths can be reduced by up to 90%, saving up to 30,000 people yearly, by using driverless vehicles, also known as self-driving cars. Apart from that, a report from the American Society of Civil Engineers (ASCE) states that Americans cannot avoid wasting time in traffic every day [22]; surprisingly, they spend 6.9 billion hours doing so. Furthermore, AV brings many benefits to people, especially to senior citizens and disabled drivers, allowing them to handle vehicles safely. Beyond reducing the number of accidents, the idea of AV is to help people in these groups to drive effortlessly.
AV caters to and enables more people to drive independently without worrying about safety issues [23]. Moreover, a study by [24] states that accommodating AV technologies will make life much easier and more effortless for going to work, attending meetings with clients, and visiting the doctor, especially for senior citizens and people with disabilities. Other than that, according to [23], many benefits will be gained from AV in terms of travel time, commuting and congestion time, and reduced fuel consumption, which is a clear benefit to citizens, especially those who live in crowded cities and towns with congested roads. A country can save up to a trillion dollars by using AV and can also reduce manpower and law enforcement requirements, saving more money [25]. Safety and security issues are an important part of everyday life and are needed in many areas, including modern transportation. Moreover, these days the AV concept is leading the future security of the vehicle industry [20]. AV has a Light Detection and Ranging (LiDAR) sensor [16,26,27]. It is capable of avoiding obstacles in an unknown environment and able to classify dynamic objects on urban roads into cars, pedestrians, bicyclists, and background [22]. LiDAR divides into two types: non-scanning and scanning LiDAR. In addition, scanning LiDAR comes with different features; other than single scanning, there is also a type called multi-line scanning LiDAR. There is also a non-scanning LiDAR type [23] that uses 3D-flash LiDAR. The next feature in AV is Radio Detection and Ranging (radar) [16,26,27]. Radars have proven effective on AV in the presence of fog and dust [25]. Besides that, radar is also designed to aid Off-road Light Autonomous Vehicle (OLAV) platforms in classification, map-reading, and detection [27]. Furthermore, LiDAR and cameras are indeed very popular sensors, but radar has advantages over them in terms of speed-measurement capability, cost, and target range [27]. AV also has image-sensor features such as rear-view cameras [27].
With the many advanced technologies in the AV doesn't mean the vehicle can't be hacked by hackers. Hackers can hack AV systems easily because the system still has many vulnerabilities [16] as this is new to the world and the hackers definitely will steal the personal data through the AV. The next drawback of AV is dysfunctional sensors [31]. Sensor failures often happened in AV [31], as an example the locking system. The locking system is a very crucial part in automated vehicles safety [32]. The AV user is very concerned about the locking system. AV provides the modern locking system to the user by maximizing the security and safety to the vehicle. Safety and vehicles cannot be apart. Safety is a very important element for the vehicle [16]. Basically, the common safety element in vehicles is like the lighting will turn on when the doors are unlocked, and the AV gives notification to the drivers by integrated control of the lighting. However, AV locking systems also will have problems in the modern lifestyle, when the user's hand is full, lost the key, did not bring the key, the key can be duplicated by others or even conveniently suited for a special case like disable driver [33]. All of those factors demand a new locking system for AV users. C. Solution As explained in the previous section, AI has been widely used in this world. Basically, AI is a technique that enables a machine to mimic human behavior. As example, an ability to sense, reason, engage and learn. AI operates autonomously and uses a variety of methods through data learning processes by machines [7]. By the recent advances in AI, many impacted areas have been affected by using AI techniques, as example voice recognition, Natural Language Processing (NLP), computer vision algorithm, robotic and motion, planning and optimization, and knowledge capture [34,35]. When we go deep in AI, we will find that AI is supported by an algorithm model known as Machine Learning (ML) and inside the ML there is another algorithm model called Deep Learning (DL) [36], [37]. Fig. 2 shows AI and the subfield. ML is used to manage from the raw data, when ML gets the data, they need to be trained and this technique can be achieved by using specific algorithms. Basically, AI has a lot of methods that make machines operate autonomously through provided data [7], [37]. ML is created to be an independent computer program by learning itself from the data and ML algorithms is divided in a few categories [37], known as reinforcement, supervised, unsupervised and semi-supervised learning. This type of learning is illustrated in Fig. 3 below. Nowadays AI innovation is leads by ML techniques and nominal by Deep Neural Networks (DNN) [32,33,34,37,38] and this model widely used as black boxes. Currently Deep Learning (DL) is a very popular algorithm and has been widely used by researchers and developers in various fields. The idea of DL is to imitate human brain function into machines. The most popular algorithms in DL are; a) Long Short-Term Memory Networks Deep Boltzmann Machine (DBM) and f) Stacked Auto-Encoders. Moreover, the DL is focusing on the more complex and larger dataset [7,11,38] such as video, audio, text and image. As mentioned above, the human brain acts as a major part for the evolution of DL. The design structure and frameworks of DL exactly look alike and function well as the human brain which is capable of differentiating patterns and can classify diverse types of data [38]. 
The evolution of DL has made the CNN a popular method in this field, including Face Recognition (FR) technology based on CNN [39,40]. FR is widely used nowadays in many fields, including unlocking mobile devices, and the FR process includes detection, alignment, feature extraction, and the recognition task [11]. DL methods are able to support huge datasets of faces and to learn rich and compact representations of faces. In 2012, the ImageNet Large Scale Visual Recognition Challenge became famous because of the CNN research initiated by Alex Krizhevsky [41]; after that competition, his name became widely known. By using multiple processing layers and levels of feature extraction, DL is capable of learning representations of data [42]. Fully connected layers, normalization layers, convolutional layers, and pooling layers are a few examples of the hidden layers [38] in the CNN algorithm. With DL, we can produce a new locking system for AV using the FR technique [43]. AI researchers have begun to use DL as a tool for training on facial expressions [44]. DL is known to be a powerful tool in the automation industry, and FR is one of its applications. FR is widely used in the military, the finance industry, daily life, and public security [45,46], and FR methods are divided into two classes: one-to-many augmentation and many-to-one normalization [46]. This method can solve the locking-system problem: with the advancement of AI, the locking problems discussed in the previous section can be solved. A survey done in [47] finds that the FR technique will become useful for AV users in terms of security, especially for locking systems. This technique requires a dataset to train on before the technique can be shown to meet the expected result. Moreover, with the help of IoT, the user will be notified [48] about system failures. In traditional methods, the system recognizes the human face with one or two layers, such as filtering responses. With the emergence of DL, the landscape and framework of the FR technique have been changed [49] and reshaped in algorithm designs, evaluation protocols, application scenarios, and dataset training. FR needs three modules to run the system: the first is a face detector, needed to locate faces in images or videos; the next module is the face landmark detector; and the last is the FR module itself, together with face anti-spoofing [50]. There are two categories of FR [51]: face verification and face identification. In this study, we use convolutional neural networks (CNN) trained on ground-truth data. This method is used for unlocking an AV with the user's face; the model needs to be trained and validated on its own respective data, and we elaborate more on the methodology in Section III below.
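To make the training pipeline just described concrete, below is a minimal sketch of a CNN face classifier in Python with TensorFlow/Keras, the libraries named in this paper. The dataset/train, dataset/validation, and dataset/test folder paths, the image size, and the layer sizes are illustrative assumptions, not the exact configuration used in this work:

```python
# Minimal sketch of a CNN face classifier for the locking system.
# Folder layout, image size, and layer sizes are assumptions for illustration.
import tensorflow as tf

IMG_SIZE = (128, 128)

def load_split(path):
    # Labels are inferred from sub-folder names (one class per authorized face).
    return tf.keras.utils.image_dataset_from_directory(
        path, image_size=IMG_SIZE, batch_size=32)

train_ds = load_split("dataset/train")        # hypothetical folder layout
val_ds = load_split("dataset/validation")
test_ds = load_split("dataset/test")
num_classes = len(train_ds.class_names)

model = tf.keras.Sequential([
    tf.keras.Input(shape=IMG_SIZE + (3,)),
    tf.keras.layers.Rescaling(1.0 / 255),              # normalize pixel values
    tf.keras.layers.Conv2D(32, 3, activation="relu"),  # convolutional layer
    tf.keras.layers.MaxPooling2D(),                    # pooling layer
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),     # fully connected layer
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)

loss, acc = model.evaluate(test_ds)
print(f"test accuracy: {acc:.3f}")
```

In a deployed locking system, the unlock decision would additionally apply a confidence threshold to the softmax output so that an unenrolled face keeps the vehicle locked.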
Hence this new model of ML is definitely different from traditional approaches [54]: current ML is designed to learn from pattern recognition and can learn without being programmed for specific tasks, learning from data instead. To be an independent model, ML's interactive aspect is crucial because ML works with new data: the more ML learns about the data, the smarter it becomes without any human assistance [55], producing reliable results and repeatable decisions. Normally, to understand data without human intervention, we rely on four kinds of algorithms under ML: a) semi-supervised learning, b) reinforcement learning, c) supervised learning, and d) unsupervised learning [36]. Fig. 3 shows these ML types and their category models.

A. Supervised Learning

Supervised learning is divided into two kinds of outcomes, regression and classification. The goal of regression is to forecast from a given training sample set, for example house pricing, weather forecasting, and market forecasting, whereas the goal of classification is to identify patterns [36], for example fraud detection, image classification, and diagnostics. In supervised learning there are three main models [36]: Classic Neural Networks, also known as the multilayer perceptron (MLP); Convolutional Neural Networks (CNNs); and Recurrent Neural Networks (RNNs). Fig. 4 shows a typical workflow for classification with supervised learning algorithms. In summary, three steps are needed to classify data before generating the expected output. First, the raw data are cleaned through an extraction process to obtain quality, useful data. Second, the useful data, including labels, are sent to the training stage, where an ML algorithm fits a good model. Third, to improve the model's accuracy, the model is adjusted based on an evaluation step, which gives insight into feature extraction and the learning stage. Until the desired accuracy is reached, the data go through the training process [56] over and over again; once done, new data can be predicted easily.

B. Multilayer Perceptron (MLP)

A simple algorithm that computes a binary classification is called a perceptron. Many real cases can be classified with this algorithm, which assigns inputs to categories, for example cat versus not-cat, or fraud versus not-fraud. An MLP is composed of more than one perceptron [57]. An MLP involves several layers with different roles; the common layers are known as the input, hidden, and output layers. The input layer's task is to receive the signal; the hidden layer is the core of the MLP, because it is its computational engine; and the output layer is the layer that produces the prediction for the input [58]. A supervised learning technique called backpropagation is used to train every node in an MLP, and every node except the input nodes uses a nonlinear activation function. The design of these layers is illustrated in Fig. 5. To minimize error, MLP training adjusts parameters such as weights and biases; the MLP then learns the correlation between input and output. It is therefore not surprising that an MLP is capable of approximating the XOR operator and other nonlinear functions very well [58].
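To make that last claim concrete, the following is a minimal sketch, not taken from the paper, of a small MLP learning the XOR operator with Keras; the hidden-layer width, epoch count, and optimizer settings are illustrative assumptions:

```python
# Minimal illustrative sketch (not from the paper): an MLP learning XOR.
# Assumptions: 8 hidden units, 1000 epochs, Adam with default settings.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# The four XOR input/output pairs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype="float32")
y = np.array([[0], [1], [1], [0]], dtype="float32")

# One hidden layer with a nonlinear activation suffices for XOR.
model = keras.Sequential([
    layers.Dense(8, activation="relu", input_shape=(2,)),  # hidden layer
    layers.Dense(1, activation="sigmoid"),                 # output layer
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X, y, epochs=1000, verbose=0)  # backpropagation adjusts weights and biases

print(model.predict(X).round().ravel())  # expected: [0. 1. 1. 0.]
```

A single-layer perceptron cannot separate XOR, which is precisely why the hidden layer with a nonlinear activation is needed here.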
These, then, are the advantages of the MLP. However, MLPs become inefficient when the number of parameters grows very large, causing redundancy in high dimensions; moreover, they disregard spatial information because they take flattened vectors as inputs [58].

C. CNN

As discussed in the previous section, CNNs play a paramount role in identifying and classifying images. Many researchers use CNNs because of how well this algorithm classifies images, such as identifying objects, individuals, tumors, street signs, faces, and many other kinds of visual data; these algorithms can also perform image classification tasks such as photo search [34,58]. A CNN is built from three kinds of layers, all well known among researchers and developers: convolutional, pooling, and fully connected layers. The function of the first kind, the convolutional layer, is to obtain the many attributes and diverse features contained in the input images; the input images are filtered to a specific size by a mathematical operation. This layer is then followed by the pooling layer [34,58]: decreasing the connections between layers reduces the computational cost by shrinking the size of the convolved features. The last kind is the fully connected layer. Located before the output layer, it forms another layer of the CNN architecture; it comes with weights, biases, and neuron elements, all of which are used to connect different layers [34]. Fig. 6 shows a CNN architecture consisting of an input layer, 2 convolutional layers, 2 max-pooling layers, a fully connected layer, and an output layer. Every ML algorithm has its own advantages. For CNNs, the advantages lie in advances in Computer Vision (CV): a diversity of CV algorithms is used in today's technologies, including applications for the visually impaired, security, drones, medical diagnosis, and driverless vehicles [59]. CNNs are also widely used in business-oriented tasks [58], such as making natural-language processing available for analog and manuscript documents, where images of symbols are transcribed, and digitizing text through Optical Character Recognition (OCR). However, CNNs are naturally slower because of operations such as max-pooling, and training becomes slower still when a CNN has several layers and the computer lacks a good GPU. In addition, according to [59], CNNs require a large dataset to process and train the neural network.

D. Recurrent Neural Network (RNN)

An Artificial Neural Network (ANN) with internal loops is called an RNN [60], and it is a powerful technique. The interesting part about RNNs is that the technique is used every day, for example in image recognition that can describe a picture's content, speech recognition, language translation, stock prediction, and driverless vehicles. The RNN is indeed a powerful algorithm for prediction because it can divide text and words into sequences; it is used especially for modelling sequence data, which appears in many forms such as text and audio. An RNN predicts data through a concept of sequential memory. For example, as humans we can easily recite the alphabet in order because we have memorized it.
However, reciting the alphabet backwards is quite hard for us, because we have not memorized it that way. The human brain recognizes sequences of patterns [60] by using a sequential-memory mechanism. An RNN replicates this concept of sequential memory using three layers: an input, a hidden, and an output layer. Inside the RNN is a looping mechanism that passes previous information forward; this loop acts as an expressway for information to flow from one step to the next [60]. The information from previous inputs is kept in the hidden state. As an example, with an RNN we can build a chatbot capable of classifying intentions from the user's typed text [60]. As mentioned above, the hidden state is a representation of previous inputs and is modified at each step; the modified hidden state contains data from all the earlier steps, and the loop continues until there are no more words, after which the final hidden state feeds the output layer, which gives the forecast. The forward-pass control flow of an RNN can be implemented with a for loop [61], so the RNN architecture workflow is quite simple to understand: the present cell takes in the previous information before producing its output. Referring to Fig. 7, a processed word at step t also becomes the input once the word has been processed. All text becomes available to process once the sequence dimensions are reduced to a certain value: a sequence must meet the size requirement, otherwise it is padded up to the specified value, while any excess is cut off if the sequence is longer than the specified value [62]. Training a neural network consists of three crucial steps. First, a prediction is made in a forward pass; then, using a loss function, the network compares the output prediction to the ground truth. When the loss function outputs a large error value, the network is performing badly. Finally, the network uses the error value to compute the gradients for every node in backpropagation [61]. Here, a gradient is a value that allows the network to learn by adjusting its internal weights: the larger the gradient, the larger the adjustment. However, the hidden state holds only a short-term memory. This limitation, common to other neural-network architectures, exists because of the infamous vanishing-gradient problem, which arises because the information kept in previous steps carries error [61]; during the training and optimization of a neural network, this problem is a normal feature of backpropagation algorithms. The computation of an RNN model is also slow, a drawback that makes training difficult: with saturating activation functions the process becomes very exhausting over long sequences [63], and hence exploding or vanishing gradients occur.

E. Unsupervised Learning

Unsupervised learning differs from supervised learning in many ways: the algorithm trains on samples without training labels. Unsupervised learning is divided into two kinds of outcomes, clustering and dimensionality reduction.
The clustering outcome uses algorithms to find consistent patterns: similar data points are clustered together and dissimilar points end up in different clusters, with applications such as recommender systems, targeted marketing, and customer segmentation. Dimensionality-reduction outcomes, on the other hand, aim to find suitable structure and patterns in the data [58,61], with applications such as big-data visualization, structure discovery, and feature elicitation. Unsupervised models likewise come in three different kinds [58]: the Self-Organizing Map (SOM), the Boltzmann Machine model, and the AutoEncoder model.

F. SOM

A SOM is a neural network based on a dimensionality-reduction algorithm, commonly using a two-dimensional discretized pattern to represent a high-dimensional dataset [58]. Dimensionality is decreased while retaining the data's topology in the primary feature space: this type of neural network produces a low-dimensional discrete representation of the input space of the training samples, called a map [34]. In addition, the technique itself performs the dimensionality reduction. Similarities in the data can be observed easily through the dimensionality reduction and grid clustering, which makes this type of neural network easy to understand and clearly explainable [64]. However, clustering inputs requires sufficient neuron weights: if the weights are insufficient, the map will produce inaccurate results. The SOM model is illustrated in Fig. 8.

G. Boltzmann Machines Model

Unlike other neural networks, the Boltzmann model has only two kinds of nodes, called visible and hidden nodes, and no output node [63]; this gives the model its non-deterministic character. Two kinds of computational problems can be addressed with this model: optimization problems and search problems can be solved by fixing the weights on the connections, and the method also serves as a cost function [63]. The model is illustrated in Fig. 9. The main disadvantage is that Boltzmann learning is significantly slower than backpropagation [63]. The model also runs into numerous practical problems, for example weight adjustment, the time needed to collect the statistics for calculating probabilities, how many weights change at a time, the difficulty of adjusting the temperature during simulated annealing, and the difficulty of deciding when the network has reached the equilibrium temperature [63].

H. AutoEncoders Model

To learn an encoding of a data set, the autoencoder model is trained in an unsupervised way. To learn an encoding focused on dimensionality reduction, the autoencoder trains the network to pass over the signal noise [65]. Learning to encode a data set is the main purpose of an autoencoder: in general, the autoencoder reduces dimensionality so as to neglect the noise in the signal through the network's training. In addition, autoencoders provide the user with a model derived from the data rather than from predefined filters, that is, a filter that may fit the user's data better [65]. However, Generative Adversarial Networks are much more efficient than autoencoders at recreating an image.
In addition, autoencoder reconstructions become blurry as the complexity of the images increases [65]. This is illustrated in Fig. 10.

I. Semi-Supervised Learning

Semi-supervised learning falls between unsupervised and supervised learning. In practice, hiring expert annotators costs a lot of money because their skills must be paid for, so the cost of labels is high [66]. Hence, semi-supervised algorithms are often the best methods for building a model when labeled data are scarce. To put it clearly, even unlabeled data carry crucial information about the group parameters.

J. Reinforcement Learning

Reinforcement Learning is the branch of Machine Learning in which an agent, such as a robot, learns how to behave in an environment by evaluating the results of its actions. The agent receives a reward in the form of points based on how correctly it responds in each situation, and these points encourage the agent to take further actions. The process is modeled as a Markov Decision Process (MDP). Data classification as such is not used in reinforcement learning [67], because the agent learns from trial and error, supported by the reward-and-punishment concept of the MDP.

IV. METHODOLOGY

From the algorithms reviewed in the sections above, we chose the CNN. Prior work has found that the most popular neural-network technique for working with image data is the CNN. In addition, CNNs are very good at extracting image features, which is what has made this type of neural network famous; since our research is based on image data, the CNN is the best algorithm for building our model. Because our work involves image and pattern recognition, the CNN is the best choice for our research, as this network can solve our problems with techniques that other neural networks do not have. The interesting part of this paper is the staging of our methodology. First, a pre-processing step is used to pull in the test-data images before the classes of these images are predicted with the trained model. There are seven steps to constructing this model: first, set up Google Colab; second, import the libraries; third, load and pre-process the data (about three minutes); fourth, create a validation set; fifth, define the model structure (about one minute); sixth, train the model (about five minutes); and seventh, make predictions (about one minute). After that we can inspect the model's predictions and analyze its performance using model.predict(). In this study we use our own dataset as the training set. To organize our image data, we prepared two folders: set A, named the known folder, and set B, named the unknown folder. Within each set we created three folders: a train folder, a test folder, and a validation folder. The test sets contain all the test images but no labels, because the training images are used to train our model while the testing-set images are predicted by labeling them. We then split the model-building process into four stages.
The first stage is loading and pre-processing the data, which takes 30% of the total time; the second is defining the model architecture, which takes 10%; the third is training the model, which takes half of the total time; and the last is estimating performance, which takes the remaining 10%. A validation set should be constructed before the test set is disclosed; this method is used to evaluate performance on unseen data. Training and validation should each be performed on their own data by subdividing the data set. All the parameters applied are listed in Table I. The model was trained several times using the parameters in the table, and the results are shared in the next section of this paper.

A. Stage 1: Loading and Pre-processing Data

To begin with, we need a dataset to train on. Data is the new gold: ML and DL are not magic, they need data to train on. As mentioned above, the first stage is loading and pre-processing the data, a very important step in any research. A good number of training examples helps determine the performance of the model, and the architecture of the model, together with the pattern of the data, shapes the creation of the validation set. In this study we use our own dataset. As shown in Table II, our data consist of 255 images representing two different classes, the known-face and unknown-face folders. Very little pre-processing is needed: given the small size of the images, the dataset is easy to load. First we import the necessary libraries; we use matplotlib, arrays, and various modules from TensorFlow and Keras. In this paper the data images are shrunk to one uniform training size of 150 x 150 pixels. Then we prepare the data: we load and import it, choosing which images to load with the load_img() function (although a negative impact occurs if the amount is huge). As stated, we set the images to 150 pixels wide and 150 pixels high. At this stage we also normalize the input data: our inputs are images whose pixel values lie between 0 and 255, so we normalize by dividing the image values by 255. To determine the number of neurons needed in the last layer, we first declare the data type as integer in the dataset and fix the number of classes; in our case we use the ImageDataGenerator(rescale = 1./255) command because the pixels are currently integer values (a code sketch of this pipeline appears below).

B. Stage 2: Defining the Model's Architecture

Defining the model architecture is a crucial step. The CNN model is designed at this stage, estimating the number of convolutional and hidden layers that we need, after which we define the format we will use for the model. We chose Keras, which offers a few different formats for creating models; the sequential format is very popular in Keras, and for these reasons we import it from Keras. A convolutional layer is used first in our model and runs on the input nodes. In Keras it is important to specify the number of filters, the filter size we want, the input shape, and the activation and padding we need: in our case we use 64 filters of dimension 3 x 3. The activation we use is ReLU, since it is the most common activation in DL; the activation and pooling can also be strung together.
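Before continuing with the architecture, the Stage 1 pipeline just described can be sketched as follows. This is a minimal illustration rather than the authors' code: the directory names and batch size are assumptions, while the 150 x 150 target size and the 1./255 rescaling are the values quoted in the text:

```python
# Minimal sketch of the Stage 1 data pipeline. Assumptions: the
# data/train and data/validation directory layout and batch_size=32;
# the 150x150 size and the 1/255 rescaling are quoted in the text.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

IMG_SIZE = (150, 150)  # all images are shrunk to one uniform size

# Rescale integer pixel values in [0, 255] to floats in [0, 1].
train_gen = ImageDataGenerator(rescale=1./255)
val_gen = ImageDataGenerator(rescale=1./255)

# Two classes ("known" and "unknown" faces), one subfolder per class.
train_data = train_gen.flow_from_directory(
    "data/train", target_size=IMG_SIZE,
    batch_size=32, class_mode="binary")
val_data = val_gen.flow_from_directory(
    "data/validation", target_size=IMG_SIZE,
    batch_size=32, class_mode="binary")
```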
Dropout layers are created to prevent overfitting: to do this, we eliminate some of the connections between layers. We use Dropout(0.5), so 50% of the existing connections are dropped. After that we apply batch normalization before the input heads into the next layer, ensuring that the network keeps producing activations with a similar distribution. Then, so that the network can learn more complex representations, we increase the number of filters in another convolutional layer: adding a convolutional layer with more filters means more complex images can be learnt. We also use a pooling layer; as discussed in the previous section, the pooling layer makes the image classifier more robust so it learns relevant patterns. However, pooling layers discard some data, so we do not use many of them: because our data are already small, we pool only twice. We then repeat these layers to give our network a richer representation, and finally we use fully connected layers and a sigmoid activation (a consolidated sketch of Stages 2 and 3 is given below).

C. Stage 3: Training of the Model

After defining the model architecture, we train the model. At this stage we compile it, specifying the number of epochs we prefer to train for and the optimizer we need to use. To reach the lowest point of the loss we need the right optimizer to tune the network's weights, which is why we chose the Adam optimizer, as it provides good output in most problem settings. At this stage we combine our chosen model parameters and also determine the metric to be monitored: we use the Adam optimizer as the optimizing function and binary_crossentropy as the loss function while training the data. The model summary can then be printed out to analyze the structure; the summary includes information such as layer types, output shapes, and parameter counts. To train the model we call the fit() function on the model and pass in the chosen parameters. We keep a validation set that is distinct from the testing set: at this stage we simply make sure the test data are set aside and never trained on. Training requires two important data sets: the training images with their true labels, and the validation images with their true labels. The true labels of the validation images are needed not for the training phase but to validate the model.

D. Stage 4: Estimating the Model's Performance

The last stage is estimating the model's performance. At this stage we can see the accuracy, the loss, and the plotted loss and accuracy for each validation and training epoch. Finally, we can test the model on random train images from both sets; the flow of a train image in our work is shown in Fig. 11. To further support the proposed approach, we compared our results with some other face recognition methods in the literature, including ANN, support vector machine (SVM), and Principal Component Analysis (PCA).

V. RESULT AND DISCUSSION

A few parameters were adjusted during the validation and training process. Training was repeated over eighty epochs, and Table III shows the experimental output. We also show our training and validation results in graph form; the graphs are shown in Fig. 12 and 13.
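Before turning to those graphs, here is the consolidated sketch of Stages 2 and 3 promised above, reusing the data generators from the Stage 1 sketch. It is a hedged illustration, not the authors' code: the layer ordering, the 128-filter second block, the Dense(256) width, and padding="same" are assumptions, while the 64 filters of size 3 x 3, ReLU, Dropout(0.5), batch normalization, the two pooling operations, the sigmoid output, the Adam optimizer, the binary_crossentropy loss, and the 80 epochs follow the text:

```python
# Minimal sketch of Stages 2-3. Assumptions: layer ordering, the
# 128-filter second block, Dense(256), and padding="same"; the other
# parameters follow the values quoted in the text.
from tensorflow.keras import Sequential
from tensorflow.keras.layers import (Conv2D, MaxPooling2D, Dropout,
                                     BatchNormalization, Flatten, Dense)

model = Sequential([
    # First convolutional block: 64 filters of 3x3 with ReLU, as in the text.
    Conv2D(64, (3, 3), activation="relu", padding="same",
           input_shape=(150, 150, 3)),
    MaxPooling2D((2, 2)),    # first of the two pooling layers
    Dropout(0.5),            # drop 50% of the existing connections
    BatchNormalization(),    # keep activation distributions similar

    # Second block with more filters so more complex features are learnt.
    Conv2D(128, (3, 3), activation="relu", padding="same"),
    MaxPooling2D((2, 2)),    # second (and last) pooling layer
    Dropout(0.5),
    BatchNormalization(),

    Flatten(),
    Dense(256, activation="relu"),    # fully connected layer
    Dense(1, activation="sigmoid"),   # binary known/unknown output
])

# Stage 3: compile with Adam and binary cross-entropy, then train.
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()  # layer types, output shapes, parameter counts
history = model.fit(train_data, validation_data=val_data, epochs=80)
```

For Stage 4, predictions on held-out images would then be obtained with model.predict(), as mentioned in the methodology.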
Fig. 12 shows the training and validation accuracy graph for our research. Accuracy here means the proportion of correct predictions: the training accuracy is the accuracy obtained when the model is applied to the training data, while the validation accuracy is the accuracy on the validation data. As the graph shows, the training accuracy in our research reaches 100 percent before epoch 30. Fig. 13 shows the training and validation loss graph: the blue line is the training loss and the orange line is the validation loss. Training loss is the error on the training data set, while validation loss is the error of the trained model when run on the validation set; as the epochs increase, both the training and validation errors drop. Our graph shows that the training error keeps dropping and bottoms out before epoch 30, which means the network learns the data better and better. Normally the loss value is reduced after each training iteration; a model's prediction is considered perfect if the loss is zero, and vice versa. As Table IV below shows, our loss value keeps decreasing with every epoch, meaning our model's predictions are close to perfect. The next parameter is accuracy, defined as the percentage of correct predictions on the test data: in our research, the model's accuracy increases with every epoch and reaches 100 percent before epoch 30. Next is the validation loss: validation loss is the error of the trained network after the data have been run through the validation set, and as Table IV shows, the validation loss in our research decreases with every epoch, meaning the error shrinks at each epoch. The last parameter is validation accuracy, the accuracy on the validation data: as the table shows, the validation accuracy in our research also reaches 100 percent before epoch 30. With training complete, we fed the test images to the model trained with the known- and unknown-face labels. The predicted images in Fig. 16 and 17 are well identified by the model: Fig. 16 is predicted as known faces and Fig. 17 as unknown faces. Based on the comparison of face recognition approaches in Table V, it is clear that all the mentioned methods are capable of face recognition, but their accuracies differ. The authors of [68] report that the highest accuracy using an ANN approach is 80%; a study by [69] reports that experiments with a multi-class SVM reach 96%; and a study by [70] reports 77% similarity using PCA integrated with the Eigenface approach. From these comparisons, our proposed approach shows that the CNN can achieve higher face recognition accuracy than the others. As such, it can be concluded that the CNN can promote the performance of face recognition thanks to the many features available to it.

VI. CONCLUSIONS

Convolutional neural networks have become the main technique in the field of face recognition. This research paper implements a CNN that is automatically trained on the given dataset to predict the classification of images.
These models predict an accurate output by using every face angle, from the front and from the right and left (30-45 degrees), and give significant performance; this should lead to further development of face recognition using deep learning. From the model-training experiments we can conclude that our data set produces good results: through it, the model can differentiate between the two classes with high-accuracy predictions. Hence the CNN is a good technique for face recognition technology. However, we could obtain better results by dividing our data set in a 70:20:10 ratio: 70% for the training folder, 20% for the validation folder, and 10% for the test folder. We used 255 images for this experiment; a bigger data set would give better predictions.

VII. FUTURE WORK

This research should be further developed on a large dataset to ensure it can be implemented in real vehicles for safety purposes. Moreover, this paper aims to bring face recognition technology into scientific and daily-life applications for locking and unlocking autonomous vehicles; in the near future, face recognition will become a common approach in many applications. With the Covid-19 pandemic, during which people wear masks whenever they go out to public areas, it is hard to identify a person with a mask covering half of their face, so researchers should consider users in pandemic situations. Furthermore, additional algorithms should be explored to improve the user experience, especially for disabled users.

ACKNOWLEDGMENT

This work was sponsored by the Government of Malaysia, which provides the MyBrain15 program, under a self-funded research grant and grant L00022 from the Ministry of Science, Technology and Innovation (MOSTI).
12,572
2021-01-01T00:00:00.000
[ "Computer Science", "Engineering" ]
Bimodal action of the flavonoid quercetin on basophil function: an investigation of the putative biochemical targets

Abstract

Background
Flavonoids, a large group of polyphenolic metabolites derived from plants, have received a great deal of attention over the last several decades for their properties in inflammation and allergy. Quercetin, the most abundant of the plant flavonoids, exerts a modulatory action on human basophils at nanomolar concentrations. As this mechanism needs to be elucidated, in this study we focused on the possible signal-transduction pathways that may be affected by this compound.

Methods
K2-EDTA-derived leukocyte buffy coats enriched in basophil granulocytes were treated with different concentrations of quercetin and triggered with anti-IgE, fMLP, the calcium ionophore A23187, or the phorbol ester PMA under different experimental conditions. Basophils were captured in a flow cytometry analysis as CD123bright/HLA-DR-non-expressing cells, and fluorescence values of the activation markers CD63-FITC or CD203c-PE were used to produce dose-response curves. The same population was assayed for histamine release.

Results
Quercetin inhibited the expression of CD63 and CD203c and the histamine release in basophils activated with anti-IgE or with the ionophore: the IC50 in the anti-IgE model was higher than in the ionophore model, and the effects were more pronounced for CD63 than for CD203c. Nanomolar concentrations of quercetin were able to prime both markers' expression and histamine release in the fMLP activation model, while no effect of quercetin was observed when basophils were activated with PMA. The specific phosphoinositide-3 kinase (PI3K) inhibitor wortmannin exhibited the same behavior as quercetin in anti-IgE and fMLP activation, suggesting a role for PI3K in the priming mechanism.

Conclusions
These results rule out a possible role of protein kinase C in the complex response of basophils to quercetin, while indirectly suggesting PI3K as the major intracellular target of this compound in human basophils as well.

Background

Flavonoids comprise a large group of low-molecular-weight polyphenolic secondary plant metabolites that can be found in fruits and vegetables, and in plant-derived beverages such as tea, wine and coffee [1][2][3]. More recently, these natural compounds have been recognized to exert antioxidant [4], anti-bacterial and anti-viral activity, in addition to anti-allergic effects [5][6][7], and to exert anti-inflammatory [8], anti-angiogenic, analgesic, cardiovascular-protective [9], anti-hypertensive [10], hepatoprotective [11], cytostatic, cancer-preventive [12], apoptotic [13], estrogenic and even anti-estrogenic properties [14]. Quercetin (2-(3,4-dihydroxyphenyl)-3,5,7-trihydroxy-4H-chromen-4-one) is the most abundant of the flavonoids and is commonly used as a food supplement [15], but evidence-based data regarding its clinical efficacy are quite scanty [16]. Like many other flavonols, quercetin exerts many effects on inflammation and allergic responses.
In this context quercetin is known mainly as a strong inhibitor of many effector functions of leukocytes and mast cells in the micromolar concentration range: the flavonoid is able to inhibit histamine release from human basophils activated with different agonists [17][18][19], to decrease the expression of the basophil activation markers tetraspan CD63 and ectoenzyme CD203c [20], to block mast cell degranulation in the rat cell line RBL-2H3 model [21], and to inhibit the production of pro-inflammatory cytokines in the HMC-1 mast cell line [22]. This evidence has led to the suggestion that quercetin might be a good candidate for immuno-modulation and anti-allergic therapy [23]. Moreover, recent evidence from our laboratory has shown that sub-micromolar concentrations of quercetin, while inhibiting basophil activation-marker expression in cells stimulated through an IgE-dependent pathway, are able to prime those markers in a classical non-IgE-dependent activation pattern, such as with a formylated peptide (fMLP) as soluble agonist [20]. The bimodal pattern shown by quercetin in basophils activated with fMLP, which has the typical features of a classical hormetic mechanism [24,25], prompted further investigation. Aiming at understanding the bimodal mechanism by which quercetin acts on human basophils, in this study we investigated some basic signaling events in basophil activation, namely calcium, protein kinase C (PKC) and PI3K. From a molecular point of view, quercetin has a substantial array of intracellular targets, mainly serine/threonine and tyrosine kinases, which is very difficult to disentangle [26]. In basophil biology, a first step forward could be to investigate the differential pattern between anaphylactic degranulation and piecemeal degranulation, known to be related to IgE-mediated and non-IgE-mediated activation pathways, respectively, and to the differential expression of surface molecules associated with cell activation [27]. This issue can be addressed by using inhibitors and regulatory molecules able to dissect these mechanisms. We used a polychromatic flow cytometry approach [28] to investigate the effect of the flavonoid quercetin on the expression of membrane markers triggered by several different agonists in normal subjects (healthy screened blood donors). In parallel, we also evaluated whether the effects of quercetin on basophil membrane markers were reproduced in a classical assay of histamine release. The huge collection of quercetin effects on countless cellular kinases, transcription factors and regulatory proteins calls for further investigation into the molecular nature of its pharmacological action. This study, in addition to contributing to the understanding of basophil biology, gives new clues about the modulatory role of this natural compound in cells of inflammation and allergy.

Subjects and sampling

A total of 70 blood-donor volunteers (47% male, 53% female) were enrolled in this study. Recruitment was randomized and encompassed an age range from 24 to 65 yrs (mean 44.61 ± 4.57 SD) in order to have a wide experimental population and to prevent age influencing cell releasability [29]. All the subjects recruited in the study were non-allergic and non-atopic, did not suffer from any immunological disorder, and had never reported any previous history or genetic diathesis of chronic allergy; moreover, none had undergone drug therapy or anti-histamine therapy during the 48 hrs before the peripheral venous blood withdrawal.
All participants completed and signed a specific consent form for the taking of samples and for data processing.

Cell recovery and preparation

Basophils were collected as leukocyte-enriched buffy coats from venous K2-EDTA-anticoagulated peripheral blood from four screened healthy donors in each experiment performed, according to previously described methods [28]. Buffy coats were pooled and suspended in HBE buffer. To count basophils and evaluate yields, an aliquot of about 1 ml of the cell suspension was transferred to a Bayer ADVIA 2120 automated hematocytometer [30]. The volume of the working cell suspensions was adjusted with HBE buffer in order to get a basophil count of 90-150 basophils/μl. Compared with hemocytometer counts of the starting whole blood (30-50 basophils/μl), an average enrichment of about 1.5-3.0 times (mean = 2.4) was routinely obtained. The trypan blue exclusion test revealed that 98.7% ± 7.4 SD of leukocytes were viable. Aliquots (100 μl) of cell samples were incubated at 37°C for 10 minutes with an equal volume of HBE in the absence or presence of quercetin or wortmannin at the indicated final doses. Activation was performed by adding 50 μl of treated cells to 50 μl of HBC buffer containing 200 nM fMLP, 8 μg/ml of goat anti-human IgE, 1.0 μM A23187, or 100 nM PMA, according to the different protocols. Resting assays were performed by incubating cells in HBC buffer without agonists. Incubation was carried out at 37°C for 30 minutes and blocked by adding 100 μl of ice-cold HBE supplemented with 2.8 mM sodium-EDTA (Na3-EDTA). The samples were then put on ice and stained with monoclonal antibodies (20 minutes at +4°C), according to previously published methods [28]. Afterwards, red blood cells underwent lysis with an ammonium-buffered solution (155 mM NH4Cl, 10 mM Na2HCO3, 0.10 mM Na3EDTA, pH = 7.2) for 4 minutes at +4°C; the samples were then centrifuged at 700 g and the pellets recovered and re-suspended in a PBS-buffered saline solution (pH 7.4) for flow cytometry reading.

Histamine release

Cells treated with different concentrations of quercetin and activated with the indicated agonists were pelleted at 6000 rpm for 5 minutes, and supernatants were collected for a competitive histamine ELISA test. 25 μl of each sample was treated with buffers and an acylation reagent, incubated for 1 hour at r.t., and diluted with 200 μl of distilled water. Aliquots of 20 μl of these acylated samples were incubated overnight with 100 μl of antiserum, washed 3 times, incubated for 1 hr at r.t. with a horseradish peroxidase conjugate, washed 3 times with washing buffer, incubated for 30 min at r.t. with the colorimetric substrate tetramethylbenzidine, and the reaction stopped. The absorbance of the solution in each well was read within 10 minutes at 450 nm with a reference wavelength of 620 nm. Histamine was calculated as ng/ml of the released amine against the corresponding standard concentrations in the calibration curve.

Flow cytometry and data processing

Basophil membrane markers were evaluated by flow cytometry using a five-color fluorochrome panel including CD45-APCCy7, CD123-PECy5 and HLA-DR-PECy7 as phenotyping markers and CD63-FITC and CD203c-PE as activation markers [28]. Flow analysis was performed using a 488 nm-633 nm two-laser BD FACSCanto flow cytometer: the instrument had a 10,000 events/sec capability, six-color detection and 0.1% sample carryover.
Analyses were performed at a mean flow rate of 300-500 events/sec, setting an excess limit of 50,000 events to record in the basophil gate, in order to analyze the whole buffered suspension volume and obtain a proper estimation of cell recovery and reproducibility. Compensation followed the cytometer manufacturer's instructions according to an off-line procedure, applying automated electronic algorithms and preset templates, using biparametric logarithmic dot plots, gate-specific tubes and single-tube data analysis, and optimizing the FSC threshold and fluorochrome voltages as set-up parameters. Mean fluorescence intensity (MFI) was calculated automatically by the cytometer software. The percentage of activated cells was calculated by the software from the CD63-expressing cells (CD63-FITC-positive cells) counted to the right of a threshold established to include the main fluorescence peak of a sample of resting cells. In order to reduce the standard deviation due to positive fluorescent cells with respect to negative or dimly fluorescent ones, a logarithmic scale and a coefficient of variation were used to measure dispersion.

Statistics

Data were analyzed using the software SPSS, version 11 for Windows, Chicago, IL. Dose-response curves were obtained by plotting the triplicate data and their mean values and S.E.M. for each experiment using the SigmaPlot 10 software. Kolmogorov-Smirnov and Shapiro-Wilk goodness-of-fit tests were performed to determine whether the sample population followed a Gaussian distribution. Differences between quercetin-treated and non-treated cells were analyzed using a one-way analysis of variance (ANOVA) followed by Fisher's LSD test. A value of p < 0.05 was considered statistically significant. The IC50 was calculated for each percentage-of-effect/control curve by regression according to the four-parameter logistic model (4PL), also called the Hill-slope model (an illustrative sketch of such a fit is given at the end of this section).

Results

Basophils stimulated with anti-IgE or fMLP

Basophils stimulated with the calcium ionophore A23187 or with PMA

Quercetin also showed a marked ability to decrease, in a dose-dependent fashion, the expression of CD63 in basophils activated with the calcium ionophore A23187 (Figure 2, panels A,B). The IC50 values for CD63-MFI and for CD63 expr % were 0.573 μM and 0.824 μM, respectively. The expression of CD203c was much more resistant to inhibition by quercetin (Figure 2, panel C): this evidence suggests some dissociation in the action of quercetin on the calcium signaling behind the two activation markers. When basophils, following pre-incubation with different concentrations of quercetin, were stimulated with the PKC activator PMA (Figure 2, panels D,E,F), the flavonoid did not show any significant inhibitory effect, except at the highest concentration used in the experiments (33 μM). The effect of quercetin was specific to the activation markers: other molecules used to phenotype basophils, such as CD123, which recognizes the alpha subunit of the constitutive basophil IL-3 receptor, were not affected by any of the quercetin concentrations used in any of the activation models considered in the study (Figure 3).

Basophil releasability by the histamine ELISA test

Basophils treated with increasing doses of pure aglycone quercetin were triggered with the different agonists used in the study, and the histamine released after 30 minutes of incubation at 37°C was assayed with a competitive ELISA kit.
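Before describing those release results, the four-parameter logistic (4PL) fit referenced in the Statistics subsection can be sketched as follows. This is a generic illustration with invented data points, not the authors' actual pipeline, which used SPSS and SigmaPlot:

```python
# Illustrative 4PL (Hill-slope) fit for IC50 estimation. Not the
# authors' pipeline (they used SPSS/SigmaPlot); the data are invented.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, bottom, top, ic50, hill):
    """4PL model: response as a function of concentration x."""
    return bottom + (top - bottom) / (1.0 + (x / ic50) ** hill)

# Invented dose-response data: concentration (uM) vs % of control.
conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])
resp = np.array([98.0, 95.0, 85.0, 60.0, 35.0, 15.0, 5.0])

params, _ = curve_fit(four_pl, conc, resp,
                      p0=[0.0, 100.0, 0.5, 1.0])  # initial guesses
bottom, top, ic50, hill = params
print(f"IC50 ~ {ic50:.3f} uM, Hill slope ~ {hill:.2f}")
```

The fitted ic50 parameter is the concentration giving a response halfway between the top and bottom plateaus of the curve.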
The results are described in Figure 4: basophil releasability (degranulation) exhibited the same dose-response behavior as the activation markers, particularly CD63, namely a strong dose-response inhibition following anti-IgE activation (Figure 4, panel A), a bimodal pattern in the bacterial peptide activation protocol (Figure 4, panel B), a dose-response inhibition in the calcium ionophore stimulatory assay (Figure 4, panel C), and no inhibitory effect in the PMA activation pattern (Figure 4, panel D). The same cell population was investigated in the same experimental setting for CD63, CD203c and CD123 membrane expression by flow cytometry: the behavior of these markers under the effect of quercetin was similar, in the various experimental conditions, to the behavior of histamine (data not shown).

Wortmannin dose-response

Considering the two main activation protocols, namely IgE-mediated and fMLP-mediated stimulation, basophils were treated with the specific PI3K inhibitor wortmannin in order to probe a possible pathway involved in the bimodal behavior observed with the different agonists: the overall impression is that wortmannin behaved similarly to quercetin in our tested models. Figure 5 shows the dose response of wortmannin on basophils triggered with 4 μg/ml anti-IgE: it showed a pronounced inhibitory activity, with an IC50 of 2.17 × 10⁻⁹ M and 1.99 × 10⁻⁹ M for CD63 (Figure 5, panels A,B respectively) and 2.63 × 10⁻⁹ M for CD203c (Figure 5, panel C). When basophils were activated with a formylated peptide, wortmannin showed a strong inhibitory action in the micromolar range and an increased expression of CD63-MFI and of CD203c in the nanomolar range (Figure 6, panels A,B,C), surprisingly displaying a biphasic, or hormetic, behavior just like quercetin. Wortmannin, too, did not significantly affect the expression of a non-activatable marker such as CD123 (Figures 5 and 6, panel D).

Discussion

The results presented here confirm the inhibitory action of relatively high concentrations of quercetin on human basophil function previously reported by others [19,31,32] and by us [20], and assess putative mechanisms of the effects observed in the nanomolar dose range. Quercetin has many targets among intracellular kinases involved in many steps of receptor downstream signaling leading to various effector functions, such as the degranulatory event [26], but its strong inhibitory action has usually been shown in the high micromolar concentration range, where biphasic effects were not reported. At the highest micromolar doses, quercetin actually inhibits a variety of intracellular kinases, but in a concentration range from 10⁻⁷ M to 10⁻⁸ M the action of quercetin might depend on more specific and sensitive steps of the activation pathway used by the cell, probably at the receptor signaling complex. The importance of distinguishing the effects on the basis of the in vitro acting dose range is also related to the evidence, reported elsewhere in in vivo studies, that the plasma concentration of quercetin in healthy volunteers following food supplementation ranged from 0.43 μM to 1.5 μM [33][34][35]. Below, we discuss the ability of quercetin to act as a modulatory compound in the sub-micromolar/nanomolar concentration range, taking Figure 7 as the summarizing picture of our hypotheses. Q1) Effects of quercetin in the FcεRI-anti-IgE activation model.
Previously published reports have shown that quercetin is able to inhibit PI3K by binding to the catalytic pocket of the enzyme: for instance, LY294002, a synthetic inhibitor of PI3K, actually has a chemical kinship with the flavonoid quercetin [36]. The IC50 for quercetin as an inhibitor of PI3K from human blood platelets is around 1.8-20 μM [37], which corresponds to the inhibitory range observed in the results reported here for basophils. Considering the downstream signaling pathway of the FcεRI-anti-IgE complex, it may be suggested that the inhibition of PI3K leads to the loss of phosphorylation of downstream kinases such as Bruton's tyrosine kinase (BTK) [38], which in turn phosphorylates PLCγ, thereby leading to the production of inositol-1,4,5-trisphosphate (IP3) and diacylglycerol (DAG) from the precursor phosphatidylinositol-4,5-bisphosphate (PtdIns 4,5-P2 or PIP2) (Figure 7). While DAG remains at the membrane, IP3 diffuses to the cytosol, where it binds to and activates the InsP3 receptor on the membrane of the endoplasmic reticulum, opening a calcium channel and resulting in the release of Ca2+ into the cytoplasm. DAG is able to activate PKC, which in turn drives membrane-marker up-regulation and histamine release [39]. Warner et al. observed that the amount of histamine release associated with activation of basophils through IgE receptor aggregation, among different preparations of basophils, was correlated with an increase in membrane-bound PKC-like activity [40]. These results also suggested that PKC activation may have a role in IgE-mediated histamine release in human basophils, and that quercetin might inhibit basophil function by blocking the production of DAG, the precursor for PKC activation, upstream in the signaling pathway. The inhibition of PI3K by quercetin would also prevent the formation of phosphatidylinositol 3,4,5-trisphosphate (PtdIns 3,4,5-P3), which activates extracellular calcium influx through membrane Ca++ channels [41]. Q2) Effects of quercetin in the FPR-fMLP activation model. Figure 7 depicts a speculative suggestion concerning the priming phenomenon observed in fMLP-triggered basophil function. Since in our assay system the effects of quercetin were superimposable on those of wortmannin, a potent PI3K inhibitor [42,36], our results indirectly suggest a role for PI3K in the dual effects performed by the flavonoid. G-protein-coupled receptors, such as the fMLP receptor, activate the PI3Kγ isoform through interactions of Gβγ with the PI3K p101 and p110γ subunits [43]. Increasing evidence suggests that monomeric p110γ may function as a downstream regulator of G-protein-coupled receptor-dependent signal transduction [43]: Gβγ is able to activate a G-coupled receptor kinase (GRK) which desensitizes the receptor. Interactions of quercetin with this Gβγ-p101/p110γ complex might lead to the following possible results: a) the inability of Gβγ, sequestered by the p101/p110γ complex, to activate G-coupled receptor kinases (GRKs) and desensitize the receptor, leading to a priming mechanism, for example by inducing a sustained activation of downstream protein kinases involved in the degranulatory event, such as p38-MAPK [44]; b) the long-lasting activation of Gβγ-associated PLCβ due to a defect in Gβγ/PI3K dissociation, leading to an increase in signaling mediators able to trigger the degranulation event (by IP3-calcium signaling or by activation of the DAG-PKC pathway), thus resulting in a priming effect. Q3) Effect of quercetin on the protein kinase C (PKC) activation pathway.
In our assay system the flavonol proved unable to target protein kinase C (PKC), as shown by the use of PMA as the basophil stimulant, thus confirming previous reports [19]: quercetin was unable to inhibit CD63 and CD203c membrane up-regulation in basophils stimulated with phorbol esters, and no dissociation between the two investigated markers was observed with PMA [45]. So the PKC pathway triggered by PMA, and presumably by other physiologic stimulants, is a quercetin-insensitive route to basophil activation. Q4) Effect of quercetin on basophils triggered with the calcium ionophore A23187. In response to the calcium ionophore A23187, the expression of both activation markers CD63 and CD203c was markedly up-regulated, but quercetin exerted a significant inhibitory action, even at nanomolar doses, only on CD63, thus dissociating the responses of the two activation markers to the ionophore. Previous work has reported that CD203c and CD63 up-regulation in response to the calcium signal from A23187 showed different kinetics [28], evidence that probably suggests different pathways of calcium involvement in the expression of the two markers [46]. Our results indicate that calcium-mediated signaling is essential both for LAMP-1 CD63 and for ENPP-3 CD203c up-regulation, as A23187-mediated calcium influx stimulates both the expression of basophil activation markers and histamine release (see Figure 7); at the same time, however, they suggest that the transduction pathway diverges into two distal branches, one of which (LAMP-1) is sensitive to quercetin and related to the degranulatory event [47], while the other is much more resistant to this inhibition. It is well known that A23187 promotes the activation of the Ca++/calmodulin pathway [48], which is inhibited by quercetin [49]. Calmodulin constitutes an obligate link in the signal transduction pathways leading to human leukocyte histamine release if the trigger is a calcium ionophore, but not when responses are induced by anti-IgE, fMLP or PMA [48]. (Figure 7 legend: Putative sites sensitive to quercetin are indicated close to the target proteins PI3Ks or calmodulin; for PKC, Q is enclosed in a dashed area, indicating no effect of the flavonoid on that kinase. Membrane markers are indicated by squares (CD63) or triangles (CD203c). Arrows indicate links between different activation pathways, while dashed arrows indicate precursors or metabolites entering the interior of the cell from the membrane and/or from outside. For further explanation and comments see the text. BTK: Bruton's tyrosine kinase; DAG: diacylglycerol; GAB-2: Grb-associated binding protein-2, an adaptor protein serving as principal activator of PI3K; GRK: G-coupled receptor kinase; IP3: inositol-1,4,5-trisphosphate; PLC: phospholipase C; PtdIns 3,4-P2: phosphatidylinositol 3,4-bisphosphate; PtdIns 4,5-P2: phosphatidylinositol 4,5-bisphosphate; PtdIns 3,4,5-P3: phosphatidylinositol 3,4,5-trisphosphate; Syk, Lyn: signaling tyrosine kinases linked to the high-affinity IgE receptor; other abbreviations as in the text.) Quercetin's ability to target calmodulin suggests that the events inhibited by the flavonoid, i.e. histamine release and CD63 membrane up-regulation, were presumably related to a Ca++/calmodulin-dependent pathway in basophils activated with A23187, while the expression of CD203c, which was not significantly affected by the flavonoid even at its highest dose, might be a calmodulin-independent event.
This marker is probably translocated to the membrane by other calcium-dependent vesicular-transport mechanisms [50]. These hypotheses and models need further investigation at the molecular level, such as direct demonstration on kinases isolated from or detected in purified basophils, the use of isoform-selective inhibitors of PI3K, and assays of calmodulin involvement in the A23187 activation pathway inhibited by quercetin. What is really interesting is that the observed biphasic (hormetic) modulatory mechanism can be related to the inhibition of PI3K by quercetin, and that the effective doses lie within the nanomolar plasma concentrations reported in several pharmacokinetic and bioavailability studies of this flavonoid [51][52][53]. At these concentrations it is conceivable that quercetin exerts a fine regulatory action depending on the fine balance of signaling proteins governed by PI3Ks. The PI3Ks appear strategic both for the activation of downstream protein kinases and for the activation of receptor-associated phospholipases C, leading to calcium elevation in the cytoplasm and to PKC-mediated degranulation, the two conditions the basophil needs for loading its activation markers onto the membrane and for histamine release. This might be the first step by which quercetin exerts its action in the sub-micromolar to nanomolar concentration range, while at the highest doses its action might also involve other receptor- and PI3K-downstream kinases such as Akt/PKB, MEK, p38-MAPK, etc. [26]. Allergy is a cause for concern, mainly owing to its rising prevalence in the population and the increasing difficulty of treating chronic allergy. Quercetin might be a good candidate to counter this trend: an appropriate intake of this flavonol from food and beverages, or from supplemental administration, could be expected to improve allergy, to support the organism's anti-inflammatory and anti-oxidative responses, and to prevent the onset of chronic allergic diseases. However, our results introduce a caveat: although basophils play an important role in mediating the allergic response, and quercetin has proved to have an inhibitory action on basophils stimulated with anti-IgE and the calcium ionophore A23187, the bimodal effects of the flavonol and the complex nature of hypersensitivity reactions require researchers to be cautious before considering quercetin for practical use in the therapy and prevention of allergy. To achieve this goal, further insights into cell signaling and quercetin's intracellular targets, together with studies in animal models, are required.
5,570.6
2010-09-17T00:00:00.000
[ "Biology" ]
Diamond-like phase formed of carbon C24 clusters

The crystalline structure and some properties of the cubic diamond-like CA6 phase have been calculated by the density-functional theory method in the generalized gradient approximation. The calculations show that the density, cohesive energy, bulk modulus, hardness and band gap of this phase are 2.824 g/cm3, 7.59 eV/atom, 351 GPa, 71 GPa, and 3.96 eV, respectively. The structure of the CA6 phase should be stable at room temperature.

Introduction

Compounds with diamond-like structures are widely used as anticorrosive coatings and as abrasive and structural materials. A number of new diamond-like phases have been theoretically predicted that can be formed on the basis of fullerene-like molecules, carbon nanotubes, graphene layers, and 3D covalently bonded graphite-like compounds [1][2][3][4]. These phases can be close to cubic diamond in their mechanical characteristics [1][2][3][4][5][6][7][8][9][10]. The diamond-like CA6 phase formed on the basis of fullerene-like C24 molecules may be the most stable among the diamond-like carbon C-phases [4]. This paper is therefore devoted to theoretical calculations of the structure, properties, and stability of the cubic diamond-like CA6 phase.

Methods

Calculations of the structural and energy characteristics of the CA6 phase were performed in the Quantum ESPRESSO software package [11] using the density functional theory method in the generalized gradient approximation. The Perdew-Burke-Ernzerhof exchange-correlation energy functional was used [12]. The effect of the ion cores was taken into account using norm-conserving pseudopotentials. A 10 × 10 × 10 grid of k-points was used in the calculations. The wave functions were expanded in a truncated basis set of plane waves with a cut-off energy of 60 Ry. The unit cell of the CA6 phase was optimized until the forces acting on the atoms and the stresses became less than 1 mRy/Å and 0.50 GPa, respectively. The hardness and the bulk modulus were calculated using the methods described in [13,14].

Results

The crystalline structure of the diamond-like CA6 phase was modelled by linking C24 carbon clusters according to the procedure described in [1,4]. The geometrically optimized structure of the new phase is shown in figure 1a. The CA6 unit cell is body-centred cubic (space group Im-3m, No. 229) with the parameter a = 4.3927 Å (figure 1b). The unit cell contains 16 atoms, whose coordinates are given in table 1. All atoms in the CA6 phase occupy symmetrically equivalent positions. As in cubic diamond, all covalent bond lengths in the CA6 structure are the same, equal to 1.5531 Å; however, there are two non-equivalent bond angles, of 90 and 120°. The following properties of the cubic diamond-like CA6 phase were then determined: density, cohesive energy, bulk modulus and Vickers hardness. The cohesive energy, density, hardness and bulk modulus of this new structural modification of diamond are lower than those of the 3C diamond polytype by 5.3, 17.5, 18.1 and 21.1%, respectively. The electronic properties of the cubic diamond-like CA6 phase were studied through band-structure and density-of-electronic-states (DOS) calculations. Figure 2a shows the band structure of this phase. The electron energies were calculated along six segments between four high-symmetry points of the Brillouin zone: ΓN, NH, HΓ, ΓP, PN, and PH.
The minimum difference in electron energy between the bottom of the conduction band and the top of the valence band is 4.72 eV and is observed at one third of the length of the HΓ vector. From the DOS calculation (figure 2b), the magnitude of the indirect band gap of the diamond-like CA6 phase was determined to be 3.96 eV. Therefore, the studied structural variety of diamond is a wide-gap semiconductor with an indirect band gap, which is 27% smaller than the corresponding value for the 3C diamond polytype. The thermal stability of the cubic diamond-like CA6 phase was studied by the molecular dynamics method. The structure was annealed for 8 ps at a constant temperature of 400 K. The change in total energy over the annealing time is shown in figure 3. As a result of the calculation, it was established that the CA6 crystal lattice remains stable at 400 K, a temperature above room temperature. To enable experimental identification of the theoretically predicted phase, its X-ray diffraction pattern was calculated. The calculated powder X-ray diffraction patterns of cubic diamond and the diamond-like CA6 phase are shown in figure 4. The positions of the most intense diffraction lines of the CA6 phase differ significantly from the positions of the diffraction maxima of cubic diamond, which should allow its straightforward identification in synthesized carbon materials.

Conclusions

As a result of the calculations performed by the density functional theory method, a geometrically optimized structure and some properties of the cubic diamond-like CA6 phase have been determined. The structure of this polymorph can be formed by linking carbon fullerene-like C24 clusters in the form of truncated octahedra. The calculated properties of the CA6 phase are close in value to those of the 3C diamond polytype: the density, cohesive energy, Vickers hardness, and bulk modulus of this phase are 5-20% lower than the corresponding properties of cubic diamond. The CA6 phase should be a wide-gap semiconductor with an indirect band gap of ~4.0 eV (minimum direct gap ~4.7 eV). The energy and mechanical characteristics of the cubic CA6 phase, studied in detail in this paper, exceed the corresponding properties of most diamond-like phases with equivalent atomic positions [1][2][3][4][5][6][9]. It was also found that the structure of this phase is stable at a temperature of 400 K, which indicates a high probability of its experimental production.
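For readers who want to reproduce the kind of workflow described in the Methods section, the following is a minimal sketch (not the authors' actual input) using ASE's interface to Quantum ESPRESSO with the reported settings: PBE, norm-conserving pseudopotentials, a 60 Ry plane-wave cutoff, and a 10 × 10 × 10 k-point grid. The structure file 'CA6_table1.cif' and the pseudopotential filename are placeholders, standing in for the table 1 coordinates (not reproduced above) and a real norm-conserving PBE pseudopotential.

    from ase.io import read
    from ase.calculators.espresso import Espresso
    from ase.optimize import BFGS

    # Load the 16-atom bcc cell (a = 4.3927 A). 'CA6_table1.cif' is an
    # assumed file holding the fractional coordinates of table 1.
    atoms = read('CA6_table1.cif')

    calc = Espresso(
        pseudopotentials={'C': 'C.pbe-nc.UPF'},  # placeholder pseudopotential file
        input_data={
            'control': {'calculation': 'scf'},
            'system': {'ecutwfc': 60},           # plane-wave cutoff, 60 Ry
        },
        kpts=(10, 10, 10),                       # Monkhorst-Pack k-point grid
    )
    atoms.calc = calc

    # Relax the atomic positions until the maximum force drops below the
    # reported threshold of 1 mRy/A (about 0.014 eV/A).
    opt = BFGS(atoms)
    opt.run(fmax=0.014)

    print('Total energy (eV):', atoms.get_potential_energy())

A full variable-cell relaxation against the 0.50 GPa stress criterion would be handled by Quantum ESPRESSO itself, e.g. with calculation='vc-relax' in the control block.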
J. Hillis Miller's "Humanistic Discourse and the Others": Roundtable Discussion

This roundtable discussion of "Humanistic Discourse and the Others", J. Hillis Miller's contribution to the first International Conference for Humanistic Discourses, was held in April 1994. The papers of this first meeting of the ICHD have been published in volume 4 of Surfaces (1994). The three additional points I would like to make now are: First, just to say a word more about the notion of the other and my interest in it - it's my current research project. I have a little statement here that's probably more elegant than anything I could make up. I certainly wouldn't deny in any way that any act of reading or writing, like any ethical act of commitment (like a decision, or choice, or witnessing, or refusal, or saying "no") is constrained by enormously complex, overdetermined social and historical contexts. It's not a matter of denying that there is such a context. That context, I think one would have to remember, is unique to each given act. That is to say, you can't establish it for such and such an epoch, postmodernism, and have done with it, say that's postmodernism; everybody in that period is going to be subject to that. You have to do this contextual work, in a way, over again each time. And certainly, that context includes national, gender, racial, and class, as well as linguistic, specificities. I must say that I've been a little uncomfortable with this group having only one woman. That defines our work here; it's all men with the exception of Pauline. And it's worth taking note of that. The group that we've chosen doesn't really correspond, for reasons that are perfectly explicable and not at all sinister, to, let's say, the makeup of my Department of English and Comparative Literature, or there would be a lot more women. And a few more, one would hope, a few more so-called minorities. Nevertheless, in spite of all that I've said, I hold (and I'm trying to show) that each act of reading or writing, like ethical acts in general (I would include reading and writing under that category), is a performative new start. This new start, in however minute, tiny a way, makes history - that is to say, history is changed by that act. That means that the notion of performative language that I'm using here (I called it a performative) must be understood outside the classical conception of speech acts as determined in their efficacy by institutional codes, rules, and expectations that still remain firmly in place after the speech act has done its work. You have the marriage ceremony; it acts in a certain way to marry a couple of people, and the marriage ceremony is still there. Nothing has really happened to change the surrounding context. The new start I'm defining deflects the course history otherwise would have taken. It changes, then, however minutely (and it's often very minute), the context it enters. It changes that context. It does this in response to a demand made on the one who performs the act by what I'm calling the other of language, the other of the other person, and the other of the social institutions within which the act in question is performed. Here's a place where I need what I call the nonconcept of the other. This other is an alterity that cannot be logically understood by being turned into some version of the same. That is to say, it's not a same other. If you say, well, I don't mean this book; I mean the other one.
The word "other" as I am using it doesn't mean that. It's the other as wholly other, really other. This other may not be defined either in terms of transcendence (one must resist the obvious danger in this concept of the other, the almost irresistible temptation to return it to some notion of a Platonic or Christian notion of some kind of transcendent place, some other place) or in terms of imminence (that is to say, of something that's inside in some way). Those are two theological concepts: the theological pair, transcendence and imminence. If I can be autobiographical for a moment... those were the concepts that governed my two books after the Dickens book, those two books on nineteenth and twentieth century literature, The Disappearance of God and Poets of Reality. These were entirely controlled by the notion of either transcendence or imminence. In the Victorian period was The Disappearance of God (God wasn't exactly dis-believed in, it's just that God wasn't here), and in the twentieth century there was a diffuse imminence. I now find those notions very problematic, including the zeitgeist notion that I had then. It's the part of my work that most embarrasses me. But I've long, for a long time been haunted by this idea of an altogether other. It's a sort of glimpse out of the corner of the eye, so to speak. That may be the thing that has most concerned me. That's what I'm really working on now, in my work on Henry James and on Proust. Second point. I think we need to spend a little more time than we have so far in recognizing that our... American society, at any rate, and I think world society, is in the midst of a radical transformation that's moving it further and further away from traditional literary culture. So that we have to think, what's the purpose? That's what I meant by saying, why would one want to do this now? I've just received an application from Sam Weber to teach an NEH summer seminar. He says that these transformations, the shift from verbal to audio-visual, an effect of the rise of the electronic media and in particular television and video, are far-reaching, radical, and widespread. He says, I think correctly, that they're made even more complex by the fact that they're ongoing. We're in the middle of this change, and you can't really see which way it's going. Proust is very good on this. A person who lives through an historical moment never knows it until later on. We're in the middle of something that we have some vague idea of, but only in retrospect would we know. Weber goes on to say something shrewd about this, and that is that it's normally by our colleagues in, let's say, film studies and in other such areas, it's normally thought of as a categorical break and used as justification for no longer being interested in literary theory or linguistic theory at all. You say, because film is so different we don't have to read these people anymore. So the question how such transformations are understood -how they're to be approached, interpreted, analyzed, is, as Weber says, of the utmost importance and urgency. The other alternative would be to recognize that the notion of sign and signifier might be extended to include other forms of inscriptional models. After all, as Weber observed, we call it photo-graph-y and cinemato-graph-y. The notion of some kind of graph, graphing is there. And it's certainly there in genetic theory, and in the notion we've been hearing about the Internet this morning and the digitizing of images, and so on. Final, third point. 
And I'll tell you this very quickly. This is what would really take a long time. One of the things that worries me these days is the question, why study literature in this circumstance? What's the point of it any longer, if we live in an age in which our students don't read anymore, in which their culture is defined by telecommunications of various kinds? I'm not at all condescending to the sophistication and complexity of television and cinema culture, but nonetheless it's different. All the time you spend watching Roseanne on television (which is a very interesting program) you're not reading Shakespeare. You're not even reading Thomas Hardy or Charles Dickens. So I'm worried about that. And I have an answer which I'll propose from Proust. Proust is full of marvelous passages. My reading of Proust is that he's the great fractal author. You all know what fractals are. Here's a picture of one. Fractals have the peculiarity of being self-similar. Whatever level of smallness you take is a repetition of the larger... but a repetition with a difference. So fractal theory is part of chaos theory. My argument would be that the large scale of Proust is repeated in what's called a self-similar way in very small parts - for example, in figures of speech. And here's an example where that is made public, and where my willingness to read Proust and study him is justified. Proust says, "Well, I used to believe, when I heard Germany or Bulgaria or Greece say things, and protest their pacific intentions, I used to give credence to these statements. But since life with Albertine and Françoise, I've become accustomed to suspect in them - that is, in the kings of these countries - thoughts and projects which they did not disclose, that is, kept secret... secrets. I now let no pronouncement, however specious, of William II, or Ferdinand of Bulgaria, or Constantine of Greece deceive my instinct and prevent it from divining what each one of them was plotting." And he goes on to say, "Anybody who is incapable of comprehending the mystery, the reactions, the laws of these smaller lives..." And for me this would be Albertine's story primarily, namely Marcel's comic mistake about Albertine, his assumption that he can know whether or not she's a lesbian. So he's confusing (it would take a long time to explain this) a cognitive situation with a performative, unknowable one. But he goes frantically to get evidence. He gets people to witness Albertine in the baths, and so on, and none of this proves what he wants proved. Whether Marcel Proust knew this, whether Marcel the narrator or Marcel the character (there are three people here) knew this or not is another question. So what he's saying is, anybody who's incapable of understanding, let's say, the story of Marcel and Albertine, "the mysteries, the reactions, the laws of these smaller lives, will make only futile pronouncements when he talks about struggles between nations." So if you want to know about politics and history, read Proust. It makes me feel so much better that I can, with a clear conscience, spend, I think, the rest of my life now trying to understand Proust - it's a big book. I've never found a page, by the way, in spite of the authority of the standard translation, where you don't have to go back to the French and always find differences. And often the kind of shocking ones where the translators came to a line, didn't see the point of it, and just left it out.
And that happens quite frequently, even though this is a very authoritative translation. That's all I have to say. Iser : Just a quick question: to what extent are your readings of otherness, as you have put it in your paper, fractals? Is it that self-sameness would appear to be strangeness, or foreignness, or alterity, if it were not sameness with difference? That would be one consequence of what you have been saying with regard to Proust. Iser : So we would have to read your listing of the various descriptions of otherness in that light? Basically, you do not postulate a stance outside these various types of otherness, which would invalidate what you're doing. Consequently, the fractals might be a way of indicating otherness, which then would pose the question of how to assess that difference if it's self-sameness with difference? There's one possibility: if we take a pairing like 'theme and horizon,' as advanced by Schütz, we have the advantage of being situated inside the very many types of otherness and making them mutually refract one another. The relationship between 'theme and horizon' allows one to see the reverse side of any otherness, because if something is thematic and the other type of otherness forms the horizon for looking at it, it is bound to change the moment you move on to the next one. Such an alternation of 'theme and horizon' allows you to travel inside the various types of otherness, relieves you of establishing a stance outside of them, and prevents you from lapsing into a descriptive taxonomy. Instead, we are able to spot the difference in the sameness, as we are now given to perceive something that was initially not in view. Furthermore, why has otherness become such an important issue? Is it another of those hypostatizations that we have been witnessing since the 'sixties, when society was elevated into an all-encompassing blanket concept, which, when on the wane, was replaced by language? After the essentialization of language, caused by the linguistic turn, we now have 'otherness' as the be-all and end-all that is invoked as guidance for all kinds of intellectual activities. What does the 'new start' mean that you are advocating? Is it an exploration of otherness in terms of total difference, ungraspable alterity, or incommensurability? Coming to grips with such an issue, we would have to avoid the Scylla and Charybdis of drowning otherness in a taxonomy and of predicating what we might consider it to be. Again, the pairing of 'theme and horizon' appears to me a framework for exploring the issue under consideration, as we can stay inside and nevertheless be able to manage what we are confronted with. Miller : Right. That's a very interesting characteristic... I don't know quite how to put it... Iserian theorizing. You're so good at it and it's so compelling. I'd say two things about it only, very briefly. One is that I don't think the encounter with the other is a theme. Therefore it's in principle unthematizable, so that that would be the problem with using your very attractive way of making sense out of this. How does one order it? For me, precisely the danger is to conceptualize it and to make it routine, so you say, here's another example of it. So the only way in which I could defend encountering the other would be to say that in the acts of reading Henry James, let's say, or Proust, or somebody else, it becomes an event... I encounter it in a different way each time.
I'm very anxious about imposing some kind of conceptual scheme on this which I then bring to the reading of Proust. I think that's a general problem in reading, but you do your best with it. So the movement back and forth from theory to reading is tricky. The final thing would be to say, I think that historically it goes back a good bit further. I was teaching a course on this this year. I began with an old essay by Levinas, which goes back I think to 1962, called "La trace de l'autre." Any serious attempt to find a history of this, a modern history, would certainly go way before the 'nineties, back through Levinas to the Greeks and the Bible. It's awkward to speak in Derrida's presence of Derrida, but the term "other" in Derrida's work is not new. It goes way, way back, to something that people have not traced very much. The citation I make from that interview dates from the early 'eighties, when he's already speaking of this as something that he had fixed in his mind. There's a complicated history of "the other" through Jacques' work. So I don't think you can say that the use of "the other" as a key term in cultural studies arises with the development of cultural studies. It's precisely that definition of the other that I claim is caught within the return to sameness. You speak of the other as the other of such and such a culture. For example, in David Lloyd's work on Africa. Iser : I didn't mean it as a criticism, but obviously there is a horizon as a backdrop for delineating the specificity of otherness. Miller : Well, I hope that when people read my work it will become very influential and seem timely! Iser : Well, whenever you thematize one manifestation of the other, it is bound to turn into a backdrop for other manifestations. That strikes me as the underlying pattern of what you are saying, at least in my understanding. Manifestations may function as contextual constraints. Miller : I don't think so... I don't think it works that way. Behler : I also would like to pursue the theme of the other, although the other two points are of equal interest to me. If I understand you correctly, you picked out a whole range of forms of otherness in terms of what confronts us in strangeness, othernesses which can be accommodated, which can be adopted intellectually or emotionally, or appropriated in one way or the other. However, I also find another form of otherness in your paper, that is, an alterity in completion, that is wholly other, that can never be accommodated, that can never be appropriated. You describe it on page ten, and you also talk about it in your remarks this morning, and that is this fear that is almost, you say, "from beyond the world" - it assumes almost metaphysical features. I have the impression that this is an otherness that is not from the outside, that is not a partner of dialogue, that is not a partner at all, but more the inner self, otherness from within. Is my interpretation correct? Miller : No, not exclusively. At the top of page ten, I speak of this as perhaps a feature of my own inner self, but also, as for Marcel [in] Proust, something he encounters in another person. And that's rather different. And that again is different from what I find in Oedipus. Or what I find in Aristotle's Poetics and Rhetoric - the curious role of the irrational in Aristotle's formulations. Nothing irrational should be included - for example, no murders on the stage.
Because there is a kind of relationship in both the Poetics and the Rhetoric to a notion of an irrational other which I don't think you could define as the inner self or as the encounter with another person. To call it death is to give it another name. You might say that those names, for me, are another form of performative catachresis. One way that language works as a speech act is to say, "I name this, I call this death." But to say that is to... Well, I didn't call it Madness, but you might. In other words, in one way, the name doesn't matter, because no name has authority; technically speaking, it's a catachresis. That is to say, you move a name in from somewhere else to cover a kind of blank in cognition. In another way it makes a lot of difference what name you move in. When Stevens writes a great elegy like "The Owl in the Sarcophagus," where the name that's given to the other is "death," that's different, very different from Marcel giving it the name of the impossibility of ever knowing whether or not Albertine is a lesbian. It seems like something you ought to be able to get factual information about... Did she perform lesbian, Gomorrhan acts? That's all he wants to know, or that's what he says he wants to know. Proust's point is that it cannot be known. Once he doesn't love her anymore, then it can be known, the way you can know history if you're outside it. But while he's in love he can't know. And that parallel with history is made repeatedly. Readings : I just wanted to come back to this question of otherness and what I understood Hillis and Wolfgang to be saying to each other. If I understand what you're saying here, there's a fundamental distinction between something like singularity and something like exemplarity: the other you're talking about would be non-exemplary, would be singular, and that's the extent to which there is a radical new start involved in the performative and ethical act of reading, whereas the other in cultural studies would be precisely thematic, because it would be an example, the other would be an example of something and would be susceptible to allegorization, and then through allegorization to cognition. And I think that I like the word singularity because it gets me out of that transcendence/immanence bind, the way I often do that. And it seems to me that there I want to ask you to say a bit more about your third question, because it seems to me that if we accept - and I do, I'm entirely convinced by your argument for the singularity of reading and writing as ethical acts, as performances - then we have to say that all of the claims that have traditionally been made for the humanities or the human sciences, and for the benefits of reading, rest upon a certain exemplarity. The institution exists to lend to the act of reading its exemplarity and its diffusion. That's what, you know, the institution does. One reads... You read Proust in an exemplary way and your students then understand the example. Then you say at the end, to know politics and history, read Proust, and I think you're right. And I think it's very important. I think it's an extremely difficult challenge for us to imagine a nonexemplary discourse of the humanities. Because it's clear to me that what you're saying when you read this passage about Albertine is not "This is an example of how to find out about politics."
So I want to push you to sort of sketch a little more what this kind of singular account of the act of reading would mean in institutional terms, in terms of the kinds of claims we could then make for the humanities. Miller : Very difficult. Mostly what I said in my paper was negative in the sense of saying that the old assumptions of exemplarity no longer work. I was building on a very interesting paper by a colleague in American Studies here, Brooke Thomas (an unpublished paper) on the crisis of representation. He speaks of a problem in the curriculum, the loss of confidence in, in Bill's terms, the exemplarity of particular works. And I was certainly brought up in that old tradition. With a clear conscience, I could read Proust or Dickens or Thomas Hardy as representative of the culture they belong to, and let's say of Victorian or twentieth-century French literature. I mentioned Auerbach's Mimesis as a classic case of the old paradigm, because it so wonderfully persuades you that out of one passage from To the Lighthouse he can give you all of modernism, that all the rest of it is going to be like that. It's belief in synecdoche, in part for whole. If you no longer believe in that, the question then would be, how would you organize a curriculum that made any kind of sense? And I think my problem with the development of cultural studies now is that I think it's still caught in the old paradigm. It makes the same kind of claims. And if they're serious about wanting to change the university, they're not really doing that. They're just developing another discipline which is subject to the same set of presuppositions. I spoke at the very end of my paper (and this would be the only answer that I could probably give you now, Bill) about a university of dissensus, that is to say, one which recognized that singularity you spoke of and in some way institutionalized it. Because I think we've always had and still have the assumption that even if you expand the disciplines so that we have media studies (and they're now developing here a visual studies program --it's an obvious thing to do -which would combine art history and film, and so on, into one subject), that can be somehow assimilated into a whole, a totality that would have some kind of coherence. If you cease to believe in that, the question would be, how would you then construct a curriculum? The only answer I can give is that each of us then is responsible for teaching from year to year things that are not exemplary, but singular. That is to say, if I get interested in Proust, I can't really defend that anymore in the old way. We know what's wrong with Proust -he's a white male, a canonical author. The only thing that we'd have going for him is that he's a homosexual, so I can defend him on those grounds... The same with Henry James. So I can say, well they're both white males, but they're both homosexuals, so that it's a part of queer theory that I'm doing. But I think the point is that there's no justification any longer beyond the choice that I make out of the encounter that I've had with these works in which I want to tell other people what has happened to me when I read these books. And it's marvelous that I have an institutional opportunity to do that. It's a thing I think we forget that we have in Western universities -the privilege, with some limitations of course, to teach anything we want. Because if I decided here next year that I wanted to teach Beowulf, I don't think they could stop me. 
They would be a little puzzled by my change on that. And certainly we have not been stopped in English departments from teaching Hegel, Levinas - I don't have any authority to teach Levinas. Then why do I teach him? Because I've read this, and I found something there, and I want to tell people about it. This is taking place already. This is a free, dissensive university. I teach only what I want. Derrida : Yes, but your students finally make the decision. If they don't come at all (which isn't the case, I'm sure), then you'll stop teaching it. Finally, you have to convince, performatively, the students of your choice. Miller : And that's difficult sometimes. Derrida : You have to convince performatively, that is, produce a situation where... Behler : But we no longer have faculty meetings in the sense of agreeing on a curriculum that has a certain coherence in education. Derrida : I'm sure that in some situations, academic situations, the decisions are made collectively in a meeting, but once someone has the authority to make decisions, as Hillis does... Iser : There's no authority to singularity, is there? Derrida : Well, Hillis has the authority, first to say, well, to say to his colleagues, "I want to teach Proust or..." Miller : Or Beowulf. Derrida : ... and they won't object. If, to the extent that the students don't object, it is a matter of authority, authority - not simply the institutional authority, but the authority that you have built. Miller : I should say (just to follow that for just a second), on the other hand, it's not quite so simple, at least in my department here, in English and Comparative Literature, which is a community of dissensus if there ever was one. Nevertheless, we have department meetings in which we decide on revisions of the curriculum, and this is voted on. A group of people in American Studies now redefine American literature as multilingual, including our Chicana person and people who teach Native American stuff, and so on - an American literature which is not just New England, but really is multilingual. The Comparative Literature people are a second group. The third group is made up of those in English literature. And these are three very different groups, though they overlap. But those English people are now redefining the English program in a much more conservative and traditional way, throwing out all the stuff that's not English, saying, our students... these students, fifty percent of whom are non-Caucasian, must read Samuel Johnson, and so on. I'm not saying that shouldn't be done, but there is a good example, as you say, of authority and power. In this university, by far the largest undergraduate major in literature is in the English Department, a more or less traditional English major. So it's a battleground, but one that to some degree is still fought out in department meetings. Krieger : First I want to say that I couldn't agree more with this whole notion of the new start in every act of reading and in every act of writing, and so on. And the phrase you don't have in the paper, but which you often use, is that "literature makes something happen." This must be central to any claims for the humanities we can make. And I agree with Bill's reading of that too. I want to get back a little to some of what Wolfgang was pressing to begin with, and that is how to keep otherness, as you use it, a non-concept. In the course of the paper, "other" does a number of different jobs for you.
First, the one that seems soundest, in a way, is the ground, or rather the ungrounded ground, for the paper: the deconstructionist notion of "the other of language," which you quote from Jacques. "The other of language," on page five, and then strongly reinforced (with differences, perhaps) but reinforced by your discussion of Paul de Man and his notion of the "otherness within language," the "suspension of meaning" (this is on page 7), the non-monological way in which language works. Let me call it linguistic otherness, or verbal otherness: the inevitable, unavoidable thing that language does as it works. Second, returning to the passage that Ernst mentioned on page 10, the encountering of the inner self with a kind of otherness as well, as the self "may be 'encountered'... as wholly other"; and in the rest of that paragraph otherness is to some extent (forgive me) being thematized into what, for lack of a better word, let me call an existential concept of other. And third, beyond that, in a number of places in the paper (see, for example, page 6), I find the "racial, class, gendered, national other." You cite Lyotard's notion of heterogeneity. This third "other" seems to be, for lack of a better word, let's call it a political other. And I'm wondering, as "other" functions on these three different levels (maybe that's what Wolfgang meant when he suggested that the fractals were sort of replicating one another on different levels and in different ways), I'm wondering, or rather worrying a little, about whether, with "other" operating in these ways, the word is the same word? Is it really the same word in every case? Is the linguistic, verbal "other," the other that shows itself in the very way in which language, the words, work - always forcing us to worry, if we read well enough, about what is not there, the translation that insists upon telling us why it's untranslatable, and so on - is that "other," mutatis mutandis, really the same? In what way can we use the word "other" here as we use the word "other" when we refer to our racial, gendered, ethnic others? The "other" there seems to have more substance, if you will, more - I almost said the word "reality" - but in a way, yes, almost that for us. And when I read the bottom of page 10 - "Perhaps the wholly other may be a racial, national, class, or gender other that is truly other" - then the word "truly" makes me very, very nervous, metaphysically or essentialistically worried about what is a non-truly other if something can be truly other? Aren't we awfully close to a concept of the other? And doesn't the other, as it functions on these several levels - undifferentiatedly, if you will (that is, the linguistic other, the existential other, or the political other) - doesn't it seem suspiciously like a universal? I just have one other small question, which is not related to this at all. And this is perhaps a theoretical question, perhaps a tactical - maybe a political - question. And that is, when, toward the bottom of page twelve, in talking about choosing between Moby Dick and Uncle Tom's Cabin, you claim that the choice of one rather than the other is the "result of a motivated and unjustifiable choice." The word "unjustifiable" is a strong one. Let's hold that for a moment. Then in the very next sentence: "Nor can there any longer be a recourse to some standard of intrinsic superiority allowing us to say that Moby Dick is a better work than Uncle Tom's Cabin, since that standard too is the result of ideological bias."
Of course my instinct is to say, is that not giving too much away? But in the context of the paper I want to ask, is this last statement not an ideological statement? And your "is" in it is a very strong, very strong verb there: "since that standard too is the result of an ideological bias." It's a conclusion you could have reached only by having a certain kind of ideological critique of ideology. But as I say, that would be the theoretical question. The political question is, is there no conceivable way of saying Moby Dick is more worth talking about as text than Uncle Tom's Cabin, except an ideological way? Miller : It's your fault. Because you belong to a certain class and race and all the rest of it. The trouble with ideology critique is that it doesn't free you from ideology. Krieger : But that kills your "new start" argument. If you believe that, then the new start argument is out the window. Miller : I don't see why. I don't see how that has anything at all to do with it. Krieger : Because every work is a reconfirmation of the ideology you have going into it. In that case... Miller : No, no, no. I said ideology... no, no. I said that any attempt to establish cognitive principles for saying Moby Dick... It has nothing at all to do with the performative new start. It has to do with the choice of books to read. That wouldn't keep me from choosing to prefer to teach Moby Dick to Uncle Tom's Cabin. All I was saying was that I can't justify that a priori by saying that it's absolutely a better work. Krieger : I'm not saying absolutely. I'm saying, is there nothing between saying it absolutely and not being able to say it at all? Miller : No, no, I don't think there is. Miller : Do you want to go on on that before we go back to the other? Bill, did you have something to say? Readings : I would just intervene there with the canon and choice. I think that the only thing I have difficulty with is when I think you conflate England and America a little too quickly. You say it's very difficult for English people to read this, but the ethnic grounds for the canon in England, France, or Germany are different from those in America, and this is why the canon debate is a specifically American debate, because the canon is ultimately the object of republican choice. The Norton Anthology is like American law. We can imagine that it has no ethnic content to it whatsoever; it is simply the republican will of the rational and democratic choice of an ethnic tradition by a people that actually is not vitally linked to that tradition. Whereas of course, there is no choice about Shakespeare in England or about Milton in England, because the functioning of the notion of the ethnic tradition is very different. Krieger : As a matter of fact, we could add to that, Bill, that I think someone like Stanley Fish would probably be more offended by this sentence than others of us might be, because for Stanley, it is probably justifiable, given communities of interpretation and the rest. Readings : Exactly. Krieger : ... which does not necessarily make it ideological. Readings : But Stanley believes in a different kind of choice than Hillis. When you have a problem with Hillis on that statement of that ideological bias and the question of whether the new start argument goes out the window, you have to distinguish again between two... Krieger : I was saying, one is theoretical and one's political.
Readings : But that's the weight of Hillis's use of the word "ethical," as I understand it: his choice to teach Beowulf next month because that's what interests him, in some sense his demand that that is a singular act of choice and a new start for Beowulf, that there is a performative quality to his choice, is very distinct. That's ethical choice, as opposed to the claim that the choice is in some way authoritative, exemplary, or representative. So the representative claim isn't the only one that you can make for the canon in America. I mean, that's the distinction. John Guillory, in his book Cultural Capital, goes through that quite well. I mean, you can make a case in America that the canon does not have to be representative. It simply has to be the choice of the institution. Miller : Legislated. Readings : Legislated, yes. So those two are available, whereas in Britain or in Germany, and in France despite its supposedly republican status, there is a representative necessity that's different. But I think that it involves completely rethinking... It seems to me that Hillis's argument involves completely rethinking the question of what it means to choose one text over another. Readings : And it involves thinking absolutely without alibis, and that's why I would call it ethically responsible: you have no reason to give that will absolve you from political responsibility for having chosen to teach Moby Dick this month instead of Uncle Tom's Cabin. And indeed it is a discut-able... discussable question. Someone can raise their hand and say, "Why are we reading this and not that?" And in some sense, you're not allowing yourself the statement, in advance, either "Because that is not part of the canon," "because that is not good literature," or "because that is not recognized by the institution." Ultimately you're saying, "Because I say so." And that's... to return to what Jacques said about how you have to convince the students. But you have to take responsibility for that act of convincing the students about your reading, rather than invoking some alibi. And I think that's a very ethical stance. It's also an enormous amount of work for any teacher, it seems to me, practically. It's not a time-saving response. Krieger : It ought to be looked upon as totally arbitrary, of course, and authoritative, in the sense that people want to take Hillis Miller, and anything Hillis talks about, thinking, I want to be there. And so the choice... Derrida : In that case, it couldn't mean simply, well, because Hillis Miller has such and such a reputation, because implicitly, when he makes his choice, implicitly for him and implicitly for the students, there is a possible hidden political discourse justifying the choice. Even if you don't thematize everything, you could... I think Hillis could, we could explain why the choice of Moby Dick is better, not only literarily, but politically better. Given some time, I could show you that it's not simply a matter of taste or a matter of literary preference. I could try to demonstrate that it is politically more efficient if you leave me the possibility of teaching Moby Dick the way I want to teach it. Krieger : Jacques, if that's the case, then my choice of Moby Dick might not be the result of my having elitist, defensive ways of protecting the proper American tradition of the academy.
Derrida : In that case... which means that convincing the students means that we use not simply our supposed authority, but the supposed capacity that we have in principle demonstrated, to lead the students to read this and that in such a way that they are convinced that it's politically more important to read Moby Dick than x or y. In a certain context. I wouldn't say that in any context I would prefer Moby Dick. In some contexts. And then I would like to be free to evaluate the context. In some contexts, perhaps I would say, well, Uncle Tom's Cabin would be more appropriate. It depends on the... To come back to this point, the point you made about otherness, thematization, and horizon: Hillis told you that the other is unthematizable, the other in its pure alterity. It's precisely what is unthematizable. Although you may thematize, you thematize always the unthematizable. But it's unthematizable. I agree with a number of points made by Hillis. I agree with this, but I would add (and this is more difficult to show) that finally, if there is such a thing as pure alterity, pure otherness, it's not only unthematizable. It is something which undermines the opposition between theme and horizon. The alterity of the other is something for which you have no horizon, that is, it cannot be... The horizon structure, the way it functions in phenomenology, in Heidegger: you need the concept of horizon to have something on this background of the horizon. And I would claim that the other appears without appearing as such, never appears as such. It appears without appearing as such when you cannot anticipate it, him, or her anymore, when there is no horizon. It comes from no horizon. Not only can you not predict the other as such; you cannot anticipate the other, you cannot foresee the other... Where the other happens, so to speak, there is no horizon. No theme, no horizon. So this pair of concepts is precisely what is put into question by the possibility of the other coming, I would say, the event, the singular event of the other as such. If... (And of course, I say if there is, if there is such an other - I say "if" because this otherness cannot become the object of a cognitive statement, of a determining, judgmental statement. The relationship to it is only a possibility, an act of faith, so to speak, something which belongs to the determining community of judgment.) If there is ("if" there is) such an other, then there is no theme, no horizon. And then the question arises (and this question is for Hillis) whether the fractal structure, this repetition with a difference inside, is, let's say, capable of this... has something to do with it, is a good representation of this other. I just ask: is the self-similarity within the fractal structure commensurable with this otherness? Krieger : One problem, Jacques, with the fractal metaphor in this case, is that there are certain determining characteristics in the fractal, despite chaos theory... The chaos theory operates within a whole series of determinisms that insist on the absolute. Krieger : But at the same time it insists on the absolute homology among the levels, which is almost a structuralism, though always with an open end. Derrida : Nevertheless, it's, I agree, a necessary question. But at the same time, I understand that in this mise-en-abyme - fractal, mise-en-abyme - of course there is some, let's say, fragile but radical otherness, within the same, within the same. It's there, it's there. So... I don't know what to do with this.
Miller : It's different from the mise en abyme. And then my first answer would be to say no, that the fractal image is no more than a name like these other names, and a dangerous one. For one thing, fractals are not language. You're talking about language and the structure of language... Derrida : But you're using that as a model for... Miller : Right. Well, and other structures which are around us all the time, like trees... I find fractals fascinating, but I agree with you. I'm not quite sure whether they fit my non-concept of the other. They do a shoreline. The Maine coast is a fractal coast, because the outline of the whole state is repeated on smaller and smaller and smaller structures, but repeated with differences. This book (by a Dutch mathematician, Hans Lauwerier) specifies the difference between fractals that are absolutely regular, where the smaller stage is just like the larger ones, and other fractals which have an element of unpredictability. That is to say, the next layer down is going to be something like the one above, but you can't tell ahead of time in what way it will deviate. And that makes it different, really, from the mise en abyme, where the next repetition is just smaller. Nevertheless, I'm uneasy about that parallel too. Like all analogies it falsifies. I have used another scientific parallel which has no more authority than the fractal one, and that's the black hole. What's interesting about the black hole is that it gives me what you just very elegantly formulated. You say, "the other, if there is such a thing." Because it's not possible for it to be the object of cognition, you can't ever say there is such a thing. All you can say is, "if there is such a thing." And that's also true of black holes. Over and over again they say, well, the existence of black holes hasn't been proved. We don't know whether there are black holes or not. Why? Because you can have no direct evidence for them. Nothing comes back out of a black hole. You can only infer their existence. Krieger : But one crucial difference has made me worry about the black hole analogy: the concept of the other is so powerful, and has swept us up into being concerned about it, because we meet it every day. It is a fact of experience. I mean, insofar as there's a fact, it is a fact of experience. As we conceive our experience, we know there is otherness there, whereas black holes are totally speculative. Yu : It's a necessity of theory. Krieger : It's a necessity of theory and not of experience, yes. Yu : I think theory demands it. I mean, the theory demands that the black hole exist. Miller : Yes, well, it's not just the theory. It's the observed celestial phenomena, observed just as much as anything else. But I agree that both of these images, both the fractal, which I didn't put in the paper, and the black hole... Iser : Well, in Gregory Bateson's words, the black hole - or the black box, as he prefers to call it - reflects a point at which we are "to stop trying to explain things." Miller : The parallel here is the claim that, in spite of the fact that one has to say "the other, if it exists," that doesn't mean, as Murray is saying, that one doesn't need the hypothesis of such a thing, if only to get on with the work of reason. Derrida : ... If there is this danger of homogeneity within the fractal structure, repetition of same, sameness, then what would you do with the crypt? That is, of course there is an insertion of something into something, inscription, within the fractal.
Is it the same structure as the incrypting of the dead other, with all the work of mourning? So the absolute other adds death, and of course the problem of death is at the center of your paper. And the work of mourning: is it the same inclusion, the same structure of inclusion in both cases, or should we distinguish, and by what, between the two structures of inclusion? If indeed the singularity of the other is what we agreed it is a moment ago, then it might be difficult to push the analogy too far between the fractal and the incrypting. This leads me to another point. Of course Murray reminded us a moment ago that we use the term "other" every day, especially when we speak, whether in cultural studies or in political discourse, of the "other" as a nation, for example. I think that the fact that we refer to this absolute other, different from these determined others... doesn't prevent us from trying to analyze what's going on in several of them, what you would call exegetic frames. In such a moment, we determine this concept of otherness. In such and such a way it becomes predominant, and so on and so forth. But these are the backgrounds, so to speak, the groundless background of this reference to this absolute other. But we could do the two things at the same time. One doesn't exclude the other: we can pay attention to the absolute other and to the historical determinations of the discourses on the other in the academy, in the political space, etc. Now, my second point, Hillis - and I agree with you about the reason why we study literature, why we should continue to study literature. And I would argue that there are a number of good reasons, good political reasons, to continue. First, the machinery, all the media technology of today, is something absolutely, utterly primitive, however sophisticated it is, absolutely primitive compared to literary works. So if we want to teach the complexity of the semiotic systems - that's one reason among others - if we want to teach the maximum complexity of the speech act or techno-communication, teletechno-communication, we know that the good models, the best models, are in literature. We can show that. That's one reason. The other reason would be directly political. That is, I could justify that teaching reading first, teaching literature, is politically justifiable, that reading Beowulf or Proust is politically... I won't say politically correct, but politically useful - depending on the way we teach it, depending on the way you read it, and so on. But there is no reason why it shouldn't be politically efficient. One reason among others, other political reasons, is the fact that it teaches the memory of language. And if, on the side of, let's say, ethnic or national minorities, we or they want to cultivate the memory of the non-artificial language, the memory of the natural language, then literature is the best place to identify and to cultivate and to grow this memory. Of course, there is a performativity to this act. Now a last point, which has to do with your paper, the written paper, not the oral presentation you gave. At the end of the paper, when you advocate an institutionalization of the dissensus in the university, I am afraid you're too optimistic about it. Because at some point, you... That's Gerry Graff's main point. That is, we should institutionalize the conflict, the dissensus. And I'm not sure it's possible, first, and then I'm not sure it's desirable.
Because if the dissensus is clearly a dissensus, a dissensus between different interpretations within writings, singularities, then to institutionalize the dissensus is a way of reducing the sharpness of the discussions. And I'm sure the people that you've tried to gather in your agreement, all these people, would probably have some trouble agreeing with one another. Miller : Yes, they would. Derrida : Let's have a space where we can argue and where the conversation continues. We won't kill one another... We'll speak. We'll continue to speak. And I must confess, although my agreement with Habermas is very limited, I agree with him that that's what we are doing. And the discussion we have on the discussion, on the disagreement, is part of what's going on. Krieger : But if we institutionalize the entire university to do this, then we're in danger of creating what Lyotard would call a master narrative. Derrida : You cannot, you cannot, let's say, have rules, have a constitution for that. You cannot have a charter for that. But apparently that's what happened. But equally it affects what happens. We'll have to say, well, in fact there is a conflict of forces, and we, as polite discussants, polite advocators, we represent a certain force, a certain power of the field, and for us or for determined interests, we behave that way, we are politely discussing. But we know that it's only a small part of the structure, that it's impossible to imagine an institutionalization of the dissensus. But it's in public, purely in public. But that's what democracy's supposed to be - they disagree, they vote, and it continues. So it's a question about which concept of democracy. Iser : Hillis, would you allow two more people to just make a statement before you reply, or would you want to reply? Miller : Nope. I don't need to reply. Iser : And I'll refrain, Jacques, from replying to you with regard to theme and horizon. Miller : I still have to say something about something Murray said a long time ago. But go ahead, Ludwig. Pfeiffer : I find myself in a slightly strange situation, though, because I can't follow, I think I can't follow the urge to posit that absolute other, and I still don't see where the urge comes from. It cannot be a conceptual urge, because I think conceptually we are more or less in Wolfgang Iser's direction. We have to be within the theme/horizon semantics. That would also explain the facts of experience Murray's talking about. If we talk about the experience of the other as a fact of experience, we are talking within the theme/horizon thing anyway, as soon as we translate that into experience. So you say, Hillis, it's a non-concept. Yes, but where does the urge come from to posit it there? That's basically my question, maybe to both of you. Miller : That was going to be my answer to you. You're right about the heterogeneity in my list of "others." That was intentional, because I wanted to show that you can by no means, out of these theorists, put together some kind of coherent notion of the other. They all differ from one another. Derrida : The other is the interruption. Miller : But that interruption can occur... In reading is one way. It's our professional way, but only one way. And my novels give me a report of the other... it's a verbal report, but what I'm being told about here doesn't depend on language except in some secondary kind of way. So I don't think... Krieger : But is the verbal "other" of Paul de Man, in the quotes you have from him, the same word?
Miller : That would be an example of this diversity I was talking about, because de Man would have been very uneasy with my terminology. In the same way, I think, Jacques would not use the term "material" in a way that would be at all like de Man. So if we say these are really important notions in these two theorists, it's a place not where they come together, but where to some degree they diverge. And as you say, the end point in a really serious reading of de Man is to try to figure out what he meant by "the materiality of language." That's a very funny use of the word "materiality." And I was, for my own purposes, saying, he wouldn't have liked calling it "the other." The fact that he was interested primarily in language doesn't mean that I have to be interested exclusively in language. That's an important way... Krieger : It's just that the way in which the self/other of language functions, with the non-concept element there, and the element of untranslatability, of the mise en abyme that doesn't let you reach the core because there always is that other - I'm saying that you will have to demonstrate that that way of operation is analogous to, or similar to, the kind of operation you're describing in the set-up of the relationships within cultural identities. And the reason I say that is that I do believe the power of the other in your paper is derived essentially from the way in which language functions in a way that doesn't permit essentializing. I'm afraid that when you get to the cultural identity issue, or the gender identity or any other identity issue, and the confrontation of otherness, it's not so easy to keep the other from becoming identified conceptually. Miller : But I think there's a contrary danger in language. That is to say that with the reading of poetry there's an equally dangerous mystification in which, in spite of everything, we speak of this other as somehow something that's linguistic, not as the other of language at all. So it helps, I think, to make sure that language is not the only form of this interruption, and to insist on that. Krieger : And you would say the word "other" is appropriately used as the same word in these several instances. Miller : Sure, sure, sure. Marcel didn't need language to have problems with Albertine. That was a bodily, to some degree non-linguistic... If the other is really other, it's really other... it's so much other than language that to link it essentially to language would be a mystification. Krieger : And "the truly other," what's the force of "truly"? Miller : Jacques, a few minutes ago, said something I think is very useful and helpful here. He said that the notion of the absolute other does not in any way preclude other forms of otherness which are historical, et cetera, et cetera. And I would go beyond what he said to say that they always involve one another, that you can't really have one without the other. So that when I say there are two notions of otherness, one of which is some kind of return to the same, and the other the notion of the absolutely other, I think they always, in any particular case, involve one another, that you can't really have one without the other, and that the danger is that one will always turn into the other. So they're not the normal kind of "either/or," but are related in a different kind of way. Pfeiffer : Let me pursue this uninformed guess I was making.
In spite of what you said, Murray, that reading literature makes, let's say, the world dance before your eyes, or whatever, and Jacques' sense of the high complexity which we don't get in any other media, one might still say... it's not an objection, but one might still describe this as a kind of control or, relatively speaking, intelligible complexity we are dealing with there, even in the most complicated puns or whatever... Krieger : "Intelligible" is a hard word there. Do you mean "intelligible" that we can read it, or "intelligible" as knowable by the mind? Pfeiffer : But still I think we are somehow under the spell of the way Hegel described the workings of language as making you believe that you see what's going on. But literature is the only art which creates the illusion that it can treat everything, and that it can present everything in some kind of palatable (to get away from "intelligible") shape. But it is possible, if we do not see, let's say, the situation with the other media in that contrast, complex and primitive -of course it sometimes is true -yet, if we do not see it exclusively in that way, we may come up with, instead of the difference between the totally other and appropriation, we may come up with, not the frame or theme or horizon, but, let's say, levels of knowability, levels of intelligibility, levels of nonsense, levels of indifference; we would always stay on both sides, inside the alternative of appropriation and otherness. And in terms of cultural experience, I'm not quite sure. I mean, this may be just an illusion too, but you come to a foreign culture, and on the one hand it seems totally other. On the other hand, you may have experience of the opposite kind too. What do you make of it then? So that the notion of the totally other seems to me to be provisional too, and one would have to see in each case where the urge for it comes from -not the conceptual legitimacy, but the urge for it. Miller : I think the urge, the real urge, is from the other, at least for me, and I would think historically. Pfeiffer : What you mentioned in your paper. Miller : Yes. That's it. Because it's not too pleasant. It upsets things. Another (as I say, I'm a humble teacher of works of literature) another very striking example is the wonderful novel by Henry James called The Wings of the Dove. This is a novel about someone who makes an agreement with his fiancée to pretend to make love to a woman who's very wealthy, who's dying, Milly Theale. His guilt is the result of a quasi-performative act... He simply doesn't say to her, "I'm only pretending to court you." The result of this is that after her death (death is fundamental in this novel; it's a novel about death), she wills him all of her money, which he refuses to accept. Nevertheless, he is in love with her, with her memory. She's done the one thing that will separate him from his fiancée, namely to leave the money and to leave a letter, which he doesn't even have to read, so it doesn't depend on language. The letter is burned. So it's a case where the destructive effect of this otherness is dramatized in a wonderful novel. That novel is very hard to face, very hard to accept... And I would say that most of the interpretations of it attempt to escape recognizing what it's really about. And there's no doubt that a tremendous amount of literary criticism (including, I'm sure, my own, lots of my own) is precisely an attempt to refuse, cover over, explain away, make intelligible, so that you don't have to worry about it anymore.
Derrida : This is a machinery where, let's say, the undecidability (to go quickly) is far more complex than any technology yet devised. Of course, we are, I am totally incompetent, I don't know how to make a computer work. Okay. Nevertheless, I know it's much simpler (if I knew it), much simpler than the structure of this novel or what is implied in this novel. Just a detail: the fact that the letter is burned doesn't mean that it's not a problem of language. Miller : That's right. That's just the point, that it's efficacious, it's absolutely efficacious even though it's not read. But you're right, it's paradoxically a matter of language. They know what it says. But that's, I think, James's point: having it burned and unread in no way takes away its effectiveness. Derrida : That's the argument... Lacan's argument, when he says that the fact that we don't know what was in the letter meant that it was only a matter of the signifier, not the signified. They knew what was in the letter, they knew without reading it, they knew what the contents were supposed to be, like in... Miller : ... in The Wings of the Dove. Exactly. Birus : I'd like to come back to our debate on page twelve, page thirteen. Let me say, all we debate here is just anticipated, not by Goethe, but today by Kant, I would say. Aren't we here on a ground that is conceptualized by Kant's Critique of Judgment, that all aesthetic judgments are singular judgments, but that they have the claim of universality in the respect that we think all should applaud this judgment. We can (and now, your words), we can motivate our judgment. We can say, well, because it's well structured, because this and this and this, but we cannot (that's Kant's term) we cannot necessitate it, make it be necessary. So I think the ethical problem, or the political problem (as you called it), is... well, it's similar to the question Kant debates on having secondary interests in the beauty. And that would mean it is an ethical decision or a political decision to give space for aesthetical judgment and for aesthetical arguments, to choose this model and not that. Maybe there is, in democracy and in this university as a democratic institution, maybe there can be an argument that deals in that way: this is not the time to give aesthetical arguments the first place. Now we have to deal with other problems -of minorities, for example. But if you give a place, if you give space for such aesthetical judgments, I think then we are in a situation where, well, we have motivated judgments, and they are unjustifiable. You cannot necessitate it. But on the other hand, it's not only ideological bias. That would be a way to necessitate your judgment, to say, well, it's because and because and because. But an aesthetical judgment has no such cause. You cannot give a reason, a sufficient reason for this... Krieger : It's exactly this point, I think: the notion in the aesthetic that there is something associated with disinterestedness. Hillis's point (representing many, many other persons who today would make the same point) is that there's no such thing as a disinterested judgment. It is always the result of interest, and the interest is what he's calling ideological. And therefore he is, in effect, as many others are today, ruling out the very possibility of our being able to make an aesthetic judgment, or to claim the kind of disinterest required in order to do it.
Birus : I understood it in a different way, in the sense that the aesthetical argument does not, automatically and by itself, have the last vote. It has to be argued. Is that, in such a curriculum or in educational programs, the highest value? And I think that has to be debated. Whence the quarrel between cultural studies and more traditional literary studies. Krieger : The question is whether it's ruled out. Birus : But if you accept this aesthetical space, then I think we are in a field where we can debate, like yesterday, on what is of higher value, literature that can be fully translated, or literature that always recreates untranslatability? But this is an aesthetical question and, for instance, Kierkegaard can say the aesthetical stage is a minor stage compared with the ethical or the religious stage. Krieger : The aesthetic is ideology free. Readings : But I don't think that... Birus : But there is an aesthetic ideology. Krieger : Oh, yes. Readings : If you just focus on disinterest, you miss the other side of Kant's point, which is that the singularity of the aesthetic judgment is asserted as if it were universally valid, by appeal to a sensus communis that is not anthropological or comparative or empirical in any way. That access to the possibility of universality, which functions for Kant like a kind of... almost like a pocketbook... Derrida : Beauty as symbol of morality, beauty as nonconceptual universality. Krieger : Yes. Readings : The question is, of course, whether there is such a space at the university. Birus : Or should be. Miller : No, the appeal to the aesthetic, partly because of the history of what happened to that concept of the aesthetic later on in the nineteenth and twentieth century, makes me very uneasy -the appeal to the aesthetic as a bridge between the ethical and the cognitive, practical reason and pure reason. So that the sentence in Proust that follows the one I read made me very uneasy. It is a place, a characteristic place, where he's been talking about how if you were a master of psychology, and so on, you'd be able to understand politics... If you're not, you can "only make futile pronouncements." But if he is a master of the psychology of individuals, then "these colossal masses of conglomerated individuals will assume, in his eyes, as they confront one another," -and then what does he say? -"a beauty more potent than that of the struggle which arises from mere conflict between two characters." So it's a place, in this passage, where the political and the individual vanish in what I would think of as the aesthetic side of Proust, a mere admiration for the beauty of mass conflict. He's talking about the First World War. The First World War is more beautiful than my troubles with Albertine because it's bigger. And not because it's more important. And that made me profoundly uneasy, that aspect of Proust... Well, it's complicated in Proust because he turned against Ruskin, as everybody knows, precisely because he saw Ruskin as an aestheticizer. Wang : After reading your paper, I'm especially interested in what you say about the role of the English department or the university in modern society. And I thought what you said is that literature is so complicated that it's more complicated than even what we generally call science and technology today. But then Ludwig says it's more complicated than the media, or something like that. Did you...? Pfeiffer : That's what Jacques' point was, yes. Wang : You mean movies?
Derrida : I was referring to the technology of the media. Not the words, not the words. Not the films, no, no, no, no. Wang : When I was teaching at the Hong Kong University of Science and Technology, for a whole year as the only Professor of Literature there, I had to convince my colleagues and students, all science and business majors, that literature anticipates many things they are or will be doing. Then, for example, I would cite Ch'ii Yuan, who in his poetry flies through space. He wrote about the "experience" in the fourth century B.C., his flight to the unknown, and it's very, very exciting. Then, I also realized that they could fly, but they never knew that actually some miles above the ground it was forty degrees below zero. Of course our mind for literary creation is great, but there is something else. This is almost like a challenge for myself. And with that then, there are problems in Paradise Lost, which also involves descriptions of flying down and up in the day. Miller : Troubled by the great poem that I defend, right. Behler : I want to respond briefly to Hendrik. I agree with you about the desirability of the aesthetic realm and the autonomy of the aesthetic, but that's precisely the issue in today's university debate, because this tradition has received a very bad name. It's considered to be a realm free of politics, and there's something... Yu : What your model suggests does not preclude any change of the status quo. It doesn't mandate any particular change in the curriculum whatsoever, from the aesthetic to the political standards. We can just, everybody can just teach as before. Cultural studies is not going to... Miller : That is true. It's conservative from that point of view. And obviously my strong motivation (I said that) was to feel that I can teach Proust with a clear conscience. Yu : Right. Miller : On the other hand, it doesn't allow me to say my colleagues cannot do what they are doing... Yu : Absolutely. Miller : ... which is to teach very different, non-canonical works. It doesn't give me any authority to say, "This is wrong, and we need to go back..." It doesn't really justify what our colleagues in the English Department are doing now, cleaning up the English curriculum because they consider English literature to contain values, and so on, that everybody ought to be taught. Yu : Nor does it reformulate the canon. Miller : No. Krieger : You're probably giving them a bad rap, because in the last meeting we had a different discussion. (I don't know whether you were there or not.) Again and again and again, the phrase "English literature" was replaced by the phrase "literature in English." Miller : Right. Krieger : Specifically, it's to allow literature from any place that happens to use this language. Miller : No, I would see that as a... Proust would be a "no," but Australia... Krieger : It was looked upon as a liberalizing. That was my point. Miller : Right. No, no, I think that's certainly an opening up. Iser : There is no need for a summary. Perhaps only one question remains. Miller : Yes? Iser : If otherness defies thematization, why do we keep naming it? Is naming a form of translatability, or better still an iteration of translatability? Derrida : Why do we have to stop? Miller : We have to stop. Notice that my title is "Humanistic Discourse and the Others." There's a plural. Why is there a plural on that? It's very awkward.
But it seems to me that the word "other" begs all sorts of questions, because it's almost impossible not to think of it then as somehow single, unitary. And even to personify it. So that if you use the plural (which I've tried to do, and it doesn't really work), you're trying to break that down. Because the "other," in the singular, means both at the same time the absolutely other and somehow the other person. That's Levinas' problem, "others" is a singular plural or a plural singular. Derrida : Speaking of translation, if I may quote a sentence you quoted by me, "Tout autre est tout autre", is absolutely untranslatable, absolutely untranslatable... Krieger : So then translate it. Derrida : Absolute other. Krieger : The absolute other sounds so much like a Platonic universal, and all the others we meet, whether they be racial, ethnic, gendered, and so on and so on, sound like representations of the universal other. Miller : That's the problem.
17,374
1996-01-01T00:00:00.000
[ "Philosophy" ]
Automated Defect Detection and Visualization for the Robotic Airport Runway Inspection Detection of both surface and subsurface defects is a vital task for maintaining the structural health and reliability of airport runways. We report the automated data collection and analysis for airport runways based on our novel robotic system, which employs a camera and a GPR (Ground Penetrating Radar) to inspect the surface and subsurface conditions, respectively. To perform the automated data analysis, we propose a novel crack detection algorithm based on the images, and a subsurface defect detection method with GPR data. Additionally, to create a composite global view of a large airport runway span, a camera/GPR data sequence from the robot is aligned accurately to create a continuous mosaic for visualization. We combine these algorithms into a software suite to perform automated on-site analysis. We have put our robot and software into engineering practice at over 20 airports in China, achieving F1-measures of 70% and 67% for crack detection and subsurface defect detection, respectively. More importantly, the results of our algorithms satisfy the requirements of these applications. I. INTRODUCTION Structural defect inspection is an essential task for monitoring the health and reliability of airport runways. Early detection of defects, such as surface cracks and subsurface voids, is an important maintenance task for airport runways. By the end of 2019, there were 238 civil transport airports in China, with nearly 300 runways. Structural defects exist in many airport runways, leading to costly maintenance and even safety risks if serious defects are not detected in their early stages. Traditional manual defect detection methods are labor-intensive, time-consuming, low in accuracy, and error-prone. Even slight defects may be early warning signs of significant failure for airport runways, and need to be detected accurately and in a timely manner. Thus, it is necessary to develop an automated defect detection method for airport runway inspection. We have developed an airport runway inspection robot (ARIR), with a camera and a GPR (Ground Penetrating Radar) fixed to it to perform the condition sensing task. This paper focuses on addressing the problem of analyzing the collected data accurately and automatically. Specifically, we propose a robust image-based crack detection algorithm, and a deep learning based subsurface defect detection method for GPR data. In addition, to create a composite global view of a large airport runway span, a camera/GPR data stitching algorithm is presented to create a continuous mosaic for visualization. The rest of the paper is organized as follows: the related work is summarized in Section II before we introduce our sensing suite configuration and data collection scheme in Section III. The proposed algorithms are described in detail in Section IV. The performance of our algorithms is validated in experiments in Section V, and the paper is concluded in Section VI. II. RELATED WORK In recent years, research on structural inspection using robotic systems has attracted much attention and resulted in several prototypes, such as the bridge deck inspection robot [1], [2], and the robot for tunnel structure health inspection [3]. However, the scenario of airport runway inspection is quite different and more challenging. We develop a robotic system for airport runway inspection.
In order to implement the automated data analysis for airport runway inspection, there are three challenging problems that need to be addressed: large scale image stitching, crack detection from images, and defect detection from GPR data. General image stitching techniques have been well studied [4]-[6] and even put into commercial use. The homography model is adopted in these methods, and point features in the images are used to estimate the homography model [7], [8]. However, the image stitching problem in airport runway inspection is significantly different in two respects: On the one hand, the surveyed region is very large, involving a huge number of images to be stitched. On the other hand, images of airport runways usually lack features, leading to occasional but inevitable failures of feature-based homography estimation. Existing image-based crack detection methods can be mainly classified into two categories: standard image processing methods, and machine learning based techniques. Standard image processing methods, including intensity thresholding [9], edge detection [10], and morphological filtering [11], [12], have been widely studied for automated crack detection. However, the performance of these methods is usually dependent upon parameter choices [13], which are very difficult to make for field images with significant visual clutter, leading to unreliable detection results in airport runway inspection applications. Machine learning based crack detection methods build on techniques such as support vector machines [14], random forest [15], random structured forest [16], and neural networks [17]. Machine learning based methods obtain more robust performance compared with image processing techniques. However, a supervised training stage is needed, which requires a large amount of labeled data, a difficult requirement for airport runway applications since sample images with cracks are rare. Our proposed crack detection algorithm combines the advantage of image processing methods, which need few samples, with the robustness of machine learning based techniques, achieving a satisfying performance on airport runway data. Recognizing subsurface objects automatically from GPR data is nontrivial, because a GPR provides not 3D positions but a reflection image full of significant signal clutter. Thus, object detection from GPR data in an automated manner is still a challenging problem. Standard signal processing methods, such as template matching [18], the S-transform [19], and the wavelet transform [20], have been used in GPR data analysis. However, these methods are sensitive to noise, and so cannot be employed directly in subsurface defect detection. Machine learning based methods, especially CNN-based deep learning techniques [21], [22], have obtained much attention in GPR data analysis [23], [24]. Although these CNN-based techniques have achieved certain positive results, they still cannot satisfy the requirements of field applications. One of the main reasons is that only 2D B-scan images are employed, which ignores the natural 3D property of subsurface defects. Different from all of the above-mentioned works, our paper focuses on the analysis of airport runway data collected by our ARIR system integrated with a camera and a GPR. The developed data analysis algorithms allow the robot to detect both surface and subsurface defects robustly, and to build a large scale image of the surveyed airport runway for high-efficiency assessment.
III. SENSOR CONFIGURATION AND DATA COLLECTION This section presents the sensor configuration of our ARIR system and its data collection scheme. The sensor setup of the ARIR system is shown in Fig. 1; it consists of one Raptor GPR with 900 MHz antennas, and one DALS Nano M1920 camera with a resolution of 1920 × 1200. To conduct defect inspection, the robot navigates within a predefined surveyed region on the airport runway to collect images and GPR data, as shown in Fig. 2. The robot first moves from the start to point A, then follows a linear path along each scan. The sensing suite can cover 1 meter of width in each scan. Once the robot finishes the current scan, it moves to the next scan, until the entire surveyed region is completely covered. At the end of each scan, the robot first moves to the turn point along an omni path, and then moves to the next scan line, to ensure that no region is missed. While scanning, the robot simultaneously transfers the image and GPR data to the nearby data analysis center in a van using a 4G/5G connection. The collected data is then analyzed automatically off-line, as presented in the following section. IV. THE PROPOSED ALGORITHMS A. LARGE SCALE IMAGE STITCHING FOR VISUALIZATION There are two major challenges in our large scale image stitching. One is the large number of images, which easily causes error accumulation; the other is the lack of matched features between some adjacent images. To handle these problems, a two-stage image stitching algorithm is proposed. In the first stage, we align all images according to their recorded GPS positions. Then, in the second stage, we perform an adjustment under a global optimization using the available matched point features. 1) STAGE 1: POSITION BASED INITIAL STITCHING Our robot synchronizes the RTK GPS and the camera, so we can record a global position for each image. Generally, the localization error of the GPS while working on the open airport runway is about 5 cm. Since we fix the camera facing downward to the ground, the image plane is approximately parallel to the horizontal airport runway plane. We build a world coordinate system {W} whose X−Y plane is horizontal. Then we align each image in the X−Y plane of {W} to form the initial stitching. Denote the resolution of each image as f_x × f_y pixels; each image covers a region of w_x × w_y meters in the X−Y plane of {W}. Due to the GPS localization error, the initial stitching result needs to be optimized in the following stage. 2) STAGE 2: FEATURE BASED REFINEMENT In this stage, we first extract the matched point features between adjacent overlapped images. Then we define a cost function to perform a global refinement. Denote I_a, a = 1, . . . , N_I, as the a-th image, with N_I being the total number of images. We define I_a and I_b to be neighbors if their overlapping ratio is larger than 0.2 in the initial stitching result. Then we extract all matched points between I_a and I_b using the SURF [25] method. The homography model is adopted to filter out mismatched feature points under the RANSAC (Random Sample Consensus) [26] framework. We define x_i^a and x_i^b to be the homogeneous coordinates of two matched SURF points between two neighbored images I_a and I_b. A homography aims to map x_i^b to x_i^a following the relation

x̂_i^a = H_{a,b} · x_i^b    (1)

where H_{a,b} is the homography model between I_a and I_b. Thus the geometric distance between x_i^a and x_i^b can be computed as

d(x_i^a, x_i^b) = ‖x_i^a − H_{a,b} · x_i^b‖    (2)

With the distance defined in (2), we find the outliers as the mismatched points iteratively under the RANSAC framework.
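For concreteness, the following sketch shows how this correspondence extraction and RANSAC filtering could look in practice. It assumes OpenCV with the contrib modules (which provide SURF); the function name, ratio-test threshold, and RANSAC threshold are illustrative choices, not values taken from the paper.

```python
import cv2
import numpy as np

def match_and_filter(img_a, img_b, ratio=0.75, ransac_thresh=3.0):
    """Extract SURF correspondences between two overlapping images and
    reject mismatches with a RANSAC-fitted homography (cf. Eqs. (1)-(2))."""
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp_a, des_a = surf.detectAndCompute(img_a, None)
    kp_b, des_b = surf.detectAndCompute(img_b, None)

    # Lowe's ratio test on 2-nearest-neighbour matches.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = []
    for pair in matcher.knnMatch(des_a, des_b, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])

    pts_a = np.float32([kp_a[m.queryIdx].pt for m in good])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in good])

    # H_ab maps points of img_b into img_a (Eq. 1); the RANSAC mask flags
    # inliers whose reprojection distance (Eq. 2) is below the threshold.
    H_ab, mask = cv2.findHomography(pts_b, pts_a, cv2.RANSAC, ransac_thresh)
    keep = mask.ravel().astype(bool)
    return pts_a[keep], pts_b[keep], H_ab
```

The surviving inlier pairs are exactly the matched points carried into the refinement step below.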
With the mismatched points removed, we keep the correctly matched SURF points for the following refinement. In theory, matched points from different images should be at the same position in the stitched image. Due to the errors of the feature points, for each image I_a we combine all matched points with its neighbored images to compute a displacement for refinement. The displacement for I_a is defined as

δ_a = (1/M_a) Σ_{j=1}^{M_a} (1/S) Σ_{i=1}^{S} (x_i^{g(I_a,j)} − x_i^a),  x_i^a ↔ x_i^{g(I_a,j)}    (3)

where M_a denotes the total number of I_a's neighbors, g(I_a, j) is a function returning the image index number of the j-th neighbor of I_a, S denotes the total number of matched feature points between I_a and I_g(I_a,j), and the symbol ↔ indicates that one point corresponds to the other. Thus, the global refinement for image stitching can be performed by solving the following optimization problem, which minimizes the sum of squared displacements over all images:

arg min Σ_{a=1}^{N_I} ‖δ_a‖²    (4)

We summarize our large scale image stitching algorithm in Algorithm 1.

Algorithm 1 Large Scale Image Stitching Algorithm
input : all images with GPS positions
output: the global panorama
…
4 Find its neighbored images I_g(I_a,j);
5 foreach I_g(I_a,j) do
6 Extract corresponding points between I_a and I_g(I_a,j) using SURF;
7 Remove all mismatched points under the RANSAC framework;
8 Compute its displacement using (3);
9 Estimate the optimal positions of all images by solving (4);
10 return the global panorama;

B. CRACK DETECTION FROM 2D IMAGES In a gray scale 2D pavement image, cracks can be visually distinguished from the rest of the image because of their lower intensity values and their continuous, long curvilinear shapes. Based on these two characteristics, an automatic crack detection algorithm is proposed. 1) INTENSITY BASED CANDIDATE CRACK PIXEL EXTRACTION First, the intensity property is used to extract the candidate crack pixels from the background. The image is first smoothed by a mean smoothing filter. We perform a variation on the standard mean smoothing filter by means of threshold averaging, wherein smoothing is applied subject to the condition that the center pixel value is changed only if the difference between its original value and the average value is greater than a preset threshold. This has the effect that noise is smoothed with a less dramatic loss of crack-like details. For each pixel x_i, it will be kept as a candidate crack pixel if the following criterion is satisfied:

g_t(x_i) − g_o(x_i) > T_g    (5)

where g_o(x_i) is the intensity of the original pixel, g_t(x_i) is the intensity of the corresponding pixel in the smoothed image, and T_g is a specific threshold. Taking into account the varying texture and illumination, the contrast between crack pixels and their surrounding background varies significantly, so the threshold T_g is adaptively determined. To decrease the effect of inhomogeneous texture and illumination, we divide the whole image into grid cells, denoted as C_i, i = 1, . . . , n_c. In each grid cell C_i, we select the threshold T_g as

T_g = μ_i + k_g · σ_i    (6)

where μ_i and σ_i are the mean and standard deviation of the difference between the original intensities and their corresponding smoothed intensities over all pixels within C_i, and k_g is an adjustment parameter which we generally set to 1 in our application. Since airport runway images may be quite noisy, as illustrated in Fig. 3, the candidate crack pixels extracted based on the intensity property contain many isolated or non-crack ones, which need to be found and removed according to their shape characteristics, as presented in the following step.
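As a concrete illustration of the candidate extraction of Eqs. (5)-(6), the sketch below applies a per-cell adaptive threshold to the difference between a mean-smoothed image and the original. For brevity, a plain mean filter stands in for the threshold-averaging variant described above, and the cell size, kernel size, and names are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def candidate_crack_pixels(img, cell=64, k_g=1.0, ksize=5):
    """Mark pixels darker than their smoothed neighbourhood by more than the
    per-cell adaptive threshold T_g = mu_i + k_g * sigma_i (Eqs. 5-6)."""
    img = img.astype(np.float32)
    smoothed = uniform_filter(img, size=ksize)       # mean smoothing filter
    diff = smoothed - img                            # cracks are darker than background

    mask = np.zeros(img.shape, dtype=bool)
    h, w = img.shape
    for y in range(0, h, cell):                      # grid cells C_i
        for x in range(0, w, cell):
            d = diff[y:y + cell, x:x + cell]
            t_g = d.mean() + k_g * d.std()           # Eq. (6)
            mask[y:y + cell, x:x + cell] = d > t_g   # Eq. (5)
    return mask
```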
2) SHAPE BASED CRACK PIXEL FILTERING The shape characteristic is used to distinguish the cracks from other dark patches in the image. First, we classify all candidate crack pixels into different groups, denoted as G_i, i = 1, . . . , n_g, according to their connectivity. To remove the noisy smaller blobs, we only keep the groups which contain more than T_b pixels and whose contour/area ratio is large enough:

|G_i| > T_b  and  c(G_i)/a(G_i) > T_r    (7)

where the notation | · | means the size of a collection, c(G_i) and a(G_i) denote the contour length and area of G_i, respectively, and T_r is a specific threshold. While most crack pixels are detected by applying the above process, noise is also brought in, specifically the linear pavement textures of concrete pavement, which possess similar properties, as shown in Fig. 3. However, pavement textures always appear as parallel straight lines, which makes them different from cracks. Thus, firstly, isolated pavement texture is detected as follows: a minimum bounding box is computed for each candidate crack pixel group G_i, and a group whose bounding box has a narrow width can be inferred to contain a straight line and is classified as pavement texture. Secondly, pavement texture that is connected to cracks is detected as follows: skeleton extraction based on the K3M algorithm [27] is applied to each connected component G_i to extract the center axis of the patch, after which line detection using the LSD [28] line segment detector is applied, and parallel straight lines are classified as pavement textures. Morphological processing is applied afterwards to the detected pavement textures, taking into account that the pavement textures are always several pixels wide. Leaving out the pavement texture pixels from the candidate crack pixels, we obtain the skeleton of the cracks. Then we perform crack region growing by absorbing dark pixels neighboring the currently detected cracks, if the intensities of these pixels satisfy a threshold test. The intensity threshold T_n is determined as

T_n = μ_c + k_c · σ_c    (8)

where μ_c and σ_c are the mean and standard deviation of the grey levels of all currently detected crack pixels, and the parameter k_c is set to 1 in our experiments. This aggregation process is performed iteratively, so that all potential crack pixels surrounding the crack skeleton can be incorporated. Our proposed crack detection method is summarized in Algorithm 2.

Algorithm 2 Crack Detection Algorithm
input : Image I
output: all crack pixels in I
1 Divide I into grid cells C_i, i = 1, . . . , n_c;
2 foreach C_i do
3 Compute the intensity threshold T_g using (6);
…
9 Remove G_i if it belongs to a group of parallel line segments;
10 Extract the skeleton of G_i using the K3M algorithm;
11 Iteratively perform crack growing by absorbing dark pixels neighboring G_i if their intensities satisfy (8);
12 return all crack pixels;
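A sketch of the final region-growing step (Eq. 8) is given below. It assumes the grayscale image and the crack-skeleton mask from the previous steps are available as NumPy arrays; the function name and the iteration cap are illustrative.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def grow_cracks(img, skeleton, k_c=1.0, max_iter=50):
    """Iteratively absorb dark 8-connected neighbours of the current crack
    pixels whose intensity passes T_n = mu_c + k_c * sigma_c (Eq. 8)."""
    eight = np.ones((3, 3), dtype=bool)          # 8-connected neighbourhood
    crack = skeleton.copy()
    for _ in range(max_iter):
        if not crack.any():
            break
        vals = img[crack].astype(np.float64)
        t_n = vals.mean() + k_c * vals.std()     # Eq. (8), recomputed each pass
        ring = binary_dilation(crack, structure=eight) & ~crack
        grow = ring & (img <= t_n)               # dark enough to count as crack
        if not grow.any():
            break
        crack |= grow
    return crack
```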
C. SUBSURFACE DEFECT DETECTION FROM GPR As we discussed in Section II, one of the main reasons that existing CNN-based methods cannot be directly deployed in applications is that only 2D B-scan images are employed, which ignores the natural 3D property of subsurface defects. Thus, we propose an algorithm to handle this problem. Before introducing our algorithm, we first give some definitions and notations. When the GPR scans the ground along a linear trajectory, it generates a B-scan image, which records the reflected pulses and shows the subsurface situation. When the GPR fixed on the robot moves over a regular grid to collect multiple parallel B-scans, it produces a series of B-scans. This ensemble of B-scans forms a 3D data volume, which is named a C-scan. Denote B_k as the k-th B-scan image, and C = {B_k | k = 1, . . . , n_b} as the C-scan consisting of n_b B-scans. We can evenly divide the 3D C-scan into slices of 2D images from the main view, top view, and left view, respectively. The 2D images from the main view are the B-scan images. We denote T_m, m = 1, . . . , n_t, and L_n, n = 1, . . . , n_l, as the images sliced from the top view and side view, respectively. We propose a voting-based strategy to fuse the information of the 2D images sliced from all views of C. We first adopt a CNN-based 2D object detection algorithm, such as Faster R-CNN, to generate 2D bounding boxes labeled with categories and probabilities as the candidate defect regions. Denote r_j as the j-th bounding box, with p_{j,k} its probability of belonging to category c_k. For each point X_i in C, we define its score for category c_k to be the sum of the probabilities from all 2D bounding boxes of B_k, T_m and L_n. Thus, we compute the score as

s(X_i, c_k) = Σ_j p_{j,k} · 1(X_i ∈ r_j)    (9)

with

1(X_i ∈ r_j) = 1 if X_i ∈ r_j, and 0 otherwise    (10)

where X_i ∈ r_j indicates point X_i being inside of r_j. With the scores of all points in C computed, we only keep the points for which the following criterion is satisfied:

s(X_i, c_k) > T_s    (11)

where T_s is a specific threshold for point scores. Then, we cluster all remaining points according to their positions and categories. The DBSCAN algorithm [29] is adopted for this clustering step. DBSCAN starts with a remaining point X_i of category c_k, and retrieves all points of the same category that are density-reachable from X_i with regard to two parameters, ε and MinPts. The ε-neighborhood of X_i is defined as

N_ε(X_i) = {X_j ∈ C | dist(X_i, X_j) ≤ ε}    (12)

where N_ε(X_i) must contain at least MinPts remaining points. In our experiments, we set ε = 16 and MinPts = 4. Finally, for each point cluster, we generate a 3D bounding box as the detected defect region. We summarize our subsurface defect detection method in Algorithm 3.

Algorithm 3 Subsurface Defect Detection Algorithm
input : C
output: 3D bounding boxes of defects
1 Divide C into a set of B_k, T_m and L_n;
2 Generate candidate 2D bounding boxes from B_k, T_m and L_n using Faster R-CNN;
3 foreach X_i do
4 Compute its score s(X_i, c_k) using (9);
5 Filter all points in C using (11);
6 Cluster the remaining points according to their spatial positions and categories using (12);
7 Generate a 3D bounding box for each point cluster;
8 return all 3D bounding boxes;
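The following sketch shows one way the voting and clustering of Eqs. (9)-(12) could be implemented, assuming the per-slice 2D detections for a single defect category are already available. The axis conventions, the tuple layout of the detections, and the score threshold are illustrative assumptions, not the paper's exact implementation; only ε = 16 and MinPts = 4 are taken from the text.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def vote_and_cluster(shape, detections, t_s=1.5, eps=16, min_pts=4):
    """Accumulate per-voxel scores from 2D boxes of all three slicing views,
    keep voxels with score > T_s (Eq. 11), and cluster them with DBSCAN
    (Eq. 12). `detections` is a list of
    (view, slice_index, x0, y0, x1, y1, prob) tuples for one category."""
    score = np.zeros(shape, dtype=np.float32)        # one C-scan volume
    for view, k, x0, y0, x1, y1, p in detections:
        if view == "main":                           # B-scan slices B_k
            score[k, y0:y1, x0:x1] += p              # Eqs. (9)-(10)
        elif view == "top":                          # top-view slices T_m
            score[y0:y1, k, x0:x1] += p
        else:                                        # side-view slices L_n
            score[y0:y1, x0:x1, k] += p

    pts = np.argwhere(score > t_s)                   # Eq. (11)
    labels = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(pts)

    boxes = []                                       # one 3D box per cluster
    for lab in set(labels) - {-1}:                   # -1 marks DBSCAN noise
        cluster = pts[labels == lab]
        boxes.append((cluster.min(axis=0), cluster.max(axis=0)))
    return boxes
```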
V. EXPERIMENTS We have employed our ARIR system at over 20 airports in China. To evaluate our proposed algorithms thoroughly, we select a representative airport in southeast China for both quantitative and qualitative analysis. The surveyed region is over 2000 m², and the surveyed subsurface depth is 1.53 meters. Two human experts labelled the surface cracks and subsurface defects individually. With the collected inspection data, we performed both the surface and subsurface defect detection and visualization. A. LARGE SCALE IMAGE STITCHING RESULTS We succeeded in stitching the large number of images of the airport runways. An example of the image stitching results for a region of 20 m × 405 m, with a total of 11642 original images, is shown in Fig. 4, where we can see that the panorama of the surveyed airport runway is generated. B. SURFACE CRACK DETECTION RESULTS To quantitatively evaluate the performance of our crack detection method, we selected representative images to form a dataset consisting of 112 images with a resolution of 1200 × 900 pixels. We employ three well-known metrics, precision, recall and F1-measure, for the evaluation. The ground truth is carefully labelled by hand. Since it is inevitable to introduce errors when labelling, we count a detected pixel as a true positive if it lies within a 5-pixel vicinity of the ground truth. The three metrics are computed based on true positives (TP), false negatives (FN), and false positives (FP):

Precision = TP/(TP + FP),  Recall = TP/(TP + FN),  F1 = 2 · Precision · Recall/(Precision + Recall)

We present some example crack detection results in Fig. 5, where we can see that the images are quite challenging due to the special texture, the thin width of the cracks, and poor lighting conditions. The quantitative statistical results are shown in Tab. 1. The results of our proposed algorithm satisfy the requirements of field applications. C. SUBSURFACE DEFECT DETECTION RESULTS We compare our algorithm with Faster R-CNN applied to B-scans only. To evaluate the performance of the different methods quantitatively, three metrics, Precision, Recall and F1-measure, are employed. To apply these metrics to object-level detection, we employ the IoU indicator, which represents the overlap ratio between the resulting box and the labelled box. If the IoU value is greater than a specific threshold T_IoU, we consider the box a true positive. The quantitative results are shown in Tab. 2, where we can see that our proposed algorithm achieves a better performance compared with standard Faster R-CNN. Fig. 6 presents an example of qualitative defect detection results from two different views of a GPR C-scan segment. This global subsurface defect map can be used for quantitative analysis and also for visually assessing global defect patterns. VI. CONCLUSION We developed an automated data collection and analysis system for airport runway inspection. Specifically, we proposed a crack detection method to robustly detect surface cracks in the collected images. A deep learning algorithm for GPR data analysis was presented to perform the subsurface defect detection. The image stitching algorithm allowed us to generate an acceptable global image of the large scale airport runway. A data analysis software suite embedding these algorithms was developed. We tested our proposed system in applications at over 20 airports and achieved satisfying performance.
5,467
2020-01-01T00:00:00.000
[ "Computer Science" ]
High precision optical fiber alignment using tube laser bending In this paper, we present a method to align optical fibers within 0.2 μm of the optimal position, using tube laser bending and in situ measurement of the coupling efficiency. For near-UV wavelengths, passive alignment of the fibers with respect to the waveguides on photonic integrated circuit chips does not suffice. In prior research, it was shown that permanent position adjustment of an optical fiber by tube laser bending meets the accuracy requirements for this application. This iterative alignment can be done after any assembly steps. A method was developed previously that selects the optimal laser power and laser spot position on the tube, to minimize the number of iterations required to reach the target position. In this paper, that method is extended to the case where the absolute position of the fiber tip cannot be measured. By exploiting the thermal expansion motion at a relatively low laser power, the fiber tip can be moved without permanent deformation (only elastic strain) of the tube. An algorithm has been developed to search for the optimal fiber position, by actively measuring and maximizing the coupling efficiency. This search is performed before each bending step. Experiments have shown that it is possible to align the fiber with an accuracy of 0.2 μm using this approach. Introduction Recent developments in photonic integrated circuits (PICs) require advances in the optical alignment and assembly of the components. For single-mode fiber coupled optical chips, the fiber alignment and packaging is the most expensive phase in the manufacturing of these devices [14]. Moreover, for small wavelengths in the near-UV range, the mode field diameter is small, resulting in a required lateral alignment accuracy in the order of 0.1 μm to obtain an acceptable insertion loss [1]. Passive alignment methods, such as etched V-grooves in glass or silicon, cannot be employed here, due to the geometrical tolerances (most notably the core-cladding concentricity) of commercially available fibers exceeding the alignment requirements [2]. This also implies that fiber array assemblies cannot be aligned simultaneously, since the core-to-core pitch cannot be guaranteed to be within the required tolerances. Therefore, a one-time active alignment per fiber is often used, where the optical coupling efficiency is maximized by sending light through the device and measuring the transmitted power. A hill-climbing algorithm or another optimization method can be used to optimize the transmission while moving and aligning the fiber with, for example, a high-precision motorized stage [6,13]. When the optimal position is found, the fiber is fixed to the chip, usually by a UV-curing adhesive [2]. However, adhesives are prone to shrinkage during or after the curing process, which causes misalignment after the final bonding step [8]. Therefore, there is a need for alignment methods to (re)align the fiber actively by an actuator integrated in the device. This one-time alignment is done after any manufacturing processes that might disturb the alignment, such as shrinkage of the used adhesives. To achieve this, we propose using a laser forming micro-actuator. Stark et al. [11] used laser forming to align multiple fibers with respect to a micro-lens array with a pitch of 2 mm. One actuator consists of three 'legs' in a 'Y' shape, where each leg can be shortened by laser forming. A fiber is fixed in the center of each actuator.
The authors achieved a minimum lateral step size of the fiber of 0.2 μm. However, the heat input to the legs of this actuator requires careful planning of the irradiations to prevent excessive heating of the fiber and adhesives. Zandvoort et al. [12] aligned multiple optical fibers with respect to an optical chip, using laser forming to shorten the four legs of a '+' shaped actuator. Multiple actuators were stacked to align an array of fibers individually, each with an accuracy of 0.25 μm. However, this actuator requires a large base frame of about 30 mm × 30 mm, which significantly increases the total packaged volume of such a device. Previous work has shown that laser bending of metal tubes is a feasible method to achieve this one-time precision alignment [4], despite the significant scatter in the bending magnitude and direction of this process. The scatter in bending magnitude was found to increase with increasing laser power (and therefore with increasing bending magnitude). On the contrary, the scatter in bending direction was found to decrease with increasing laser power. Figure 1 shows a sketch of how the tubes and fibers are positioned with respect to the chip. The fibers are fixed in the tubes and the tubes are fixed to the connection block. The connection block can be aligned and joined to the chip, which would typically result in fiber to chip accuracies in the order of 2 to 5 μm. This assembly allows the use of laser bending to align the individual fibers precisely with respect to the waveguides in the chip, with a waveguide pitch of 1 mm. Little research is available on the laser bending of tubes with a diameter in the order of 1 mm. Jamil et al. [7] recently studied the bending of nickel micro tubes with an outer diameter of 1 mm and a wall thickness of 50 μm, using a high-power diode laser. Fig. 1 Sketch of how multiple fibers can be aligned to a PIC using tube bending. The authors concluded that the bending angle is small for any combination of pulse duration and laser power, due to the thin walls not offering enough resistance to counteract the thermal expansion, resulting in little plastic deformation. Pre-stressing the tube by displacing its free end resulted in a significant increase of the bending angle, almost linear with the amount of initial displacement. Chandan et al. [3] reported on the laser bending of stainless steel micro tubes with an outer diameter of about 1 mm and a wall thickness of 150 μm. The authors concluded that the bending angle increases both with an increasing pulse energy (at constant pulse duration) and with a decreasing pulse duration (at constant pulse energy). Qi and Namba studied the laser bending of 8 mm 304 stainless steel rods [10], by scanning the laser in the axial direction. The authors achieved a repeatable bending angle of 1.75 × 10⁻³ mrad. Moreover, it was observed that multiple scans increase the bending angle linearly, except for the first scan, which shows a significantly larger bending. Goal and outline This paper aims at developing a tube laser bending algorithm to align an optical fiber to a PIC, making use of only the light transmitted through the fiber for position sensing. Due to the fiber being fully enclosed by the tube, there is no knowledge of the absolute fiber tip position. However, by moving the fiber, the coupling efficiency can be maximized by actively measuring the transmitted power. The complete iteration scheme for aligning one fiber to its target can be broken down as: 1. Scan.
Estimate the relative target position by moving the fiber using a scanning algorithm. 2. Parameter selection. Determine the optimal laser power and beam position on the tube to reach the estimated target position. 3. Bending. Execute the laser bending step and measure the coupling efficiency. 4. Stop condition. Evaluate the stop condition based on the coupling efficiency. If it is not met, go to step 1. The following sections explain these steps in more detail. First, the fiber alignment tube actuator that was previously developed is detailed in Section 3. The proposed scan algorithm as well as the thermal expansion motion is presented in Section 4. In Section 5, the selection of the laser power and laser spot position to reach the scanned target is explained. The scan and bending steps are repeated until the stop condition given in Section 6 is met. The algorithms are tested with an experimental setup, see Section 7. The results of the experiments with the scan algorithm and the alignment experiments are given in Sections 8.1 and 8.2, respectively. Finally, the results are discussed in Section 9 and the conclusions are summarized in Section 10. Fiber alignment by laser tube bending The assembly of the tube and the optical fiber is shown in Fig. 2. The single-mode optical fiber is fixed concentrically in the 18-mm-long metal tube using a 10-mm-long mating tube. The laser spot is located at an axial distance d from the fiber tip, at an angle φ_exp. The steel tube has an outer diameter of 711 μm and a wall thickness of 89 μm. These dimensions were found to perform best with the alignment algorithm [5]. The bending angles are small; therefore, the lateral displacement of the fiber tip can be expressed as δ_r = d · α, where α is the bending angle after the tube has cooled down (see Fig. 2). Scanning for the target position The optimal position of the fiber is where the transmitted power from the fiber into the PIC is maximized. The relative coupling efficiency can be determined by measuring the laser power transmitted through the chip, while a fixed-power laser source is connected to the fiber. Using the Gaussian beam approximation [9] and assuming that the angular alignment is perfect and the axial separation is zero, the theoretical coupling efficiency of the interconnect reads:

η(δ) = (2 · w_f · w_c / (w_f² + w_c²))² · exp(−2δ² / (w_f² + w_c²))    (1)

where δ is the lateral offset between the fiber and the target, and w_f and w_c are the mode-field radii of the fiber and chip, respectively. Figure 3 shows the coupling efficiency for w_f = 1.55 μm and w_c = 2.25 μm, which are used in the remainder of this paper. The fact that the optimal coupling efficiency is not 1 is due to the mode mismatch between the fiber and the chip. As can be concluded from the figure, for δ = 0.2 μm the loss is 1 % of the optimal coupling efficiency. When δ is larger than 4 μm, the coupling efficiency is close to zero and cannot be reliably measured.
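As a quick check of Eq. (1), the snippet below evaluates the overlap for the mode-field radii quoted above. The closed form written here is the standard lateral-offset coupling between two Gaussian modes, an assumption that is consistent with the numbers in the text (about 0.87 at δ = 0, and a 1 % loss at δ = 0.2 μm).

```python
import numpy as np

def coupling_efficiency(delta_um, w_f=1.55, w_c=2.25):
    """Eq. (1): lateral-offset coupling efficiency between fiber (w_f) and
    chip (w_c) mode-field radii, in micrometers, for perfect angular and
    axial alignment."""
    s = w_f**2 + w_c**2
    return (2.0 * w_f * w_c / s) ** 2 * np.exp(-2.0 * delta_um**2 / s)

print(coupling_efficiency(0.0))   # ~0.873: mode mismatch limits the optimum
print(coupling_efficiency(0.2))   # ~1 % below the optimum
print(coupling_efficiency(4.0))   # close to zero, no longer reliably measurable
```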
Thermal expansion motion The coupling efficiency through the interconnect is the only measured quantity for the two-dimensional alignment. Therefore, a searching method for the optimal alignment position is required. This searching can be done between each bending step, by exploiting the thermal expansion motion when heating the tube by laser irradiation. That is, at low laser power (1 to 3 W) and short pulse duration (50 ms), the yield stress in the tube is not exceeded, and only thermal and elastic strain occurs. This means that the final bending angle after cooling down is not changed. The direction of the thermal expansion is assumed to be equal to the laser spot angle around the tube (φ_exp), and is therefore known. The magnitude of the thermal expansion bending angle β, however, is not known. Using the experimental setup (described in Section 7), it has been found that the bending due to thermal expansion is very reproducible. A prerequisite for this repeatability is that no initial stresses are present in the tube near the laser spot, and that no oxidation of the tube surface is formed due to the laser heat. Therefore, the bending angle due to thermal expansion is characterized at d = 7.5 mm, which is outside the region where the actual bending is to occur (3 mm ≤ d ≤ 7 mm). Figure 4 shows the measurement of β for 36 laser pulses at different locations on four identical tubes, all at a laser power of 3 W. Fig. 4 Measured bending angle for 36 laser pulses at different positions on four identical tubes (each tube is indicated by a separate color). For the pulse duration of 50 ms and laser power of 3 W, the yield stress is not exceeded, and therefore the final deformation angle is zero. The dashed red line indicates the fourth-order polynomial fit through the heating phase, used for estimating the thermal expansion angle. The repeatability of this expansion angle β is 0.15 mrad, which corresponds to a repeatability of 1.1 μm at the fiber tip. The maximum bending angle at the end of the pulse is β_max = 3.2 mrad, corresponding to a tip displacement of 24 μm. This limits the area in which the target can be found to a circle with a radius of 24 μm around the current position X_f (see Fig. 2). The same measurements have been repeated for laser powers of 2 and 1.5 W. The repeatability of the expansion angle was found to equal 0.12 and 0.10 mrad, respectively, corresponding to a fiber tip position repeatability of 0.9 and 0.75 μm. To estimate β when aligning the fiber to a chip (where the fiber tip position is unknown), the heating phase of these measurements is fitted with a fourth-order polynomial β̂, shown by the red dashed line in Fig. 4. This curve relates the elapsed time t of the laser pulse to the expansion angle. With the known scan direction φ_scan = φ_exp, the position of the fiber tip X_f during heating can be estimated by:

X̂_f(t) = X_f + d · β̂(t) · (cos φ_scan, sin φ_scan)    (2)

Scanning algorithm The scanning algorithm is an iterative algorithm that aims at finding the maximum coupling efficiency by moving the fiber with respect to the target. The location of this maximum, relative to the current position, then defines the direction and magnitude of the next tube bending step. Figure 5 shows the flowchart of the scanning algorithm. For each scanning iteration i, the fiber is moved by thermal expansion in the direction φ_scan^(i). The initial direction φ_scan^(1) is set to the direction of the initial guess of the relative target position X̂_d. The relative coupling efficiency η is measured in real time during the scan time 0 ≤ t ≤ t_pulse. If no signal is found (i.e., max η(t) = 0), or if the number of scan iterations is less than four, the next scan direction φ_scan is set to the center of the largest 'unscanned' area, and the measurement is repeated. As a result, the four initial scan iterations always result in a '+' shape of the scan path, see Fig. 6. This ensures that an inaccurate initial target guess does not result in an excessive number of iterations for the hill-climbing algorithm. When a signal is found for the current iteration, the maximum coupling efficiency η_max^(i) occurs at t = t_max. The best estimate of X_d is then X̂_f(t_max). The next scan direction is chosen according to a simple hill-climbing rule, where i_best is the scan iteration in which the overall maximum coupling efficiency occurred. This means that the next scan direction alternates around the direction in which the best coupling efficiency was measured, with the step size halving each iteration.
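The following sketch shows one plausible implementation of this scan-direction logic. The paper gives only the verbal description above, so the gap-bisection bookkeeping, the exact halving rule, and all names here are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def next_scan_direction(history):
    """history: list of (phi_scan, eta_max) per completed scan iteration;
    the first entry is the direction of the initial target guess."""
    i = len(history)
    two_pi = 2 * np.pi
    if i < 4 or all(eta == 0 for _, eta in history):
        # First four scans (or no signal yet): probe the centre of the
        # largest unscanned angular gap, producing the '+' pattern of Fig. 6.
        phis = sorted(phi % two_pi for phi, _ in history)
        n = len(phis)
        # A zero gap (single direction so far) is treated as the full circle.
        gaps = [((phis[(j + 1) % n] - phis[j]) % two_pi) or two_pi
                for j in range(n)]
        j = int(np.argmax(gaps))
        return (phis[j] + gaps[j] / 2) % two_pi
    # Hill climbing: alternate around the best direction found so far,
    # with the angular step halving each iteration.
    i_best = max(range(i), key=lambda j: history[j][1])
    step = np.pi / 2.0 ** (i - i_best)
    sign = 1 if (i - i_best) % 2 == 0 else -1
    return (history[i_best][0] + sign * step) % two_pi
```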
If a signal is found (η(t) > 0) for four or more scan iterations, a two-dimensional Gaussian surface is fitted through the measurement data (η and X̂_f) of all previous iterations, see Fig. 6. If the squared 2-norm of the residuals of this fit is above 10, the fit is discarded. This is necessary to prevent bad fits due to numerical problems or noise in the measured data. If the fit is discarded, the hill climbing is continued. Otherwise, the maximum of this fit is the new best estimate of X_d, and φ_scan^(i+1) is set to the direction of this maximum. The iteration is stopped when the change in coupling efficiency from the previous iteration and the difference with the fitted maximum satisfy the tolerance condition (5). With decreasing η_tol, the target position estimation is more accurate, at the cost of more iterations. Therefore, the tolerance depends on the coupling efficiency η_0 before the scan as η_tol = 0.1 · (1 − η_0). That is, the tolerance is 10 % of the difference between the initial coupling efficiency and the theoretical maximum. This results in a more accurate estimation of X_d when the target is already close. Furthermore, the laser power is lowered when the target is near, to increase the accuracy of the fit β̂ (see Section 4.1). The power is reduced to 2 or 1.5 W when the estimated distance to the target is below 2 or 1 μm, respectively. Fig. 6 Each numbered line is a scan iteration using the thermal expansion of the tube. The line color indicates the measured signal (black: no signal, white: high signal); the shaded area indicates the Gaussian surface fit through these signals (with inverted colors for contrast). Note the '+' shape of the first four iterations, even though a signal was already found at iteration 3. At iteration 5, the hill climbing starts, ending at iteration 11, which is almost equal to iteration 10 and meets the stop condition. Laser parameter selection The estimated destination location X̂_d is found by scanning before each bending step (see Section 4.2). This position is relative to the current fiber position X_f. Therefore, the estimated distance of the fiber tip to the target is δ̂_d = ‖X̂_d‖. A method has been developed [5] that determines the optimal laser power P_opt and laser spot position d_opt to minimize the required number of bending steps. This method takes the scatter into account by 'learning' from position measurements of previous bending steps. A trade-off has been identified between high certainty in either the bending angle α, when choosing a low laser power, or the bending direction φ_r, when choosing a high laser power. An optimum in d and P can be found for each desired step size δ_d by minimizing the expectation of the error e (see Fig. 2) after the bending step, assuming a normal probability distribution for α and φ_r, with a mean and variance depending on the laser power P. However, as stated earlier in this paper, it is assumed that the fiber position cannot be measured directly. Therefore, the learning algorithm from [5] cannot be used. It is therefore required to gather historical response data of the bending angle and direction as a function of the laser power beforehand.
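A rough sketch of such a pre-characterized parameter selection is given below: it draws Monte-Carlo samples from assumed normal response models (mean and scatter of bending angle and direction versus power, gathered beforehand) and picks the (P, d) pair minimizing the expected residual error. The model interface and all names are illustrative assumptions, not the method of [5] itself.

```python
import numpy as np

def select_parameters(delta_d, powers, d_range, models, n_samples=2000):
    """models maps a power P to (mean_alpha, std_alpha, std_phi); returns
    the (P, d) pair with the smallest expected residual error E[e]."""
    rng = np.random.default_rng(0)
    best = (None, None, np.inf)
    for P in powers:
        mu_a, sd_a, sd_phi = models(P)
        for d in d_range:
            alpha = rng.normal(mu_a, sd_a, n_samples)   # bending angle scatter
            phi = rng.normal(0.0, sd_phi, n_samples)    # direction scatter
            step = d * alpha                            # tip displacement magnitude
            # Residual distance to the target after one step (law of cosines).
            e = np.sqrt(delta_d**2 + step**2
                        - 2 * delta_d * step * np.cos(phi))
            if e.mean() < best[2]:
                best = (P, d, e.mean())
    return best[:2]
```

This reproduces the trade-off noted above: low power gives a tight angle distribution but a wide direction distribution, and vice versa, so the expected error picks an intermediate optimum per desired step size.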
Figure 7 shows the optimal parameters used in this paper, based on the measurements presented in [5]. Fig. 7 The optimal parameters depending on the desired fiber tip displacement δ_d, based on the optimization and data from [5]. The laser power is limited to 10 W to prevent melting of the tube. Stop condition of the alignment iteration The scanning and bending iterations continue until the coupling efficiency η is within 1 % (0.15 μm, see Eq. 1) of the optimal coupling efficiency η_opt. However, η_opt is unknown beforehand. Therefore, η_opt is estimated by the maximum coupling efficiency that has been measured during the past scanning and bending steps. This estimated maximum η̂_opt converges to η_opt with an increasing number of bending and scanning steps. The stop condition is defined by

η ≥ 0.99 · η̂_opt    (6)

However, the stop condition can be met before η̂_opt has converged close to η_opt. This is avoided by requiring that the condition in Eq. 6 be met twice. The scan step in between those two bending steps (see Section 4.2) ensures that the estimate η̂_opt is improved, due to the scanning in all directions. Experimental setup To test the performance of the scanning and alignment algorithms, an experimental setup has been developed that allows for fully unattended laser bending of the tube to align an optical fiber to a pre-set destination position. The position of the fiber cannot be measured during the fiber-to-chip alignment. However, the lateral (X, Y) displacement of the fiber tip is of interest to evaluate the performance of the algorithm. Therefore, this experimental setup is used to measure the fiber tip position by the light emitted from the fiber, instead of the coupling efficiency. This means that a PIC chip is not in place in this setup. However, the coupling efficiency with respect to such a chip is simulated in real time using Eq. 1 and the measured distance to the destination δ_d. This simulated coupling efficiency is therefore used to test the alignment algorithm. Tube sample and fixation The 304 stainless steel tubes received a hard temper treatment after drawing. The tube dimensions are listed in Table 1. A single-mode fiber (Thorlabs SM600) was fixed to each tube as shown in Fig. 2. The tube with the fiber is clamped in a custom brass clamp over a 2-mm length (see Figs. 8 and 2). The clamp with the tube can be aligned to the measurement system in all directions (except for the rotation around the tube axis) using motorized and manual stages. Laser and beam delivery A fiber laser (JK 100FL, max. 100 W, 1080 nm) with a Gaussian intensity distribution and a 1/e² spot diameter of 400 μm at the tube surface is used to heat and deform the tubes. The laser spot is either positioned directly on the tube, or via one of two fixed mirrors near the tube, see Fig. 9. A camera is mounted on the focusing head to align the laser beam to the tube. Using the tip/tilt mirror, three radial positions spaced 120° from each other, and the complete tube length, are accessible by the laser spot. Additionally, since the spot size is smaller than the tube diameter, a small deviation of ±35° from these radial positions can be tolerated. This means that for φ_exp three evenly spaced 'blind spots' of 50° are present that cannot be reached by the laser beam. Figure 9 shows the optical elements used to measure the displacement of the fiber tip; Fig. 8 shows a photo of the tube with the fiber positioned in front of the measurement system. A low power laser source (2 mW, 650 nm) was connected to the free end of the fiber.
The beam emitted from the fiber was focused with an aspheric lens with a focal length of 4.6 mm. The beam position was measured by two duo-lateral position sensing detectors (PSDs) via beamsplitters. Furthermore, a camera is used to find the focus position of the fiber with respect to the lens, see Fig. 9.
Fig. 8 Detail photo of the experimental setup
Fiber position measurement
To calibrate the PSD signals with respect to the lateral displacement, the relative position of the clamp with the tube and fiber with respect to the PSDs is measured by capacitive displacement sensors. The relation between the signal from the PSDs and the fiber position is calibrated by moving the clamp with the tube and measuring the translation with the capacitive sensors. The (quadratic) relation was found by multivariate linear regression, using the least squares estimate. After calibration, the capacitive sensors are used to subtract any external influences (for example vibrations or thermal variations) from the measurement. The resulting signal gives the lateral position of the fiber tip with a resolution of 50 nm, a maximum absolute measurement error of 0.2 μm, and a repeatability better than 0.1 μm over a range of ±25 μm. All measurement signals are processed by a Matlab Simulink Real-time environment, sampling at 5 kHz. For each deformation step, the signals are processed to extract the relative tube deformation magnitude and direction. The beam position, laser power, triggering, and tube positioning stages are all fully computer-controlled. The setup and measurement system are explained in more detail in [4].
Results
Both the scanning and alignment algorithms were tested with the experimental setup. Five alignment experiments were performed, where the initial distance to the target was set to 10 μm, each with a random direction, see Table 2.
Target searching experiments
The scan algorithm has been evaluated with the experiments listed in Table 2. The estimated target position error is defined as e_scan = |X_d − X̂_d|. Figure 10 shows the histogram of the error relative to the distance to the target (e_scan/δ_d) for 94 scan experiments. The histogram shows that 90 % of the scans have an error that is smaller than the distance to the target, resulting in a bending step that is likely to converge to the target. The remaining 10 % of the scans have an error larger than the distance to the target, which mostly happens when δ_d < 0.5 μm. This is a limitation on the accuracy of the fit β̂ at the start of the pulse (see Section 4.1). This accuracy is mostly limited by the sampling time of the acquisition hardware combined with the fast motion of the fiber during the thermal expansion. An even lower laser power would result in a slower expansion, but 1.5 W is the minimum power for the laser system used.
Fig. 10 Histogram of the relative error of the scanning algorithm to the (real) distance to the target δ_d for 94 scan experiments; e_scan = |X_d − X̂_d|
Alignment experiments
The alignment algorithm has been tested using the estimated target positions X̂_d (see Section 8.1). For the experiments listed in Table 2, the number of steps required to reach the stop condition is between 5 and 16. All experiments ended within 0.2 μm from the target position.
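The quadratic least-squares calibration from the 'Fiber position measurement' subsection above can be sketched as follows; the feature construction and every name here are assumptions made for illustration, not the authors' code.

```python
import numpy as np

def quadratic_features(S):
    """Design matrix for a multivariate quadratic model of the raw PSD
    signals S (N samples x k channels): constant, linear, and quadratic
    terms including cross products."""
    S = np.atleast_2d(S)
    N, k = S.shape
    cols = [np.ones(N)]
    cols += [S[:, i] for i in range(k)]
    cols += [S[:, i] * S[:, j] for i in range(k) for j in range(i, k)]
    return np.column_stack(cols)

def calibrate(S, X):
    """Least-squares estimate mapping PSD signals to the (N, 2) lateral
    fiber-tip positions X measured by the capacitive sensors."""
    coeffs, *_ = np.linalg.lstsq(quadratic_features(S), X, rcond=None)
    return coeffs

def psd_to_position(s, coeffs):
    """Convert one or more PSD signal samples to lateral positions."""
    return quadratic_features(s) @ coeffs
```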
A typical path is shown in Fig. 11. Note the large deviation from the planned path for the second step in this figure. This is due to the 'blind spots' on the tube that cannot be accessed by the laser (see Section 7); instead, the closest accessible direction is chosen.
Fig. 11 A typical alignment path (see Table 2). The red circle indicates the actual target zone. The black line indicates the location of the fiber tip between each deformation step, connected by straight lines. The red arrows indicate the measured direction of the thermal expansion at each bending step. The scanned estimated destination X̂_d is indicated by a numbered '+', corresponding to the scan prior to each bending step
Discussion
In this paper, we have shown the principle of precision optical fiber alignment by laser tube forming on steel tubes. It is expected that even better accuracy and stability can be obtained using tubes from low thermal expansion metals such as Invar. Unlike steel, the linear coefficient of thermal expansion (CTE) of Invar at room temperature (1.6 × 10⁻⁶ K⁻¹) is comparable to that of the chip (1 × 10⁻⁶ K⁻¹ to 2.5 × 10⁻⁶ K⁻¹). Above 200 °C, the CTE of Invar rises sharply, making it suitable for laser forming, while the position is stable at the operating temperature of the device. Furthermore, the length of the tube has been chosen such that d can be chosen between 3 and 7 mm, allowing for a wide range of step sizes δ_d. However, it is expected that the initial 'rough' alignment of the tube assembly is within 5 μm. Figure 7 shows that in this case d_opt does not exceed 4.5 mm. Therefore, the tube (and the free fiber length) can be shortened significantly, to increase the stiffness at the fiber tip and reduce its mass. This improves the stability of the fiber tip when the device is subject to external vibrations.
Conclusion
Fiber positioning using micro tube laser bending was demonstrated to be accurate enough for the alignment with respect to a photonic integrated circuit chip. Because the fiber position cannot be measured inside the tube, the optical coupling efficiency is measured instead. The maximum of this coupling efficiency is found by moving the fiber tip, exploiting the thermal expansion motion of the tube. A scanning algorithm is developed to find the estimated position of this maximum, even when no light is transmitted in the initial position. The estimated position is used as the target for the laser forming step, resulting in a permanent deformation. This is repeated until a stop condition based on the coupling efficiency is met. Experiments using this searching and alignment algorithm show that a final accuracy of 0.2 μm is achieved within 5 to 16 steps. The accuracy is limited by the repeatability of the thermal expansion motion and the measurement timing precision while scanning for the target position.
6,577.2
2016-09-01T00:00:00.000
[ "Engineering", "Physics" ]
Nanotechnology for Nanophytopathogens: From Detection to the Management of Plant Viruses
Plant viruses are among the most destructive pathogens, causing devastating crop losses due to their genomic diversity, rapid evolution, mutation or recombination in the genome, and the lack of management options. It is important to develop a reliable remedy to improve the management of plant viral diseases in economically important crops. Some reports show the efficiency of metal nanoparticles and engineered nanomaterials and their wide range of applications in nanoagriculture. There are currently reports on the use of nanoparticles as antibacterial and antifungal agents in plants and animals, but few on their use as plant antivirals. "Nanophytovirology" has emerged as a new branch that covers nanobased management approaches to deal with devastating plant viruses. Different nanoparticles have specific physicochemical properties that allow them to interact in unique and useful ways with viruses and their vectors, as well as with the host plants. To explore the antiviral role of nanoparticles and for the effective management of plant viruses, it is imperative to understand all minute details such as the concentration/dosage of nanoparticles, time of application, application interval, and their mechanism of action. This review focuses on different aspects of metal nanoparticles and metal oxides, such as their interaction with plant viruses, to explore their antiviral role and the multidimensional perspective of nanotechnology in plant viral disease detection, treatment, and management.
Introduction
Food security has always been a priority and an important agenda around the globe to feed the large population [1]. Food sustainability faces a serious threat from devastating infections and diseases in cultivated plants [2][3][4]. Crop infections are caused mainly by plant pathogens such as bacteria [5], fungi [6,7], and viruses [8][9][10][11][12]. Phytoviruses have been reported for several decades as the most contagious pathogens, with drastic effects on plants. Critical reviews by scientists working in plant virology have demonstrated that heavy crop losses are due to virus diseases [13][14][15][16][17][18][19][20][21][22][23][24]. This loss can be measured in terms of both the quantity and quality of produce [25]. The proper management of virus diseases of plants has always been a matter of great concern for farmers, horticulturists, manufacturers, consumers, and foresters. For decades, nanotechnology has proved its potential for the development of effective formulations [26][27][28], but owing to the paucity of commercial applications, its role in agriculture has not gained popularity. Various studies have examined the use of nanoparticles as insecticides, fungicides, or herbicides and discussed nanoparticle formulations against target pests. There are two mechanisms for the application of nanoparticles to safeguard plants: (i) nanoparticles themselves provide crop protection, and (ii) nanoparticles are used as carriers for existing pesticides or agents such as double-stranded RNA, applied by spraying on foliar tissue or roots or by soaking of seeds [29,30]. In this review, we present a focused discussion of different aspects of nanoparticles in plant viral disease detection, treatment, and management, and their interaction with plant viruses.
This field of study has also been given a new name: "nanophytovirology."
Nanoparticles and Their Application against Plant Pathogens
Nanoparticles (NPs) are small materials with sizes ranging from 1 nm to 100 nm [31,32] and are classified based on their shape or size and also (most importantly) on their composition (Figure 1). The different classes comprise metal NPs, ceramic NPs, polymeric NPs, and fullerenes. They show unique physicochemical properties due to their large surface-to-mass ratio, high reactivity, and unique interactions with biological systems [33]. Due to these unique properties and characteristics, they have gained attention in all fields, from commercial to domestic, medical [34,35] to agriculture [36], and environment [37,38] to energy-based research [39][40][41]. The use of nanoparticles for sustainable agriculture was discussed in [31,42,43]. Different nanoparticles are used to design biosensors for the detection of plant disease, as delivery vehicles for genetic materials [44], and as nanofertilizers and nanopesticides [28,45]. Nanoparticles can be synthesized by three different methods: biological, physical, and chemical. Of these, biological approaches are considered the best due to their nontoxic, cost-effective, and environmentally friendly nature [46]. The method of synthesizing nanoparticles greatly influences their geometry and further affects physicochemical properties like morphology, size, crystal structure, and dispersity. Biosynthetic methods utilizing plants and microorganisms are very diverse. First, microorganisms or plant extracts are exposed to metallic salts, which they reduce to metal nanoparticles. The nanoparticles are then characterized and made available for further applications [47][48][49]. Numerous evaluations have shown that the nanoparticles applied against plant diseases are either metalloids, metallic oxides, or nonmetals, involved in disease resistance as bactericides/fungicides or nanofertilizers (Table 1) [44,50]. The metallic nanoparticles include pure metals and metal oxides [51]. The most popular metal nanoparticles comprise silver (Ag), gold (Au), platinum (Pt), nickel (Ni), and iron (Fe), and the metal oxide nanoparticles include compounds such as TiO2, ZnO, MgO, CuO, Cu2O, Al2O3, NiO, and SnO2 [52].
Systematic Facets of Nanomaterials as Antiviral Agents
Phytoviruses have always been a challenge for farmers in terms of the production of crops and vegetables. Many experiments show the application of different nanoparticles against bacterial and fungal diseases of plants; however, the focused study of nanoparticles for plant virus management is still in its preliminary stages, and the antiviral mechanisms of action of metal nanoparticles are not completely understood. The published work and the available information concerning nanoparticles and plant viruses are summarized in Table 2. The antiviral mechanisms of NPs discussed in different studies, together with other possible mechanisms (Figure 2), and the specific interactions between host (plant), vector(s), and pathogen (virus) are summarized in Figures 3, 4, and 5.
Antiviral Activity of Metallic Nanoparticles for Plants
To protect plants from pathogen invasion, nanomaterials can be applied directly either into the soil or to seeds or foliage. This direct application is similar to the use of chemical pesticides.
However, direct application of nanoparticles to the soil affects microorganisms, especially nitrogen-fixing and mineral-solubilizing microorganisms, which play a significant role in plant health and nutrition. Silver nanoparticles were the first to be used in plant disease management and showed antimicrobial activity [53]. The interface of nanoparticles with bacterial and fungal pathogens has been studied very well, but their interface with viral particles is still not well explored, although some researchers have studied the antiviral and virucidal modes of action of silver nanoparticles (AgNPs) against plant viruses [54][55][56]. The antiviral mechanisms of metal nanoparticles are not very well understood, but the available studies provide evidence of the mechanisms involved. The antiviral activity of MeNPs has been observed both in vitro and in vivo on different plants, and it is found to be effective against most RNA viruses. Various studies have revealed that physical properties like size, shape, and surface area are the key factors controlling the biological activity of any nanoparticle [57,58]. Reports have revealed that the antibacterial activity of AgNPs is size-dependent: small AgNPs (10 nm) show greater antibacterial affinity than larger ones [59]. Furthermore, the antimicrobial activity of nanoparticles is influenced by their shape (spherical, rod-shaped, nanoshells, nanocages, nanowires, and triangular). A study of the impact of AgNPs on Bean yellow mosaic virus (BYMV) reported that the antiviral property of NPs is due to their ability to attach to the envelope glycoprotein of the virus. They bind the disulfide bond regions of the CD4-binding domain present in the envelope glycoprotein gp120 of yellow mosaic virus and prevent entry [54]. Apart from their interaction with the surface glycoprotein of the virus, AgNPs also interact with the nucleic acid of the virus inside the cell to complete their antiviral activity. This experiment was intended to compare the impact of spraying AgNPs before infection, 24 h after infection, and at the time of inoculation. Another work also evidenced the high attachment capacity of nanoparticles of different sizes (10 and 50 nm) to virus DNA and extracellular virions. It was also observed that the AgNPs inhibited the production of viral RNA and extracellular virions in vitro, verified by UV-Vis absorption assay [60], and they were also found to restrict the fusion of the viral membrane by hindering viral permeation into the host cell [61]. Sun and coworkers compared AgNPs and gold nanoparticles and found AgNPs superior for cytoprotective activity towards the virus; this has been a general observation [62][63][64][65][66]. Dougdoug et al. [67] tested the effectiveness of AgNPs as an antiviral agent against two plant viruses, Potato virus Y (PVY) and Tomato mosaic virus (ToMV). Different concentrations (50, 60, and 70 ppm) of AgNPs were sprayed on plants carrying both diseases, and at an AgNP concentration of 50 ppm a striking decrease in disease severity and in the concentration of both viruses was observed. Furthermore, transmission electron microscopy (TEM) analysis of the viral sap substantiated the binding of virus coat protein particles to AgNPs [67].
Furthermore, a study on Sun-hemp rosette virus (SHRV) indicated complete suppression of the viral disease when spraying with AgNPs at a concentration of 50 mg/L. The detailed results showed binding of these NPs to the virus coat protein, and virus inactivation was attributed to inhibition of virus replication [68]. The antiviral effect of AgNPs was observed against Tomato spotted wilt virus (TSWV) on Chenopodium amaranticolor. Plants sprayed 24 h after inoculation showed weak infection in comparison to plants sprayed before inoculation [69]. A similar result, a reduction in virus concentration and disease percentage, was reported by El-shazly et al. on potato plants against Tomato bushy stunt virus (TBSV) [70], while Cyamopsis tetragonoloba infected with Sun-hemp rosette virus (SHRV) displayed complete suppression of the disease and inactivation of virus replication [68]. The antiviral effect of ZnO and SiO2 NPs was studied on tobacco plants against TMV by Cai et al. Both NPs were applied 3, 7, and 12 days before inoculation of the virus. Plants treated 12 days before inoculation displayed a strong antiviral effect, with TMV prevented from infecting and spreading to new leaves [71]. The findings of this work suggest that the inhibition of TMV is due to interaction of metal NPs with envelope glycoproteins, resulting in injury to the TMV coat protein and its aggregation. Hao et al. used Fe2O3 or TiO2 NPs for pretreatment of tobacco plants for 21 days to check the antiviral properties against Turnip mosaic virus (TuMV). The results of the study showed a strong decrease in viral proteins, which the authors suggest could be related to the NPs interfering with either protein biosynthesis or posttranslational modification processes in the virus, and activating defense mechanisms [72]. Various reports have confirmed the action of chitosan against plant viruses, as it successfully induced resistance to virus-caused mosaic disease in potato, alfalfa, cucumber, peanut, and tobacco [72][73][74]. Malerba and Cerana reported various conceivable mechanisms of chitosan that precede the antimicrobial effects, including disruption of the cell membrane, inhibition of toxin production and microbial growth, inhibition of H+-ATPase activity, and prevention of the synthesis of mRNA and proteins. Furthermore, their studies revealed the antiviral action of chitosan nanoparticles in bean plants infected with bean mild mosaic virus and in tobacco plants infected with tobacco necrosis virus and tobacco mosaic virus [75].
Nanotechnology in Diagnostics of Plant Viruses
Many molecular and serological techniques, viz., polymerase chain reaction (PCR), real-time PCR, immunological assays such as enzyme-linked immunosorbent assay (ELISA), and electrochemical immunoassay (ECIA), are being used for the diagnostics and identification of plant viral pathogens [32][77][78][79][80]. Although these techniques detect plant pathogens efficiently and effectively, they require well-established laboratory settings with high-end equipment and chemicals, as well as well-trained and experienced personnel. With fast-developing technology, the hour demands rapid, accurate, reliable, and miniaturized field-deployable devices that do not require highly trained personnel [81]. The success of any management practice depends on quick, early, and sensitive diagnosis of the infected material. Nanotechnology promises major progress in this area through quick and very sensitive pathogen probes. Nanotechnology has gained pace in the diagnostics of plant pathogens.
Nanoparticles are being used as rapid diagnostic tools for the detection of bacterial, fungal, and nematode pathogens, but there are very few reports [82,83] on NPs in the diagnostics of plant virus disease.
(Figure: nanoparticle entry into the plant — foliar application, soil application and uptake, NPs-virus interaction, NPs-plant interaction)
Superparamagnetic iron oxide nanoparticles have been used in medicine and water purification for decades [84,85], but their potential in plant pathology has only recently begun to be explored. These magnetic nanoparticles adhere to biological tissue and DNA, eventually facilitating the extraction and detection of the pathogen [86]. Biosensor-Based Detection. A device designed to detect the occurrence of any biological analyte, such as a biomolecule, a biological structure, or a microorganism, is known as a biosensor. It consists of three parts: (i) a section that identifies the analyte and produces a signal, (ii) a signal transducer, and (iii) a reader device [87]. Various nanomaterials, basic metallic nanoparticles (carbon and gold nanoparticles), and nanospheres enhance the sensitivity of the assay when used in combination with aptamer-based detection systems. In the case of immunosensors, self-assembled monolayers (SAMs) have been used for the diagnostics of plant pathogens. In this method, gold electrodes are the most commonly used substrate for the detection of Plum pox virus (PPV) [91]. Later, Jarocka et al. in 2013 applied the same method for the diagnosis of Prunus necrotic ringspot virus (PNRSV) and concluded that the biosensor performs comparably to ELISA [92]. Another biosensor-based plant virus detection was discussed by Huang et al. [93], who used a quartz crystal microbalance immunosensor based on SAMs for identification of Maize chlorotic mottle virus (MCMV). The sensitivity of the biosensor was found to be similar to ELISA, with a detection limit of 250 ng/mL, and it showed high sensitivity for similar viruses such as Wheat streak mosaic virus (WSMV) [93]. Lateral flow immunoassay (LFIA), a type of optical immunosensor, was initially used by Tsuda et al. [94] for the detection of Tobacco mosaic virus (TMV). Later, this method was employed for the diagnosis of several other viruses: Citrus tristeza virus (CTV) [95], Potato virus X (PVX) [96,97], Potato virus Y (PVY), Potato virus M (PVM), and Potato virus A (PVA), with a reported sensitivity of 2 ng/mL. An immunoassay based on fluorescence-loaded magnetic microspheres and fluorophore-labeled antibodies has been reported for the detection of multiple substances such as biomarkers and plant pathogens [98,99]. A study has been conducted using specific antibodies for the plant viruses Chilli vein-banding mottle virus (CVbMV), Watermelon silver mottle virus (WSMoV), and Melon yellow spot virus (MYSV) [100]. Although these techniques have shown high sensitivity along with the capacity for multiple detections in a single assay, they did not become very popular due to the complexity of the assays and the fluorescent readers. Various reports mention the use of label-free biosensors based on surface plasmon resonance (SPR), developed for the detection of CMV, TMV, and Lettuce mosaic virus [101][102][103][104] and for the orchid viruses Cymbidium mosaic virus (CymMV) and Odontoglossum ringspot virus (ORSV) [90]. Table 3 summarizes the application of different biosensors for the detection of various plant viruses.
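As context for detection limits like those quoted above (250 ng/mL for the QCM sensor, 2 ng/mL for LFIA), one common, generic way such limits are estimated from a sensor calibration curve is the 3.3·σ/slope rule. The sketch below is a textbook-style illustration with made-up data, not the procedure used in the cited studies.

```python
import numpy as np

def limit_of_detection(conc, signal):
    """IUPAC-style estimate: LOD = 3.3 * (residual std. dev.) / slope of a
    linear calibration of sensor signal against analyte concentration."""
    slope, intercept = np.polyfit(conc, signal, 1)
    residuals = signal - (slope * conc + intercept)
    sigma = residuals.std(ddof=2)          # two fitted parameters
    return 3.3 * sigma / slope

# Made-up calibration data (ng/mL vs. arbitrary signal units):
conc = np.array([0.0, 50, 100, 250, 500, 1000])
signal = np.array([0.02, 0.11, 0.20, 0.48, 0.95, 1.90])
print(f"estimated LOD: {limit_of_detection(conc, signal):.0f} ng/mL")
```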
Plant Virus Detection Based on Quantum Dots (QD). Quantum dots (QDs) are small semiconductor nanocrystals that have been used for the construction of biosensors [105]. They have been used for disease detection because of their unique optical properties, which are exploited in fluorescence resonance energy transfer (FRET) [106]. Rad et al. used this approach for the detection of the phytoplasma disease known as Witches' broom disease of lime (WBDL), caused by Candidatus Phytoplasma aurantifolia [107]. Consistent results with 100% specificity and sensitivity were achieved by this approach for approximately 5 Candidatus Phytoplasma aurantifolia per μL. This technique was also applied to detect Rhizoctonia, the disease vector of Beet necrotic yellow vein virus (BNYVV) [108].
Metal Nanoparticles as Biostimulants in Virus-Infected Plants
Biostimulants are substances that enhance the physiological processes of plants and promote growth, development, and defense responses. When applied directly to plants or seeds, they cannot be considered pesticides or nutrients [109]. The positive or negative effect of nanoparticles on the plant depends on the type of nanoparticles and the condition of the plant [110,111]. Healthy tobacco plants studied for the effect of SiO2, Fe2O3, and ZnO nanoparticles were observed to have increased growth [112,113]. When the effect of NiO NPs on virus-infected cucumber plants was observed after foliar spray and soil drench, the plants showed an increased number of leaves along with higher fresh and dry weight [114]. Tobacco plants infected with Turnip mosaic virus were treated with foliar sprays of TiO2 and Fe2O3 at a concentration of 50 mg/L and showed enhanced fresh and dry weight, whereas no effect relative to nontreated plants was observed with the 200 mg/L treatment [115]. When the level of reactive oxygen species (ROS) increases beyond the threshold, oxidative stress is produced, and this disturbs the balance between ROS and antioxidants. The role of antioxidants in plants is to counterbalance this oxidant effect. Superoxide dismutase (SOD) acts as the initial line of defense and converts superoxide into H2O2 and O2 [113,114]. Enzymes like catalase, ascorbate peroxidase, and guaiacol peroxidase make up the antioxidant system [113]. The type of metal nanoparticles, their concentration, and the culture type define the interaction of metal nanoparticles with cellular redox homeostasis and determine whether oxidative stress is induced or reduced [114]. The foliar application of Fe3O4 NPs to tobacco leaves resulted in enhanced production of ROS, which indicates the stimulation of resistance against the virus in tobacco [71]. When cucumber plants were treated with SiO2 NPs, they displayed expression of the pox and pal genes a day after inoculation with PRSV [116]. A similar observation, with increased POD gene expression, was reported when cucumber plants were treated with NiO NPs, four days after CMV inoculation [112]. AgNP-treated tomato plants inoculated with TMV and PVY revealed a major increase in the activity of enzymes such as polyphenol oxidase and the antioxidant enzyme POD [67,117]. Plant hormones are up- or downregulated under different types of stress. Nanoparticles have been shown to stimulate hormonal balance in plants [110].
Various studies have concluded that the expression of any particular plant hormone depends entirely on the specific interaction of the plant and the metal nanoparticles, together with the dose and time of application. Vinković et al. reported that treatment of Capsicum annuum L. plants with AgNPs increases cytokinin [117]. In tobacco plants infected with TMV, treatment with Fe2O3 and TiO2 NPs influenced the levels of the phytohormones zeatin riboside (ZR), abscisic acid (ABA), and brassinosteroid (BR) [115]. When similar nanoparticles were applied to tobacco plants infected with TuMV, enhanced levels of BR and ZR were observed, but ABA concentration decreased. Various other reports suggest that treatment of uninfected tobacco plants with ZnO and SiO2 [111] upregulated salicylic acid- (SA-) induced pathogenesis, and a similar effect was reported for Fe3O4 NPs [111].
Conclusion
Nanophytovirology is a very promising field for sustainable crop protection against viruses. The different nanoparticles and their applications have tremendous potential to deal with plant virus disease-related problems. Among plant viruses, DNA plant viruses, especially geminiviruses [118], are a continuous threat to farmers and a serious threat to crops [12,119]. They have a very wide host range, with varied symptoms. Geminiviruses constitute a major and rapidly emerging group [120,121] of circular, single-stranded DNA plant viruses. Various countries and regions, including the United States, Africa, India, and Pakistan, have reported large crop losses due to geminivirus infection, worth several million dollars [10,122,123]. Moreover, the effect of nanomaterials on the tripartite plant-virus-vector interaction is still not known. Although various roles and uses have already been studied, precise complementary methodologies still need to be established so that a ready-to-use technology can be given to farmers without posing any risk to the environment or consumers. Additional information and knowledge are required to particularize the doses, the stage of the plant for application, and the particular types of NPs that can produce the greatest advantages. In addition, the effect of nanoparticles on the virus-vector relationship needs to be explored, including whether it is dose-dependent or stage-dependent. For sustainable management of phytoviruses, multidisciplinary research is required, with proper planning, development, and implementation of nanobased antiviral strategies.
Conflicts of Interest
There is no conflict of interest.
Authors' Contributions
All authors contributed to the article and approved the submitted version. Rachana Singh and Deki Choden were involved in the design, conception, and critical revision of the manuscript for intellectual content. Mohammad Kuddus and Pradhyumna Kumar Singh were involved in critically examining the manuscript and incorporating important relevant information.
5,027.4
2022-10-03T00:00:00.000
[ "Biology" ]
Differential Cerebral Cortex Transcriptomes of Baboon Neonates Consuming Moderate and High Docosahexaenoic Acid Formulas
Background
Docosahexaenoic acid (DHA, 22:6n-3) and arachidonic acid (ARA, 20:4n-6) are the major long chain polyunsaturated fatty acids (LCPUFA) of the central nervous system (CNS). These nutrients are present in most infant formulas at modest levels, intended to support visual and neural development. There are no investigations in primates of the biological consequences of dietary DHA at levels above those present in formulas but within normal breastmilk levels.
Methods and Findings
Twelve baboons were divided into three formula groups: Control, with no DHA-ARA; "L", LCPUFA, with 0.33% DHA-0.67% ARA; and "L3", LCPUFA, with 1.00% DHA-0.67% ARA. All samples are from the precentral gyrus of the cerebral cortex. At 12 weeks of age, changes in gene expression were detected in 1,108 of 54,000 probe sets (2.05%), with most showing <2-fold change. Gene ontology analysis assigns them to diverse biological functions, notably lipid metabolism and transport, G-protein and signal transduction, development, visual perception, cytoskeleton, peptidases, stress response, and transcription regulation, with 400 transcripts having no defined function. PLA2G6, a phospholipase recently associated with infantile neuroaxonal dystrophy, was downregulated in both LCPUFA groups. ELOVL5, a PUFA elongase, was the only LCPUFA biosynthetic enzyme that was differentially expressed. The mitochondrial fatty acid carrier CPT2 was among several genes associated with mitochondrial fatty acid oxidation to be downregulated by high DHA, while the mitochondrial proton carrier UCP2 was upregulated. TIMM8A, also known as deafness/dystonia peptide 1, was among several differentially expressed neural development genes. LUM and TIMP3, associated with corneal structure and age-related macular degeneration, respectively, were among visual perception genes influenced by LCPUFA. TIA1, a silencer of COX2 gene translation, was upregulated by high DHA. Ingenuity pathway analysis identified a highly significant nervous system network, with epidermal growth factor receptor (EGFR) as the outstanding interaction partner.
Conclusions
These data indicate that LCPUFA concentrations within the normal range of human breastmilk induce global changes in gene expression across a wide array of processes, in addition to the changes in visual and neural function normally associated with formula LCPUFA.
INTRODUCTION
The vertebrate central nervous system (CNS) is rich in the long chain polyunsaturated fatty acids (LCPUFA) docosahexaenoic acid (DHA) and arachidonic acid (ARA), and this composition is highly conserved across species [1]. Within the CNS, DHA and ARA are found at highest concentration in gray matter [2], and DHA is particularly concentrated in retinal photoreceptor membranes, where it has long been known to play a key role in visual excitation [3]. In humans, DHA and ARA accumulate perinatally [4], and many studies of DHA/ARA-supplemented formula show improvements in visual acuity [5] and cognitive function [6]. Despite the high demand for LCPUFA during perinatal CNS development, the best current evidence indicates that ARA and DHA can be synthesized only very inefficiently from dietary precursors and must be obtained from the diet [7]. DHA and ARA are present in all human milks studied to date [8]; however, their concentration is variable.
For DHA, it is closely linked to the mother's intake of preformed DHA, which is in turn reflective of the mother's intake of fatty fish or fish/marine oil supplements [9,10,11,12]. Dietary factors associated with ARA are less well understood [13]. High levels of the precursor fatty acids linoleic acid (LA) and α-linolenic acid (ALA) in formulas yield negligible or at most moderate increases in plasma ARA and DHA concentrations [14,15]. However, in randomized controlled studies where preterm and term infants are fed formula supplemented with preformed DHA and ARA, improvements in LCPUFA status as well as cognitive development and visual functions are observed [16,17,18,19,20]. While the importance of LCPUFA in infant nutrition has been established, the underlying mechanisms are only beginning to be understood. Brain accretion of LCPUFA is most intense during the brain growth spurt in the third trimester of pregnancy and during early childhood [21,22,23,24]. Selective incorporation and functional properties of LCPUFA, especially DHA, in retinal and neural membranes suggest a specific role in the modulation of protein-lipid interactions, membrane-bound receptor function, membrane permeability, cell signaling, regulation of gene expression, and neuronal growth [25,26,27,28,29,30]. Additionally, LCPUFA mediate metacrine regulation and changes in gene expression by interacting with nutrient-sensitive transcription factors [18,31]. Accordingly, poor nutrition during prenatal life and early infancy may have a lasting influence on neural function, as well as on adult risk for chronic diseases [32,33,34]. Studies suggest that infant diets low in LCPUFA can lead to health complications such as insulin resistance, obesity, or blood pressure changes later in life [35,36]. DHA and ARA were introduced into infant formulas in the United States in 2002, but initial concentrations varied over more than a factor of two (range of DHA 8-19 mg/kcal; ARA 21-34 mg/kcal) [37], and there are no dose-response studies in humans or non-human primates available as a guide to optimal levels. A previous study in our laboratory on 4-week-old baboon neonates fed preformed DHA and ARA (0.33% w/w DHA and 0.67% ARA) in formulas showed DHA concentrations in various regions of the brain similar to breastfed controls, with the important exception of the cerebral cortex; ARA concentrations were not much altered by inclusion of dietary preformed ARA [2]. These results inspired our present study on 12-week-old baboon neonates with the higher level of 1.00% DHA, along with 0.67% ARA. We report elsewhere [38] that DHA in the precentral gyrus of the cerebral cortex increased beyond that achieved with 0.33% DHA, while regions such as the basal ganglia that reached DHA concentrations similar to breastfed animals at 0.33% DHA did not show further increases with 1.00% DHA. These data demonstrate that formula DHA in the high normal range of breastmilk DHA supports enhanced cortex DHA, but they do not reveal how this compositional change may influence metabolic function. To gather mechanistic information on the role of DHA and ARA in the primate cerebral cortex, we investigated global gene expression in the cerebral cortex of animals in this study, consuming two different levels of formula DHA, both within the range found in human breastmilk [8]. We report here changes in expression of thousands of genes in 12-week-old baboons in response to two different levels of LCPUFA: 0.33% DHA and 0.67% ARA; 1.00% DHA and 0.67% ARA.
We have reported in detail on the consequences for tissue fatty acid composition [38] and other factors elsewhere (Hsieh et al., 2007, submitted).
RESULTS AND DISCUSSION
Significance analysis (P<0.05) identified changes in expression levels of 1108 probe sets (ps) for the comparisons L3/C and/or L/C, representing 2.05% of the >54,000 ps on the oligoarray. Most ps showed <2-fold change. For the L/C comparison, 534 ps were upregulated and 574 ps were downregulated, while for the L3/C comparison, 666 ps were upregulated and 442 ps were downregulated, showing that more genes were upregulated in the cerebral cortex in response to increasing formula ARA and DHA. Functional characterization by gene ontology of these differentially regulated genes assigns them to diverse biological processes including lipid and other metabolism, ion channels and transport, development, visual perception, G-proteins and signal transduction, regulation of transcription, cell cycle, cell proliferation, apoptosis, etc. Known functions were assigned to 702 differentially expressed probe sets, whereas 406 ps had no known functions, as shown in Tables S1A, S1B, S1C, and S1D. Probe sets with ≥1.4-fold expression change are presented in Table S2. Experimental details for the nine genes used for confirmatory RT-PCR analysis are presented in Table S3. We note that in our L/C and L3/C comparisons, expression patterns fall into four groups: L/C and L3/C both upregulated, both downregulated, or one upregulated and the other downregulated. Because the L and L3 groups have the same amount of ARA but different amounts of DHA, our treatments do not strictly represent a DHA dose response. The L/C comparison corresponds to inclusion of DHA and ARA at current levels near the worldwide breastmilk means, while the L3 group corresponds to DHA near the worldwide high [8].
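As a purely hypothetical illustration of the bookkeeping behind the four expression-pattern groups just described (a significance filter, then grouping by the direction of change in the L/C and L3/C comparisons), the following sketch uses invented column names and is not the authors' analysis pipeline.

```python
import pandas as pd

def four_group_counts(df, alpha=0.05):
    """df: one row per probe set with p-values (p_LC, p_L3C) and log2
    fold changes (fc_LC, fc_L3C) for the two comparisons -- all column
    names are assumptions for this sketch."""
    sig = df[(df.p_LC < alpha) | (df.p_L3C < alpha)].copy()
    label = lambda fc: fc.gt(0).map({True: "up", False: "down"})
    sig["pattern"] = "L/C " + label(sig.fc_LC) + ", L3/C " + label(sig.fc_L3C)
    return sig["pattern"].value_counts()
```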
Nine genes were tested by quantitative real-time PCR to confirm the array results, as shown in Table S4. All were qualitatively consistent with the gene array results. We highlight results in several categories of gene ontology as follows.
Lipid (fatty acid and cholesterol) Metabolism
Table 1 presents results from genes related to lipid metabolism that are regulated by dietary LCPUFA. Genes related to phospholipid biosynthesis (PLA2G6 and DGKE) were differentially expressed. PLA2G6 was downregulated in both groups. This gene codes for the Ca-independent cytosolic phospholipase A2 Group VI. Alterations in this gene have very recently been implicated as a common feature of neurodegenerative disorders involving iron accumulation [39], as well as the underlying factor in infantile neuroaxonal dystrophy, a neurodegenerative disorder caused by accumulation of iron in the globus pallidus and resulting in death by age 10 [40]. In a previous study of four-week-old breastfed baboons, the globus pallidus was found to have 15.8±0.5% DHA (w/w of total fatty acids) and was the richest in DHA of 26 CNS regions examined [2]. The globus pallidus is also rich in ARA, with 10.3% (w/w) in four-week-old baboons. PLA2s are a superfamily of enzymes that liberate fatty acids from the sn-2 position of phospholipids; in the globus pallidus, DHA and ARA are the most abundant acyl groups at this site. Remarkably, among the elongation and desaturation enzymes associated with LCPUFA synthesis, only a single elongation enzyme was differentially expressed. The human ELOVL5 transcript was downregulated slightly in the L/C group and upregulated in the L3/C group. This enzyme, also called HELO1, catalyzes the two-carbon elongation of polyunsaturated 18- and 20-carbon fatty acids [41,42]. We also found that DGKE was upregulated in the L3/C comparison. Genes involved in ceramide metabolism (NSMAF, LASS5), glycosphingolipid metabolism (SPTLC2), and steroid metabolism (OSBP2, UGT2B15) showed increased expression in the L3/C group, whereas NSMAF and OSBP2 were downregulated in the L/C group. The best-studied role of ARA is as a precursor for eicosanoids, including prostaglandins, leukotrienes, and thromboxanes. Leukotriene C4 synthase (LTC4S), which catalyzes the first step in the biosynthesis of the cysteinyl leukotrienes derived from membrane-bound ARA, is downregulated in both DHA-ARA groups. Its product, LTC4, is a potent proinflammatory and anaphylactic mediator [43]. An elevated level of mRNA for PGES3 (prostaglandin E synthase 3) was observed in both groups. PGES3 is also known as TEBP (telomerase-binding protein p23) or inactive progesterone receptor, 23 kD (p23). p23, a ubiquitous, highly conserved protein which functions as a co-chaperone for the heat shock protein HSP90, participates in the folding of a number of cell regulatory proteins [44,45]. p23 has been demonstrated to bind to human telomerase reverse transcriptase (hTERT) and contribute to telomerase activity [46]. Decreased levels of Annexin A3 (ANXA3), also known as Lipocortin III, were observed with increasing DHA. Genes involved in fatty acid oxidation (ACADSB, ACAD10, and GLYAT) were upregulated, and carnitine palmitoyltransferase II (CPT2) was downregulated, in the L3/C group. ACADs (acyl-CoA dehydrogenases) are a family of mitochondrial matrix flavoproteins that catalyze the dehydrogenation of acyl-CoA derivatives and are involved in β-oxidation and branched-chain amino acid metabolism [47,48]. Both ACAD family members, ACADSB and ACAD10, were upregulated in the L3/C group, consistent with greater energy production in the high-DHA group. Mitochondria-specific GLYAT (glycine N-acyltransferase), also known as acyl-CoA:glycine N-acyltransferase (ACGNAT), conjugates glycine with acyl-CoA and participates in detoxification of various drugs and xenobiotics [49,50]. Mawal et al. [50] suggested that delayed development of GLYAT might impair detoxification processes in children. SOAT1 (sterol O-acyltransferase), or acyl-coenzyme A:cholesterol acyltransferase (ACAT), is an intracellular protein which catalyzes the formation of cholesterol esters in the endoplasmic reticulum and is involved in the lipid droplets that are characteristic of the foam cells of atherosclerotic plaques [57,58,59]. Increased expression was detected for ATP8B1 and PDE3A in both groups, comparatively more in L3/C, while transcripts involving HNF4A (hepatic nuclear factor-4α), CLPS, and ALDH3B2 showed decreased expression with increasing DHA. Intrahepatic cholestasis, or impairment of bile flow, is an important manifestation of inherited and acquired liver disease, resulting in hepatic accumulation of toxic bile acids and progressive liver damage. Bile acids enhance efficient digestion and absorption of dietary fats and fat-soluble vitamins and are the main route for excretion of sterols. Expression of ATP8B1 is high in the small intestine, and mutations in the ATP8B1 gene have been linked to intrahepatic cholestasis [60,61]. ATP8B1 expression was confirmed by real-time PCR (Table S4). PDE3A (phosphodiesterase 3A, cGMP-inhibited) is a 120 kDa protein found in myocardium and platelets [62].
Ding et al. [63] showed significantly decreased expression of PDE3A in the left ventricles of failing human hearts. PDE3A expression is required for the regulation of penile erection in humans [64]. Leptin (LEP), which has a role in energy metabolism, was upregulated in the L3/C group. Leptin is a secreted adipocyte hormone that plays a pivotal role in the regulation of food intake and energy homeostasis [65,66]. Leptin suppresses feeding and decreases adiposity in part by inhibiting hypothalamic Neuropeptide Y synthesis and secretion [67,68].
Ion Channel and Transport
Expression levels of transcripts involved in ion channel and transporter activity were altered by dietary LCPUFA (Table S5). Uncoupling protein 2, LOC131873 (hypothetical protein), and ATP11C, which have ion channel activity, are upregulated in both groups, but more so in L3/C. Other transcripts with ion channel activity, including VDAC3, FTH1, KCNK3, KCNH7, and TRPM1, were upregulated in the L3/C group and downregulated in L/C. GLRA2, TRPV2, and HFE are upregulated in L/C and repressed in L3/C. P2RX2, GRIA1, and CACNA1S are repressed in both groups. One of our significant observations is the increased expression of uncoupling protein 2 (UCP2), a mitochondrial proton carrier. Our data show, for the first time, increased expression of UCP2 in neonatal cerebral cortex associated with dietary LCPUFA; increased expression is observed in both groups but more in L3/C. QRT-PCR confirmed the array results (Table S4). Nutritional regulation and induction of mitochondrial uncoupling proteins resulting from dietary n-3 PUFA have been observed in skeletal muscle and white adipose tissue [69,70]. Increased UCP2 expression is beneficial in neurodegenerative disease, cardiovascular disease, and type 2 diabetes [71]. Dietary fats in milk increased the expression and function of UCP2 in neonatal brain and protected neurons from excitotoxicity [72]. VDAC3 (voltage-dependent anion channel 3) belongs to a group of pore-forming proteins found in the outer mitochondrial membrane and in brain synaptic membranes [73,74]. Massa et al. [75] observed a significant reduction of VDAC3 mRNA levels in the skeletal muscle and brains of dystrophin-deficient mdx mice during postnatal development. Mice lacking VDAC3 exhibit infertility [76]. All the transcripts having voltage-gated anion channel porin activity (VDAC3, KCNK3, and KCNH7) were upregulated with increasing DHA. FTH1 (ferritin heavy chain 1) is required for iron homeostasis, and it has previously been shown to be expressed in human brain [77]. Genes encoding small-molecule transporters were differentially expressed, including carriers of glucose (SLC2A1, SLC5A4), chloride (SLC12A6), sodium (SLC13A3), monoamines (SLC18A2), and others (SLC26A4, SLC17A6). These transporters might help in the exchange of nutrients and metabolites. Members of the cytochrome P and B family of proteins were also differentially expressed. Transcripts encoding VDP, RSAFD1, C1QG, and OXA1L were significantly repressed by increasing DHA.
G-Proteins and Signaling
Numerous genes encoding G-protein activity were differentially regulated (Table S5), and the majority were induced by high DHA. GNA13, GNA14, PTHR2, RCP9, and FZD3 showed increased expression in both DHA groups. EDG7, SH3TC2, GNRHR, ADRA1A, BLR1, GPR101, GPR20, and OR8G2 were downregulated in L/C and upregulated in L3/C. NPY1R is downregulated in both groups. DHA regulates G-protein signaling in the brain and retina [78].
G-proteins are membrane-associated proteins which promote exchange of GTP for GDP and regulate signal transduction and membrane traffic [79]. GNA13 deficiency impairs angiogenesis in mice [80], while GNA14 activates the NF-κB signaling cascade [81]. Parathyroid hormone receptor 2 (PTHR2) is activated by parathyroid hormone and is relatively abundant in the CNS [82,83]. RCP9, also known as calcitonin gene-related peptide-receptor component protein, may have a role during hematopoiesis [84]. Tissir and Goffinet [85] showed expression of FZD3 during postnatal CNS development in mice. FZD3 array results were confirmed by SYBR Green real-time PCR assay (Table S4). Neuropeptide Y is a 36-amino acid peptide with strong orexigenic effects in vivo [86]. Two major subtypes of NPY receptors (Y1 and Y2) have been defined by pharmacologic criteria. NPY1R was suggested to be unique for the control of feeding [87]. Pedrazzini et al. [88] observed a moderate but significant decrease in food intake in mice lacking the NPY1R gene. NF1 is a tumor-suppressor gene; mutations in this gene cause neurocutaneous defects [91]. NF1 gene expression and function are needed for normal fracture healing [92]. NF1 expression levels were confirmed by QRT-PCR (Table S4). WSB1 is a SOCS-box-containing WD-40 protein expressed during embryonic development in chicken [93]. The RAS and RAS-related gene families of small GTPases (RIT1, KRAS, RERG, and RAPGEF6) were upregulated by increasing DHA. Diets deficient in n-3 PUFA induce substitution of n-6 DPA (22:5n-6) in neural membranes and impairment of functions mediated by G-protein signaling, such as visual perception, learning and memory, and olfactory discrimination. Abundant evidence indicates that this results in reduced rhodopsin activation and signaling in rod outer segments compared to DHA-replete animals [78,94,95,96,97].
Development
Table 2 shows differential expression of 24 genes related to development. The products of 11 transcripts play a role in nervous system development. The expression of the TIMM8A, NRG1, SEMA3D, and NUMB genes was upregulated in both the L/C and L3/C groups. HES1 and SIM1 were downregulated in both groups. GDF11, SMA3/SMA5, and SH3GL3 were downregulated in L/C and upregulated in L3/C. The mRNA levels of the growth factors FGF5 and FGF14 displayed increased abundance in L/C and decreased abundance in L3/C. TIMM8A, also known as Deafness/Dystonia Peptide 1 (DDP1), is a well-conserved protein located in the mitochondrial intermembrane space. Loss-of-function mutations in the TIMM8A gene cause Mohr-Tranebjaerg syndrome (a progressive neurodegenerative disorder with deafness, blindness, dystonia, and mental deficiency) and Jensen syndrome (opticoacoustic nerve atrophy with dementia) [98,99,100]. A TaqMan assay confirmed the array results (Table S4). NRG1 is essential for the development and function of the CNS, facilitating neuronal migration and axon guidance [101,102]. NUMB negatively regulates Notch signaling and plays a role in retinal neurogenesis, influencing the proliferation and differentiation of retinal progenitors and the maturation of postmitotic neurons.
Visual Perception
Nine transcripts having a role in visual perception were differentially expressed (Table 3). Genes coding for LUM, EML2, TIMP3, and TTC8 were upregulated in both supplement groups. IMPG1 was upregulated in L3/C and downregulated in L/C. RGS16 and TULP2 were upregulated in L/C and downregulated in L3/C. RAX and IMPDH1 were downregulated in both supplement groups.
Lumican (LUM) is an extracellular matrix glycoprotein and a member of the small leucine-rich proteoglycan (SLRP) family [105]. It is widely distributed in the corneal stroma and connective tissues [106]. Lumican helps in the establishment of corneal stromal matrix organization during neonatal development in mice. Mice lacking lumican exhibit several cornea-related defects [107]. It is important for corneal transparency in mice [108]. The TaqMan assay showed 5-fold greater upregulation of LUM than the microarray data (Table S4). Mutations in the TIMP3 gene result in the autosomal dominant disorder Sorsby's fundus dystrophy, an age-related macular degeneration of the retina [109]. Clarke et al. [110] suggested that a possible mechanism for retinal degeneration in Sorsby's fundus dystrophy was traceable to nutrition. IMPG1 is a proteoglycan which participates in retinal adhesion and photoreceptor survival [111]. Higher amounts of DHA in the infant formula increased the expression of IMPG1. Expression of the RAX transcript is decreased in both supplement groups. Increased RAX expression is seen in retinal progenitor cells during vertebrate eye development and is downregulated in differentiated neurons [112,113]. DHA is well known to promote neurite growth in the brain [30]; this could be a possible reason for RAX downregulation in our study.
Integral to Membrane/Membrane Fraction
Transcripts that are an integral part of biological membranes or within the membrane fractions were differentially expressed (Table S5). EVER1, PERP, Cep192, SSFA2, LPAL2, TMEM20, and TM6SF1 were upregulated in both groups. ORMDL3, SEZ6L, HYDIN, TA-LRRP, and PKD1L1 were upregulated in L3/C and downregulated in L/C. MFAP3L was upregulated in L/C and downregulated in L3/C. Transcripts of GP2 and SYNGR2 were downregulated in both groups. A number of transcripts were upregulated by increased DHA in the formulas. LCPUFA can affect biological membrane functions by influencing membrane composition and permeability, interaction with membrane proteins, membrane-bound receptor function, photoreceptor signal transduction, and transport [114,115,116]. Mutations in the EVER1, or transmembrane channel-like 6 (TMC6), gene cause epidermodysplasia verruciformis, a type of skin disorder [117]. HYDIN is a novel gene, and nearly complete loss of its function due to mutations causes congenital hydrocephalus in mice [118]. The exact function of GP2 is unknown, but it has been associated with the secretory granules of the pancreas [119].
Programmed Cell Death/Apoptosis
Transcripts with apoptotic activity were differentially expressed (Table S5). Seven of the nine transcripts in our study were upregulated with increasing DHA, including CARD6, TIA1, BNIP1, FAF1, GULP1, CASP9, and FLJ13491. Programmed cell death (PCD) plays an important role during the development of the immune and nervous systems [120]. Jacobson et al. [121] proposed PCD as an important event in eliminating unwanted cells during development. Mice with targeted deletion of CASP3 die perinatally due to a vast excess of cells in their CNS as a result of decreased apoptotic activity [120]. CARD6 (caspase recruitment domain protein 6) is upregulated in both groups. It is a microtubule-interacting protein that activates NF-κB and takes part in the signaling events leading to apoptosis [122]. TIA1 is upregulated in L3/C and downregulated in L/C. TIA1 is a member of an RNA-binding protein family with pro-apoptotic activity, and it silences the translation of cyclooxygenase-2 (COX2).
Narayanan et al. [123] suggested that DHA indirectly increases the expression of genes which downregulate COX2 expression. The COX2 enzyme catalyzes the rate-limiting step of prostaglandin production; prostaglandins influence many processes, including inflammation [124]. Downregulation of TIA1 in L/C could be due to the influence of ARA, the major COX2 substrate, rather than that of DHA, which is a competitive inhibitor. GULP1 assists in the efficient removal of apoptotic cells by phagocytosis [125]. CASP9 activates the caspase activation cascade and is an important component of the mitochondrial apoptotic pathway [126].
Cytoskeleton and Cell Adhesion
Dietary LCPUFA regulated the expression of several transcripts involved in the cytoskeleton and cell adhesion (Table S5). The expression of 27 ps involved in the cytoskeleton was altered. MYO1A and MYO5A were upregulated with increasing amounts of DHA, whereas MYO1E showed decreased expression. Myosin-1 isoforms are membrane-associated molecular motors which play essential roles in membrane dynamics, cytoskeletal structure, and signal transduction [127]. COL4A6 and COL9A3 showed increased expression, whereas COL4A2 and COL9A2 showed decreased expression with increasing DHA. Type IV collagen is the major component of the basement membrane. Mild forms of Alport nephropathy are associated with deletions in the COL4A6 gene [128], and eye abnormalities are common in people afflicted with Alport syndrome [129]. WASL, also known as neural WASP (N-WASP), was upregulated in both groups. Actin cytoskeleton regulation is vital for brain development and function. WASL is an actin-regulating protein and mediates filopodium formation [130,131,132]. HIP1 (huntingtin-interacting protein 1) and HOOK2 (hook homolog 2) were downregulated in both groups. The expression levels of 15 transcripts involved in cell adhesion changed as a result of dietary LCPUFA (Table S5). BTBD9, CD44, ARMC4, CD58, LOC389722, and PCDHB13 showed increased expression in both groups. Glycoprotein CD44 is a cell-surface adhesion molecule that is involved in cell-cell and cell-matrix interactions [133], while PCDHB13 is a member of the protocadherin beta family of transmembrane glycoproteins [134]. NLGN3 and CYR61 were downregulated in both groups.
Peptidases
Several transcripts having peptidase activity were differentially expressed (Table S5). SERPINB6 is significantly upregulated in L3/C and downregulated in L/C. Of note, members of the ADAM family of proteins (ADAM17, ADAM33, and ADAMTS16) were upregulated, and ADAMTS15 was downregulated, in both supplement groups. ADAM proteins are membrane-anchored glycoproteins named for two of the motifs they carry: an adhesive domain (disintegrin) and a degradative domain (metalloprotease) [135]. These proteins are involved in several biological processes, including cell-cell interactions, heart development, neurogenesis, and muscle development [136,137,138,139]. ADAM17 is required for the proteolytic processing of other proteins and has been reported to participate in cleaving the amyloid precursor protein [140,141]. Loss of ADAM17 is reported to cause abnormalities of the heart, skin, lung, and intestines [142,143,144]. Real-time PCR confirmed the array results for ADAM17 (Table S4). ADAM33 has recently been implicated as an asthma and bronchial hyperresponsiveness gene [145]. It is required for smooth muscle development in the lungs, helps in airway wall 'modeling', and supports proper functioning of the lungs throughout life [146,147].
CTSB (cathepsin B), also known as amyloid precursor protein secretase (APPS), was upregulated. It is involved in the proteolytic processing of the amyloid precursor protein [148]. Felbor et al [149] reported that deficiency of CTSB results in brain atrophy and loss of nerve cells in mice. CTSC (cathepsin C) was downregulated in the L/C group and upregulated in the L3/C group. Loss-of-function mutations in the CTSC gene are associated with tooth and skin abnormalities [150].

Cell Cycle, Cell Growth and Cell Proliferation Fifteen transcripts having a role in cell cycle regulation, growth and proliferation were differentially expressed (Table S5). Four of the transcripts involved in cell cycle regulation, SESN3, RAD1, GAS1 and PARD6B, were upregulated in both groups. The cell growth factors INHBC and OGN were induced in both groups. FGFR1OP is a positive regulator of cell proliferation and showed increased expression. KAZALD1, CDC20 and CDKN2C were downregulated. Growth arrest specific gene 1 (GAS1) expression is positively required for postnatal cerebellum development. Mice lacking GAS1 had significantly reduced cerebellar size compared to wild-type mice [152]. Liu et al [152] proposed that GAS1 performs dual roles in cell cycle arrest and in proliferation in a cell-autonomous manner. PARD6B has a role in axonogenesis [153]. INHBC is a member of the transforming growth factor-beta (TGF-beta) superfamily and is involved in cell growth and differentiation [154,155]. Osteoglycin (OGN) is also known as Mimecan and Osteoinductive factor (OIF). Mimecan is a member of the small leucine-rich proteoglycan gene family and is a major component of the cornea and other connective tissues [156,157]. It has a role in bone formation, cornea development and the regulation of collagen fibrillogenesis in the corneal stroma [157,158,159]. CDC20 regulates the anaphase-promoting complex [160].

Response to Stress The MSRA, SOD2, GSTA3 and GSR genes were differentially expressed (Table S5). MSRA was upregulated in both supplement groups. SOD2 is downregulated in L/C and upregulated in L3/C. GSR is upregulated in L/C and downregulated in L3/C. GSTA3 is downregulated in both groups. Oxidative damage to proteins by reactive oxygen species is associated with oxidative stress, aging, and age-related diseases [161,162,163]. MSRA is expressed in the retina, neurons and the nervous system [162]. Knockouts of the MSRA gene in mice result in shortened life-spans under both normoxia and hyperoxia conditions [164]. MSRA also participates in the regulation of proteins [165]. MSRA plays an important role in neurodegenerative diseases like Alzheimer's and Parkinson's by reducing the effects of reactive oxygen species [163]. Overexpression of MSRA protects human fibroblasts against H2O2-mediated oxidative stress [166]. SOD2 belongs to the iron/manganese superoxide dismutase family. It encodes a mitochondrial protein and helps in the elimination of reactive oxygen species generated within mitochondria [167]. In our study, increased amounts of DHA reduced the expression of the glutathione-related proteins GSR and GSTA3.

Kinases and Phosphatases Phosphorylation and dephosphorylation of proteins control a multitude of cellular processes. Several transcripts having kinase activity were altered (Table S5). Of note, the transcripts STK3, STK6, HINT3, TLK1, DRF1, GUCY2C and NEK1 were significantly upregulated with increasing DHA. A number of MAP kinases were downregulated in the L3/C group, including MAP4K1, MAPK12, MAP3K2 and MAP3K3.
Other transcripts which showed significantly decreased expression were CKM, LMTK2, NEK11, TNK1, BRD4 and MGC4796.

Ubiquitin Cycle Twenty-five probe sets having a role in the ubiquitination process were differentially expressed (Table S5). Interestingly, five members of the F-box protein family (FBXL7, FBXL4, FBXL17, FBXW4 and FBXW8) showed increased expression in the L3/C group. F-box proteins participate in varied cellular processes such as signal transduction, development, regulation of transcription and cell cycle transition. They contain protein-protein interaction domains and participate in phosphorylation-dependent ubiquitination [169,170]. Proteins associated with the anaphase-promoting complex (CDC23 and ANAPC1) were downregulated in the L3/C group.

Ingenuity Network Analysis We explored relationships among sets of genes using Ingenuity Systems network analysis. Out of 1108 differentially expressed probe sets in our data, 387 probe sets (34.93%) were found in the Ingenuity Pathway Analysis (IPA) knowledge database and are labeled "focus" genes. Based on these focus genes, IPA generated 41 biological networks (Table S6). Among these 41 networks, 24 had scores of >8, and the top 2 networks, with 35 genes each, had scores of 49. We focus here on the most significant network. The top network identified by IPA is associated with nervous system development and function, and cellular growth and proliferation (Figure 1). Epidermal growth factor receptor (EGFR) is the most outstanding interaction partner found within the network. EGFR interacts with TIMP3, NRG1, ADAM17, EDG7 and FGF7; all are upregulated and involved in neural or visual perception development. EGFR signaling is implicated in early events of epidermal, neural and eye development. Loss of EGFR signaling results in reduced brain size and loss of the larval eye and optic lobe in Drosophila [171]. EGFR expression is required for postnatal forebrain and astrocyte development in mice [172]. Functional pathway analysis conducted on this network using the IPA tool set identified three genes, ADAM17, NUMB and HES1, involved in the Notch signaling pathway, which regulates nervous system and eye development [173,174]. ADAM17 and NUMB were upregulated while HES1 was repressed in both groups. This analysis suggests that LCPUFA influence many processes that converge on EGFR. LCPUFA are known to directly interact with nutrient-sensitive transcription factors such as peroxisome proliferator-activated receptors (PPARs), liver X receptors, hepatic nuclear factor-4α, sterol regulatory element-binding proteins, retinoid X receptors and NF-κB. Upon ingestion, LCPUFA can elicit a transcriptional response within minutes [31,175,176,177]. Microarray studies on LCPUFA-supplemented animals have identified several tissue-specific pathways regulated by LCPUFA, particularly involving the liver, adipose, and brain tissue transcriptomes [26,178,179]. Using murine 11K Affymetrix oligoarrays, Berger et al [178,180] showed increased hepatic expression of lipolytic genes and decreased expression of lipogenic genes. However, in the hippocampus brain region, increased expression of HTR4 and decreased expression of TTR and SIAT8E, genes involved in the regulation of cognition and learning, as well as of POMC, a gene associated with appetite control, was identified. The first paper published on the brain gene transcriptome with respect to LCPUFA supplementation, by Kitajka et al.
in 2002 [181], demonstrated that feeding fish oil (DHA 26.9%) to rats increased the expression of genes involved in lipid metabolism (SPTLC2, FPS), energy metabolism (ATP synthase subunit d, ATP synthase H+, cytochromes, IDH3G), the cytoskeleton (actin-related protein 2, TUBA1), signal transduction (calmodulins, SH3P4, RAB6B small GTPase), receptors, ion channels and neurotransmission (vasopressin V1b receptor, somatostatin), synaptic plasticity (synucleins) and regulatory proteins (protein phosphatases). In the same study, fish oil supplementation also significantly reduced the expression of phospholipase D and transthyretin. In related work, Kitajka et al [26], using rat cDNA microarrays with 3,200 spots, found results similar to those previously reported. Barcelo-Coblijn et al. [182] were the first to report moderation of age-induced changes in gene expression in rat brain as a result of diets rich in fish oil (DHA 11.2%). In this study, 2-month-old rats showed increased expression of SNCA and TTR; however, 2-year-old rats exhibited no significant changes. In addition, Puskas et al. [183] demonstrated that administration of omega-3 fatty acids from fish oil (5% EPA and 27% DHA; total fat content: 8%) for 4 weeks in 2-year-old rats induced the expression of transthyretin and mitochondrial creatine kinase and decreased the expression of the HSP86, ApoC-I and Makorin RING zinc-finger protein 2 genes in the hippocampus brain region. Finally, Flachs et al [179] showed increased expression of genes for mitochondrial proteins in adipose tissue. In comparison with previous brain transcriptome analyses, the present study, employing high-density Affymetrix oligoarrays (>54,000 probe sets), revealed genes differentially regulated by LCPUFA at ranges mimicking breastmilk. With the exception of SPTLC2, which we also found to be upregulated in the L/C and L3/C comparisons, none of the remaining previously identified genes were differentially expressed in our dataset. Many factors are likely to contribute to the observed differences in differentially expressed genes between our study and previous work. One likely source is the difference in dietary DHA/ARA, which here is within the range of human and baboon breastmilk; previous studies used much higher amounts of DHA, from 11.2% to 27% [182,183]. Also, interactions between the levels of ARA and DHA supplied in our study add some complexity to the interpretation, since the three treatments do not represent a strict dose response to DHA. However, our DHA and ARA come from sources that are routinely consumed by human infants in commercial infant formulas, and thus are directly relevant to that group. Despite the lower levels of DHA/ARA, genes in our data set show subtle changes in expression. Moreover, the magnitude of these results is not surprising given the nutritional focus of the study, in which subtle, widespread shifts in transcription may have profound biological effects. Our data indicate that LCPUFA supplementation within the ranges of breastmilk will induce global changes in gene expression across numerous biological processes.

Conclusions The impact of DHA and ARA on infant baboons was both significant and widespread. We identified several novel differentially expressed transcripts in 12-week-old baboon cerebral cortexes modulated by dietary LCPUFA. The majority of probe sets showed subtle changes in gene transcription. In the cerebral cortex, we observed increased expression of the mitochondrial proton carrier UCP2 (uncoupling protein 2) in both groups, but more in L3/C.
PLA2G6, implicated in childhood neurodegeneration, was differentially expressed. TIA1, a silencer of COX2 gene translation, is upregulated in L3/C. Increased expression was observed for TIMM8A, NRG1, SEMA3D and NUMB, genes involved in neural development. The LUM, EML2, TIMP3 and TTC8 genes, with roles in visual perception, were upregulated. Hepatic nuclear factor-4α (HNF4A) showed decreased expression with increasing DHA. RARA was repressed in both groups. A network involving 35 genes attributed to neural development and function was identified using Ingenuity pathway analysis, emphasizing EGFR as the most outstanding interaction partner in the network. In this network, EGFR interacts with genes involved in neural or visual perception: TIMP3, NRG1, ADAM17, EDG7 and FGF7. Although subtle, the upregulation of NUMB and downregulation of HES1 in the Notch signaling pathway, not previously shown to interact with fatty acids, supports the involvement of LCPUFA, particularly DHA, in neural development. Interestingly, no known desaturases and only one elongase, the LCPUFA biosynthetic enzymes, were differentially expressed in the cerebral cortex. In a study of liver gene expression in preparation, the fatty acid desaturases SCD and FADS1 were significantly downregulated in liver, where we identified a multifunctional protein, TOB1, which is significantly upregulated. These data represent the first comprehensive transcriptome analysis in primates and have identified widespread changes in cerebral cortex genes that are modulated by increases in DHA induced by dietary means. Importantly, the range of DHA used here is within the limits of human and primate breastmilks, the natural food for infants, and indicates that CNS gene expression responds to LCPUFA concentrations.

MATERIALS AND METHODS Details of the experimental design, animal characteristics, and tissue sampling are available elsewhere [38] and are outlined briefly here.

Animals and Diets The animal phase took place at the Southwest Foundation for Biomedical Research (SFBR), San Antonio, TX, and was approved under animal care and research protocols of SFBR and the Cornell University Institutional Animal Care and Use Committee (IACUC). Twelve baboon neonates born spontaneously at around 182 days gestation were randomized into 3 groups (n = 4 per group). They were fed for 12 weeks on one of three formulas: C: control (no DHA-ARA); L: 1× LCPUFA (0.33% DHA-0.67% ARA); L3: 3× LCPUFA (1.00% DHA-0.67% ARA). Formulas in color-coded cans were kindly provided by Mead-Johnson Nutritionals (Evansville, IN) in ready-to-feed form, 2 colors per treatment, so that investigators were masked to the treatments.

Sampling and Array Hybridization Twelve-week-old baboon neonates were anesthetized and euthanized at 84.4 ± 1.1 days. Tissue collected from the precentral gyrus of the cerebral cortex was placed in RNAlater according to vendor instructions and was used for the microarray analysis and validation of microarray results. Microarray studies utilizing baboon samples with human oligonucleotide arrays have been successfully carried out previously [184,185]. Cerebral cortex global messenger RNA in the three groups was analyzed using Affymetrix GeneChip HG-U133 Plus 2.0 arrays (http://www.affymetrix.com/products/arrays/specific/hgu133plus.affx). The HG-U133 Plus 2.0 has >54,000 probe sets representing 47,000 transcripts and variants, including 38,500 well-characterized human genes. One hybridization was performed for each animal (12 chips total).
RNA preparations and array hybridizations were processed at Genome Explorations, Memphis, TN (http://www.genome-explorations.com). The completed raw data sets were downloaded from the Genome Explorations secure FTP servers.

Microarray Data Analysis Raw data (.CEL files) were uploaded into Iobion's Gene Traffic MULTI 3.2 (Iobion Informatics, La Jolla, CA, USA) and analyzed using the robust multi-array analysis (RMA) method. In general, RMA performs three operations specific to Affymetrix GeneChip arrays: global background normalization, normalization across all of the selected hybridizations, and log2 transformation of perfect-match oligonucleotide probe values [186]. Statistical analysis using the significance analysis tool set in Gene Traffic was utilized to perform multiclass ANOVA on all probe-level normalized data. Pairwise comparisons were made between C vs L and C vs L3, and all probe set comparisons reaching P < 0.05 were included in the analysis. Gene lists of differentially expressed probe sets were generated from this output for functional analysis.

RNA Isolation and RT-PCR RT-PCR was conducted on nine genes to confirm the results of the array analysis. Total RNA from 30 mg samples of baboon cerebral cortex brain tissue homogenates was extracted using the RNeasy Mini kit (Qiagen, Valencia, CA). Each RNA preparation was treated with DNase I according to the manufacturer's instructions. The yield of total RNA was assessed by 260 nm UV absorption. The quality of RNA was analyzed by the 260/280 nm ratios of the samples and by agarose gel electrophoresis to verify RNA integrity. One microgram of total RNA from each group (C, L, L3) was reverse-transcribed into first-strand cDNA using the iScript cDNA synthesis kit (Bio-Rad, Hercules, CA). The iScript reverse transcriptase is a modified MMLV-derived reverse transcriptase, and the iScript reaction mix contains both oligo(dT) and random primers. The generated first-strand cDNA was stored at -20 °C until used. Quantitative real-time PCR using SYBR green and TaqMan assay methods was used to verify the differential expression of selected genes that were upregulated in the L3/C comparison. All primers were gene-specific and generated from human sequences (www.ensembl.org). PCR primers were designed with PrimerQuest software (IDT, Coralville, IA) and ordered from Integrated DNA Technologies (IDT, Coralville, IA). Initially, primers were tested by polymerase chain reactions with baboon cerebral cortex brain cDNA as template in a 30 µL reaction volume using an Eppendorf gradient thermal cycler (Eppendorf), with 1 µM of each primer, 0.25 mM each of dNTPs, 3 µL of 10× PCR buffer (Perkin-Elmer Life Sciences, Foster City, CA, USA), 1.5 mM MgCl2 and 1.5 U Taq polymerase (AmpliTaq II; Perkin-Elmer Life Sciences). Thermal cycling conditions were: initial denaturation at 95 °C for 5 min, followed by 25-35 cycles of denaturation at 95 °C for 30 s, annealing at 60 °C for 1 min and extension at 72 °C for 1 min, with a final extension at 72 °C for 2 min. PCR products were separated by electrophoresis on a 2% agarose gel stained with ethidium bromide, and bands of the appropriate sizes were obtained. The PCR products of LUM, TIMM8A, UCP2, β-ACTIN, ADAM17 and ATP8B1 were sequenced and deposited with GenBank (Acc. Numbers: DQ779570, DQ779571, DQ779572, DQ779573, DQ779574 and DQ779575, respectively).
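For context on the RMA step described above: the cross-array normalization component of RMA is a quantile normalization. The following is a simplified Python sketch of that one idea only (the real RMA method [186] additionally performs model-based background correction and median-polish probe summarization, which are not shown; all intensities below are hypothetical):

```python
import numpy as np

def quantile_normalize(expr):
    """Force the columns (arrays) to share one common intensity distribution,
    as in the cross-array normalization step of RMA (simplified sketch)."""
    ranks = np.argsort(np.argsort(expr, axis=0), axis=0)  # rank of each probe within its array
    mean_quantiles = np.sort(expr, axis=0).mean(axis=1)   # average distribution across arrays
    return mean_quantiles[ranks]                          # map each rank to the mean quantile

# Hypothetical 4 probes x 3 arrays of raw intensities.
expr = np.array([[120.0,  95.0, 210.0],
                 [300.0, 310.0, 290.0],
                 [ 45.0,  60.0,  40.0],
                 [ 80.0,  75.0,  90.0]])
normalized = np.log2(quantile_normalize(expr))  # RMA reports log2-scale values
print(normalized)
```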
Initially, standardized primers for the genes ATP8B1, ADAM17, NF1, FZD3, ZNF611, UCP2, EGFR and the control β-ACTIN were used for the SYBR green real-time PCR assay (Power SYBR Green PCR Master Mix, Applied Biosystems, Foster City, CA). We used the baboon LUM, TIMM8A and β-ACTIN sequences to design TaqMan assays (Assay by Design; www.appliedbiosystems.com). The selected gene symbols, primer pairs and probe details are depicted in Table S3. Quantitative real-time PCR reactions were run on the Applied Biosystems Prism 7300/7500 real-time PCR system (Applied Biosystems, Foster City, CA). After 2 minutes of UNG activation at 50 °C, initial denaturation at 95 °C was carried out for 10 minutes; the cycling conditions of 40 cycles consisted of denaturation at 95 °C for 15 seconds, annealing at 60 °C for 30 seconds, and elongation at 72 °C for 1 minute. For the SYBR green method, the UNG activation step is eliminated. All reactions were done in triplicate, and β-ACTIN was used as the reference gene. Relative quantification was performed using the comparative CT method (ABI Relative Quantification Chemistry guide #4347824).

Network Analysis We used a web-delivered bioinformatics tool set, Ingenuity pathway analysis (IPA 3.0) (http://www.ingenuity.com), to identify functional networks influenced by our dietary treatments. IPA is a knowledge database generated from peer-reviewed scientific publications that enables the discovery, visualization and exploration of functional biological networks in gene expression data and delineates the functions most significant to those networks. The 1108 differentially expressed probe sets identified by the microarray data, as discussed above, were used for network analyses. Affymetrix probe set IDs were uploaded into IPA and queried against all other genes stored in the IPA knowledge database to generate a set of networks having up to 35 genes. Each Affymetrix probe set ID was mapped to its corresponding gene identifier in the IPA knowledge database. Probe sets representing genes having direct interactions with genes in the IPA knowledge database are called "focus" genes, which were then used as a starting point for generating functional networks. Each generated network is assigned a score according to the number of differentially regulated focus genes in our dataset. These scores are derived from the negative logarithm of the P value, indicative of the likelihood that the focus genes are found together in a network due to random chance. Scores of 4 or higher have a 99.9% confidence level of significance, as defined in detail elsewhere [188].
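Relative quantification by the comparative CT method follows the standard 2^(-ΔΔCT) relationship described in the ABI guide cited above. A minimal sketch with hypothetical CT values (the gene choice and numbers below are illustrative, not measured data), normalizing a target gene to the β-ACTIN reference:

```python
# Minimal sketch of the comparative CT (2^-ddCT) relative quantification method;
# all CT values here are hypothetical.

def fold_change(ct_target_treated, ct_ref_treated,
                ct_target_control, ct_ref_control):
    """Return relative expression of a target gene vs. a reference gene."""
    d_ct_treated = ct_target_treated - ct_ref_treated  # normalize to reference gene
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control                # compare treated to control
    return 2 ** (-dd_ct)

# Example: a target gene in the L3 group vs. control, normalized to beta-actin.
print(fold_change(24.1, 18.3, 26.5, 18.4))  # ddCT = -2.3, i.e. ~4.9-fold upregulation
```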
9,828
2007-04-11T00:00:00.000
[ "Biology", "Medicine" ]
Contextual Video: Critical Thinking-Based Learning Media in the Implementation of Curriculum 2013 This research aims to describe the development of critical thinking-based contextual video media on the subject matter of economic issues and how to overcome them, to determine the feasibility of the critical thinking-based contextual video media, and to examine students' responses towards it. The implementation of Curriculum 2013 cannot be separated from the government's expectation that learners have good soft skills. The desired skills are the 21st century skills known as the 4C: creative, critical thinking, communicative and collaborative. These skills emphasize soft skills implemented in daily life, so that learners understand their importance. Contextual video helps to simplify the process: learners are brought into real life, and learning becomes more implementative. Critical thinking is the basis of the video; the video-making process is designed in such a way that learners come to understand the material by themselves by analyzing every plot point of the video. The video material is about economic issues and how to overcome them. This research was Research and Development (R&D). The development cycle used Thiagarajan, Semmel and Semmel's model, or 4-P model, consisting of defining, designing, developing, and disseminating; however, this study proceeded only up to the development stage. The study took place at Vocational High School Muhammadiyah 1 Babat. The results state that, based on the media feasibility test, the material expert rated the media very feasible with a percentage of 80.44%, and the learning media expert assessed it very well with a percentage of 88.82%. The class X students' response to the media was very good, with a percentage of 83.64%. It is therefore concluded that the contextual video media is feasible to be used as a learning media.

must be present in improving national education as a whole. A curriculum is a guideline that contains material, purpose, method and time allocation; in other words, the curriculum contains the competences to be achieved through learning and possessed by graduating students after a certain time program (Widiyanto, 2010). The curriculum is also an actual activity that must be carried out and implemented during the learning process (Asrori, 2016). The school curriculum is generally accepted as explicit, conscious, and formally planned courses (Kentli, 2009). The curriculum as a construction can consolidate undertaken initiatives and highlight a coherent strategy or focus for the provision of more valuable and meaningful learning opportunities (Hicks, 2007). The curriculum is defined as the sum of all experiences which must be provided in the educational institution (Bharvad, 2010). Research shows that the curriculum has a high level of importance in supporting teachers in conducting teaching and learning activities. Qualitative and quantitative data were collected in the form of interviews and classroom observations from 27 high school chemistry teachers. The data analysis shows that curriculum implementation is strongly influenced by teachers' beliefs about teaching and learning, and by the existence of supporting networks in their schools' locations (Fullan, 2008).
One of the curriculum's roles is to support teachers' professional development and the improvement of students' learning outcomes; findings indicate that teachers and students show significant gains in knowledge and learning outcomes (Fishman, 2013). The curriculum is enhanced to improve the quality of education nationally (Hasyim, 2011). In 2013 the Indonesian government, through the ministry of education and culture, formally issued a regulation that surprised all stakeholders in the education field: the discourse of the change of curriculum, from KTSP to a new curriculum named Kurikulum 2013. The discourse is based on the assumption that the national curriculum

INTRODUCTION Education is organized by giving exemplary models, building willingness, and developing learners' creativity in the learning process. More broadly, the function of national education is to develop the ability and form the character and civilization of a dignified nation in order to educate the life of the nation, and to develop the potential of learners to become human beings who believe in and are devoted to God the Almighty, and are noble, healthy, knowledgeable, capable, competent, independent, democratic and responsible citizens. In order to achieve these objectives, the government seeks to establish a strong national education system. The unity of the educational components of the national education system cannot be separated from the strength of the educational triad: family education, school education and community education, because learners' development cannot be separated from these three elements, which together provide education at each phase of the learners' growth. Family education is the first, basic education a person obtains; in the community environment, one receives education about morals, behavior, norms and so on, while in school education, one receives education in both academic and behavioral elements. "Education is about interaction between people, the interaction between educator and learner, between parent and children, between teacher and student, as well as between the environment and learner" (Baswedan, 2014). The government positioned education as one of the national goals of independence, as stated in the fourth paragraph of the preamble of the 1945 constitution: "... to educate the nation's life ..." (Annisalsabila, 2014). It is a consequence of the constitution that the state should facilitate its people in education; education is an important component in building an advanced and competitive national civilization. Therefore, the curriculum needs to be changed, because it is no longer relevant and does not meet global needs and the current state. Through the new curriculum, the government has an important vision to advance national education through whole-person education, developing all the cognitive, affective and psychomotor aspects. Improvement and balance of soft skills and hard skills include the aspects of attitude, skills, and knowledge competence (Windriyas, 2014). These three aspects are developed in the learning of Kurikulum 2013. The cognitive aspect is directed to scientific approach-based learning, in which the learning process applies five scientific skills: observing, asking, trying/gathering information, associating/reasoning, and communicating findings (Kemendikbud, 2013).
In 2016, the government revised the curriculum; one of the results suggested that the 5M scientific approach is not the only teaching method, and that if it is used, its components do not have to be applied in sequence (http://gerbangkikulum.psma.kemdikbud.go.id). The affective aspect is also strongly emphasized because, recently, the Indonesian people have experienced considerable moral degradation: blasphemy, intolerance, hate speech, hoax news and other behaviors that threaten to disunite the nation. The focus of the 2013 curriculum brings the character education strengthening program (penguatan pendidikan karakter/PPK) into the learning process. The last aspect is the psychomotor, or skills, aspect. To improve success in today's adulthood, current high school graduates should be equipped not only with academic skills but also with life and career skills; therefore, it is very important for schools to consider students' mastery of non-cognitive skills. Such skills are very important and a top priority in the global market (Ball, Joyce, and Anderson-Butcher, 2016). The skills meant here are the 21st century skills: the government's revision of Kurikulum 2013 is themed around 21st century skills in developing the skills mastered by students, known as the 4C: creative, critical thinking, communicative and collaborative. These skills emphasize soft skills implemented in everyday life, so that learners understand the importance of the knowledge. 21st century skills can encourage learners to solve their current problems, encourage learners to collaborate with many parties in solving problems, and enable a person to communicate and collaborate effectively with various parties (Ah-Nam and Osman, 2017). 21st century skills also provide opportunities for learners to develop their learning comprehensively and can support them in understanding: (1) what needs to be learned/obtained comprehensively in the subjects studied, and (2) how they learn, supported by learning innovations that are active-participatory, relevant, and student-centered (Imam, 2016). Considering the importance of the 21st century skills currently needed, learning must be directed toward those skills. One of the 21st century skills is critical thinking. This skill demands that students think critically about the subjects they study, not just procedurally. This skill, if developed in the learner, will have implications for a mindset that is more sensitive to events in the surrounding environment. Critical thinking brings students to explore existing problems and is expected to help solve them. Critical thinking skills can be realized through supportive learning as well as supportive learning media. One medium that can be used in the development of critical thinking capability is contextual video. According to Daryanto (2011), learning videos have the advantages that video adds a new dimension to learning, presents moving images to students together with the accompanying audio, and can display phenomena that are difficult to see in real life. Contextual video can also help to simplify the process, because through video, learners are brought into real life, so that learning is no longer merely textual but implementative.
As mentioned, critical thinking is the basis of the contextual video: the video-making process is organized in such a way that learners come to understand the materials by themselves by analyzing each plot point in the video. The video material is on economic issues and how to overcome them. This research was conducted in SMA Muhammadiyah 1 Babat, one of the schools in Lamongan regency appointed to implement Curriculum 2013. A preliminary study found that implementation faced some obstacles: a lack of socialization from the government, many teachers experiencing confusion in applying scientific learning, and a lack of learning media supporting the learning process, so that learning remains teacher-oriented. Hence, Curriculum 2013 presents both a demand and a challenge for all educators to innovate in learning. The problems occurring in SMA Muhammadiyah 1 Babat can be bridged by the development of learning media in the form of critical thinking-based contextual video. Based on the explanation of the problem above, this research was conducted with the purpose of describing the development of critical thinking-based contextual video media on the subject matter of economic issues and how to overcome them, determining the feasibility of the media, and assessing students' responses towards it.

This research uses the research and development (R&D) approach. The method is an industry-based research and development method, in which research findings are used to design new products and procedures, which are then systematically field-tested, evaluated, and refined until they meet criteria of effectiveness, quality, or other standards. Sugiyono (2013) defines the research and development method as a research method used to produce a specific product and test the effectiveness of that product, so that the product can later be utilized by the community in need. It can thus be concluded that the research and development (R&D) method is a method used to produce certain products, models, methods, strategies, ways, services, and specific procedures, whose feasibility and effectiveness are tested, so that they can later benefit the public. The research procedure uses the 4-D research and development model of Thiagarajan, Semmel and Semmel. In this model, the development of a learning media product involves four stages: defining, designing, developing, and disseminating, or in bahasa Indonesia: pendefinisian, perancangan, pengembangan, dan penyebaran (Trianto, 2011); in this study, however, development proceeded only to the developing stage. The defining stage: the purpose of this stage is to set and define the requirements of learning. Determining and defining the requirements of learning begins with an objectives analysis within the limitations of the material for which the device is developed. This stage includes five basic steps, namely: (a) initial and final analysis, (b) student analysis, (c) task analysis, (d) conceptual analysis, and (e) learning objectives specification (Anwar, 2006). The purpose of the designing stage is to prepare draft 1 of the learning device. This stage consists of four steps, which are: (1) test preparation, (2) media selection, (3) format selection, and (4) initial design; the explanation of each stage is as follows (Anwar, 2006).
Developing stage: the objective of this stage is to obtain media that have been revised by the experts (expert judgment). This stage includes: (1) expert validation, and (2) developmental trials by simulating the media, i.e. operating the media in a limited trial with a small-group evaluation of 20 students, with a composition reflecting the characteristics of the population, consisting of less intelligent, moderate, and intelligent students (Sadiman et al, 2009); the results are used as a basis for revision. Disseminating stage: this is the stage of using the developed devices on a wider scale. This stage was not carried out, because the media were developed only for learning in the school and class concerned. To better understand the development flow, see chart 1. The test subjects in this study are divided into two groups: the experts (expert judgment) and the target users. The expert subjects are divided into two: material experts and learning media experts. The material experts are a lecturer of the economic education study program of the State University of Surabaya and an economics teacher of SMA Muhammadiyah 1 Babat, and the media expert is a lecturer in curriculum and educational technology (kurikulum dan teknologi pendidikan/KTP) of the State University of Surabaya. The subjects of the target user trial are the students of class X of SMA Muhammadiyah 1 Babat who are taught economics. This research generates qualitative and quantitative data. According to Sugiyono (2013), qualitative data are data in the form of sentences, words, or images, while quantitative data are data in the form of numbers or scored qualitative data. Qualitative data were obtained from interview results and from descriptions, suggestions and input from trial subjects used to develop/improve the learning media, while quantitative data were obtained from the validation/assessment results of the experts and from students' responses. The data were collected using the following techniques: interview, observation, expert review sheet, expert validation sheet, and student response questionnaire. The data generated from the research were then analyzed as follows: (1) data obtained from validation and review by the experts (expert judgment): the reviews by experts take the form of input, corrections and suggestions that are later used as the reference for revising the instructional media, while the validation results take the form of a feasibility assessment of the developed learning media. Data obtained from the validation results were then analyzed by the following steps: (a) converting the qualitative assessment into a quantitative one using the conversion terms given by Sugiyono (2013). The percentage of feasibility is calculated as the total score divided by the criterion (maximum) score, multiplied by 100%, and is then interpreted using the criteria of Sugiyono (2013); a sketch of this calculation is given below. Based on these criteria, the learning media are considered feasible if the percentage is ≥ 61%. Student response data were obtained from the small-group evaluation and the field evaluation.
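The feasibility rule just described is a simple ratio with a 61% cut-off. A minimal sketch with hypothetical questionnaire scores (chosen here so the ratio reproduces the 80.44% figure reported later; the actual raw scores are not given in the text):

```python
# Sketch of the feasibility percentage used to judge the media,
# assuming Likert-style questionnaire totals; the example scores are hypothetical.

def feasibility_percentage(total_score, max_score):
    return total_score / max_score * 100.0

def interpret(pct):
    # The text's decision rule: media is considered feasible at >= 61%.
    return "feasible" if pct >= 61.0 else "not feasible"

pct = feasibility_percentage(total_score=181, max_score=225)
print(f"{pct:.2f}% -> {interpret(pct)}")  # 80.44% -> feasible
```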
The student response data obtained were then analyzed by the following steps: (a) converting the qualitative assessment to a quantitative one using the conversion terms of Sugiyono (2013); the percentage of student responses, and the interpretation of that score percentage, were calculated using the corresponding formula and criteria.

RESULT AND DISCUSSION This research uses research and development following Thiagarajan, Semmel and Semmel's model: defining, designing, developing, and disseminating. The results of those stages are as follows. The defining stage consists of five steps. The first step, preliminary and final analysis, yields the finding that SMA Muhammadiyah 1 Babat uses Kurikulum 2013 in teaching and learning activities; analysis of the class X students in economics shows that learners have learning difficulties with the subject matter of economic problems and how to overcome them, because of the lack of learning media used to support the implementation of Kurikulum 2013 and to encourage learners in critical thinking, so that the objective of achieving High Order Thinking (HOT)-based learning can be met. This approach takes students through critical reflection and learning processes to challenge their current thinking about various issues in society (Lloyd and Bahr, 2010). The second step, learner analysis, is done to identify the characteristics of the students concerned; the analysis covers knowledge background, academic ability and cognitive development. As for knowledge background, the students have previously covered two basic materials introducing economics, and this prior material is expected to help the learning process on the subject of economic problems and how to overcome them. The average academic ability of the learners is good, although for materials that require a lot of memorization, delivered together with a large amount of other material, students tend to become bored, so that for memorization-heavy material many score below the minimum mastery criteria (Kriteria Ketuntasan Minimal/KKM) on tests. The average age of the grade X students of SMA Muhammadiyah 1 Babat is between 15 and 17 years; according to Piaget, at that age students' cognitive abilities are in the formal operational period. The formal operational period is when a student has the ability to think abstractly, reason logically, and draw conclusions from available information. It is also characterized by the ability to think systematically, flexibly and effectively, and to deal with complex issues: one can think flexibly because one can see all the elements and possibilities that exist, and effectively because one can see which thoughts are appropriate for the problem faced. The abilities possessed by students in this formal operational period are in accordance with the skills to be achieved in Kurikulum 2013. The third step is the concept analysis, covering the material to be used in developing the learning video media; the material coverage is the economic problem and how to overcome it. The results of the concept analysis can be seen in Figure 2.
Figure 2. Concept analysis result. Based on chart 2, it can be seen that the task analysis activity in this research begins with the selection of the subject; the subject chosen is economics, because preliminary studies showed that students have learning difficulties in economics due to the amount of material, the fact that most of the learning requires memorization, and boring learning, so that students have difficulty understanding the lessons. Economics was also chosen in consideration of the fact that economics at Senior High School (SMA) level is one of the subjects tested in the national exam; it is therefore important to address economics for improvement in learning. The material chosen in this study is the material on economic issues and how to overcome them; a preliminary study showed that on daily quizzes on this material there are still many students who score below the school's minimum mastery criteria (Kriteria Ketuntasan Minimal/KKM). The learning implemented in SMA Muhammadiyah 1 Babat has been using Kurikulum 2013, in which, in addition to using a scientific approach, teaching and learning activities should use a 21st century learning approach in which students must master the 4C skills: creative, critical thinking, communicative and collaborative. These skills are essential when entering the working world. One of the skills a student must possess is critical thinking. Critical thinking emphasizes the aspects of evaluation and synthesis in order to understand meaning, resulting in an understanding of cause, evidence, and theory. In order for the process of critical thinking to occur in learning, special planning of materials, construction, and learning conditions is required (Pratiwi, 2012). Critical thinking is different from ordinary thinking: critical thinking is a process of intellectual thought in which thinkers deliberately judge the quality of their thinking, using reflective, independent, clear, and rational thought. Critical thinking is the process of analyzing or evaluating information about a problem based on logical thinking in order to make a decision (Murti, 2010; Jacobsen, Eggen, and Kauchak, 2002; Bashith and Amin, 2017). Therefore, critical thinking requires a deep thought process towards something, producing good thinking in decision making. The developed learning media take the form of a critical thinking-based contextual learning video. The fourth step is the task analysis, listed in chart 3. The task analysis describes the tasks undertaken by learners in learning with the critical thinking-based contextual learning video. First, the students read the material on economic issues and how to overcome them from various sources (books, modules, internet materials, etc.) that can give students initial knowledge about the material; then the teacher shows the critical thinking-based contextual learning video, and students are assigned to observe it. After the video has finished, the teacher gives questions corresponding to the video. These are analytical questions that require critical reasoning, and the answers lie in the plot of the learning video, shown implicitly through its storyline. The students then answer the questions. There is interconnectedness between the questions given and understanding and self-efficacy in applying critical thinking in learning (Lloyd and Bahr, 2010).
The final step of the defining stage is the formulation of the learning objectives, the step in which the learning objectives contained in the achievement indicators of the objectives to be achieved are formulated. According to Kurikulum 2013, formulated achievement indicators should refer to basic competences (Kompetensi Dasar/KD) 1 to 4, and the learning objectives should include spiritual, attitude, knowledge and skill objectives (Indonesian Government, 2014). The learning objectives are formulated by converting the concept analysis and task analysis. The skill aspect highlighted in this material is the development of critical thinking. In the designing stage there are four steps; the first is test preparation. The activities undertaken at this stage are the preparation of the test instruments used in the research process; the instruments produced are the material expert and media expert validation questionnaires, and the learners' response questionnaires distributed to learners after the trial to find out their response towards the media. The second step is media selection. The activity in this stage is to determine the media to be developed in the research. The planned media is a critical thinking-based contextual video; the selection of this media is based on the needs existing in the field, where there are still minimal learning media used to support the implementation of Kurikulum 2013 and to improve learners' critical thinking. The concept of the learning video is to present material about economic problems that occur in the community, adapted to the existing material, with critical thinking as the basis to be achieved after students have learned using the developed media. Learning video is used to convey lessons that utilize many images, texts, sounds or animations. There are also other reasons why learning video is widely used: (1) the video can be played repeatedly, (2) the video can be quickly forwarded or slowed down, (3) there are no special requirements for the room, and (4) operation is relatively easy (Abu and Abidin, 2013). On the importance of using video in learning, Abu and Abidin (2013) state that with video as a learning media, students are not only able to form a mental representation of the semantic understanding of a story in both audio and visual form; when the two are presented together, each source provides additional, complementary information that helps students remember symbols or images naturally. Video presentations should be designed to improve students' mental abilities and involve them in active learning. The third and fourth steps are format selection and initial design. Activities at this stage include producing the learning video scripts, providing the equipment and tools used in video making, choosing shooting locations, preparing people as actors in the video and providing costumes/clothing, video shooting at the location, and the video editing process, resulting in learning video 1. During video shooting and editing, attention is also paid to image composition, text, and sound clarity. The script-making process begins with analyzing all the material on the economic problem and how to overcome it; from this material analysis, the content is poured into a storyline adapted to daily-life reality, starting from the presentation of economic problems that exist in society, followed by solutions to solve the problems. At the storyline-making stage, the cast of the video is also chosen.
When the video script is finished, the tools and equipment to be used in video shooting are provided; the tools and equipment for the video making are: camera, memory card, tripod stand, moving tripod, lighting, microphone, sound recorder, clapperboard, stationery, and computers for video editing. After all tools and equipment are available, the next step is background/venue selection for the video capture process, adjusted to the desired conditions in accordance with the previously prepared video scripts. The next process is choosing the actors/cast and providing appropriate clothing and make-up based on the desired characters in the script. The main video shooting process takes place in the pre-determined background/place; before video capture, the cast is prepared through script reading and practicing their characters' roles, and the tools and equipment are also prepared at this stage. After the video shooting process is complete, editing follows, using a computer and the appropriate application. In this editing process, attention is paid to the accuracy of the merging of the video segments, the voice, and the text added to the video. Once completed, the video is rendered/merged into one whole video. The finished edit produces learning video 1, which is ready for expert validation. In the developing stage there are two steps: validation by the experts and developmental trials. Validation is the act of verifying, through appropriate steps, the mechanisms, activities, procedures, processes and materials used in something. The validation process at this stage is done by material experts and media experts. The material experts consist of 2 lecturers of economics education and 1 economics teacher of SMA Muhammadiyah 1 Babat, and the media experts are a lecturer of educational technology and a lecturer of electrical engineering who is also a video-making practitioner. The validation process is carried out through two activities: review and feasibility assessment. The review consists of providing learning video 1 to both groups of experts (material experts and media experts) to obtain input for the improvement of the developed media. Input from the material experts is used to improve the economic subject matter featured in the learning video so that it fits economics as a discipline in general and the curriculum applied in the school. The review by the media experts is used to obtain input on the display of the developed media and on its composition, so that it conforms to applicable media rules. All input from the review activities is then used in the revision of learning video 1; the revision produces learning video 2. Learning video 2 is then given back to the two groups of experts to ensure that the revision is in accordance with the input given; on the same occasion, the media's feasibility and validity are assessed. This feasibility assessment serves as a benchmark for whether or not the media is feasible to be applied as a medium used in economics learning in schools.
The percentage of the feasibility assessment obtained from the material experts is 80.44%, which, according to the criteria of the expert validation categories mentioned previously (Table 2), can be interpreted as falling into the very feasible category; the percentage of the feasibility assessment obtained from the media experts is 88.82%, which is likewise interpreted as very feasible. The feasibility criteria cover the following aspects: quality of content, language, practicability, visual appearance, audio, and ease of use of the media. A more detailed presentation of the learning video feasibility assessment is given in the corresponding table (source: processed data, 2017). These findings are in line with the opinion of Taylor (2014) that video can be used in the learning process to improve the quality of learning. Learning videos can also improve learners' learning activities, because learning media provide new innovations in learning (Febriyanti, 2014). Research has shown the effectiveness of learning video in improving students' thinking level on the taught material: data analysis showed that of 90 students in sample group 1, 60 showed improvement; in group 2, consisting of 60 students, 43 showed improvement; and in group 3, consisting of 30 students, 15 showed improvement (Abu and Abidin, 2013). The next step in the developing stage is the developmental trial. In this step, the learning video judged feasible by the experts is applied in SMA Muhammadiyah 1 Babat to obtain the students' response towards the learning media. The students' responses were obtained during the developmental trial with a limited class of 20 grade X students of SMA Muhammadiyah 1 Babat, with a composition of students with high, moderate and low achievement as seen from the students' scores (Sadiman, 2011) (source: processed data, 2017). The percentage of students' responses averaged a total of 83.64%; interpreted against the category criteria of students' response assessment described previously (Table 4), the students' response to the learning video media was very good. These data are also supported by observation of the students during learning: the students were very enthusiastic in following the learning using the video media and actively answered questions given by the teacher. The presentation of the students' response aspects can be seen in more detail in table 6.

CONCLUSION The conclusion of this research is that the development of the critical thinking-based contextual video media used Thiagarajan, Semmel and Semmel's model of defining, designing, developing and disseminating; in this study, however, the process went only as far as the developing stage. The media feasibility results from the material experts were assessed as very feasible with a percentage of 80.44%, and the learning media experts assessed the media as very feasible with a percentage of 88.82%. The responses of the grade X SMA Muhammadiyah 1 Babat students to the application of the media were very good, with a percentage of 83.64%. It is therefore concluded that the critical thinking-based contextual video media is feasible for use as an economics learning media.
7,408.8
2018-03-01T00:00:00.000
[ "Education", "Economics" ]
Sociogenesis in Unbounded Space: Modelling Self-Organised Cohesive Collective Motion Maintaining cohesion between randomly moving agents in unbounded space is an essential functionality for many real-world applications requiring distributed multi-agent systems. We develop a bio-inspired collective movement model in 1D unbounded space to ensure such functionality. Using an internal agent belief to estimate the mesoscopic state of the system, agent motion is coupled to a dynamically self-generated social ranking variable. This coupling between social information and individual movement is exploited to induce spatial self-sorting and produces an adaptive, group-relative coordinate system that stabilises random motion in unbounded space. We investigate the state-space of the model in terms of its key control parameters and find two separate regimes in which the system attains dynamical cohesive states, including a Partial Sensing regime in which the system self-selects nearest-neighbour distances so as to ensure a near-constant mean number of sensed neighbours. Overall, our approach constitutes a novel theoretical development in models of collective movement, as it considers agents who make decisions based on internal representations of their social environment that explicitly take into account spatial variation in a dynamic internal variable.

Here we discuss details relating to the simulations performed to obtain the results presented in the main text. All simulations were performed using a codebase written in the C++ programming language. An asynchronous time-updating scheme is used in simulations to avoid parity issues arising from symmetries in agent behaviour. For example, when a synchronous updating scheme is used, two neighbouring agents may exchange positions without ever meeting on the same lattice site. This situation is avoided when using an asynchronous updating scheme. A flow diagram of the timing scheme is shown in Fig. 1. At each discrete time-step, $t$, the order in which the $N$ agents move is randomised. When an agent, $k$, is selected to move, it first checks whether there are any other neighbouring agents on the same lattice site, $x_k$. If neighbours are detected, one neighbour, $j$, is selected at random as an interaction partner. An interaction then takes place between $k$ and $j$ with probability $P_{kj}(S_k, S_j)$. In the simulation results presented in the main text, agents do not interact if they already had an interaction in the current time-step, or if the currently encountered neighbour was their most recent interaction partner. Note that these conditions mainly influence the system by slowing down the convergence rate of social ranks, and the same stability regimes form without them. The social ranks of the winner and loser are then modulated up and down by $\delta_+$ and $\delta_-$ respectively. The initially selected agent, $k$, then performs a regression measurement with probability $r$. A high value of $r$ enables agents to be more responsive to the socio-spatial dynamics of their neighbours, but incurs a larger computational load for individual agents and for simulations of the system. On the other hand, when the measurement rate is too low, the rate of information updating is slow, resulting in agents being less responsive to system dynamics. Finally, the agent, $k$, updates its potential centre, $x_k^*$, and takes a step according to its individual potential, $V_k(x; x_k^*, L)$. This completes the move of agent $k$, and the next agent in the randomised list then moves.
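A minimal, self-contained Python sketch of this asynchronous updating scheme follows (the authors' codebase is C++; the interaction probability, rank increments, belief update, and potential-based step below are simplified stand-ins, not the paper's actual rules, which use weighted local regression and an individual potential $V_k$):

```python
import math
import random

DELTA_PLUS, DELTA_MINUS = 0.1, 0.1  # hypothetical rank increments delta_+/delta_-

class Agent:
    def __init__(self, x):
        self.x = x          # lattice position x_k
        self.S = 0.0        # social rank S_k
        self.x_star = x     # potential centre x*_k

    def step(self):
        # Stand-in for the potential-based move: biased unit step toward x_star.
        if self.x < self.x_star:   self.x += 1
        elif self.x > self.x_star: self.x -= 1
        else:                      self.x += random.choice((-1, 1))

def time_step(agents, r=0.5):
    """One asynchronous time-step: agents move one at a time in random order."""
    order = agents[:]
    random.shuffle(order)                         # randomise the move order each step
    for k in order:
        neighbours = [j for j in order if j is not k and j.x == k.x]
        if neighbours:
            j = random.choice(neighbours)         # one co-located interaction partner
            # Stand-in for P_kj(S_k, S_j): logistic in the rank difference.
            p = 1.0 / (1.0 + math.exp(k.S - j.S))
            winner, loser = (k, j) if random.random() < p else (j, k)
            winner.S += DELTA_PLUS                # modulate social ranks up/down
            loser.S -= DELTA_MINUS
        if random.random() < r:                   # regression measurement with rate r
            # Stand-in belief update: nudge the potential centre toward the mean position.
            k.x_star = round(sum(a.x for a in order) / len(order))
        k.step()

agents = [Agent(random.randint(-5, 5)) for _ in range(20)]
for t in range(100):
    time_step(agents)
print(sorted(a.x for a in agents))
```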
Fig. 1. Flow diagram for the timing scheme of the simulations. The first time-step begins at the top-left corner of the flow diagram and proceeds by following the direction of the arrows. The simulation ends when t reaches the pre-specified maximum number of time-step iterations.

A. Polynomial Regression

Agents compute their belief, S_k(x), using polynomial regression, a linear regression method that assumes S_k(x) to have the form of a polynomial of order u,

S_k(x) = β_0 + β_1 x + ... + β_{u−1} x^{u−1} + ξ,   (S1)

where ξ is assumed to be Gaussian white noise with variance σ². Such a belief may also be represented by the coefficient vector β_k = (β_0, ..., β_{u−1})^T. When an agent, k, senses n neighbours, it obtains a data-set consisting of n observations {x_j, S_j}_{j=1}^n. Linear regression then consists of solving the set of n simultaneous equations corresponding to Eq. (S1) for the coefficients in β_k. Expressing the simultaneous equations in matrix form, this is equivalent to solving

S = X β_k + ξ,

where S = (S_1, ..., S_n)^T and X is the design matrix with rows (1, x_j, ..., x_j^{u−1}), and, for simplicity, it is assumed that each of the observations is subject to the same external process noise, ξ_j = ξ, so that the problem is homoscedastic. The ordinary least squares solution for the coefficient vector, β_k, is found by minimising the residual sum of squares (RSS) between observations S_j and belief predictions S_k(x_j), whose minimum is found by differentiating with respect to the coefficient vector β_k and solving the resulting set of simultaneous equations. The ordinary least squares solution is then given by [1]

β_k = (X^T X)^{−1} X^T S.

B. Weighted Polynomial Regression

In the model presented in the main text, agents effectively perform local regression [1] to collectively estimate the social field, S(x, t). Weighted polynomial regression is used in order for these estimates to be smooth, by scaling the contribution of each sensed neighbour, j, to the regression by a distance-dependent weight factor w_j. The weight is defined according to the distance from each neighbour, j, located at x_j, to the location of the measuring agent, x_k, through a smoothing kernel, w_j = K_h(x_k, x_j), defined below. This smoothing kernel makes the agent sensing more local by reducing the contribution of data associated with agents who are further away from the agent performing the measurement. Furthermore, the smoothing kernel, K_h, can be considered a model of agent perception, defining the extent and strength of agent sensory capabilities.

The smoothing kernel, K_h(x_k, x_j), has the general functional form [1]

K_h(x_k, x_j) = G(|x_j − x_k| / h),

where different choices of G define different kernels, and h is the width of the smoothing neighbourhood, which in the present case is the sensory radius of the agent, k. Here, the tri-cube kernel [1], G(s) = (1 − s³)³ for s < 1 and G(s) = 0 otherwise, is chosen as it offers compact support, which is desirable for computational convenience. If the support is not compact, agents are assumed to possess an infinite sensory range, which takes into account all other agents in the system for each computation. Weighted regression computes S_k(x) by minimising the residual sum of weighted squares between observations, S_j, and belief predictions, S_k(x_j), given by

RSS_w = Σ_j W_jj (S_j − S_k(x_j))²,

with normalised weights W_jj = w_j / Σ_k w_k. The weighted least squares solution is found by the same method as in the previous section [1]. In matrix form, this corresponds to computing the coefficients using the equation

β_k = (X^T W X)^{−1} X^T W S,   (S9)

where W is a diagonal matrix with elements W_jj = w_j / Σ_k w_k, so that Σ_j W_jj = 1.
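The weighted fit of Eq. (S9) with the tri-cube kernel is compact to express numerically. The following Python/numpy sketch mirrors the construction above (the paper's own implementation is in C++; the function names are illustrative):

    import numpy as np

    def tricube(s):
        """Tri-cube kernel G(s) with compact support on |s| < 1."""
        s = np.abs(s)
        return np.where(s < 1.0, (1.0 - s**3)**3, 0.0)

    def weighted_belief(x_k, x_j, S_j, h, order=2):
        """Local weighted polynomial fit of the belief around x_k.

        Returns beta_k = (beta_0, ..., beta_order); assumes at least
        order + 1 neighbours lie within the sensory radius h.
        """
        x_j, S_j = np.asarray(x_j, float), np.asarray(S_j, float)
        w = tricube((x_j - x_k) / h)               # w_j = K_h(x_k, x_j)
        W = np.diag(w / w.sum())                   # normalised weights, sum to 1
        X = np.vander(x_j, N=order + 1, increasing=True)  # rows (1, x_j, x_j^2, ...)
        # weighted least squares: beta_k = (X^T W X)^{-1} X^T W S
        return np.linalg.solve(X.T @ W @ X, X.T @ W @ S_j)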
C. Model Fidelity

The model fidelity, Φ_k, associated with a belief, S_k(x), is used by agents as a local measure of fitting accuracy. The quality of the fit is evaluated by considering the amount of information in the sensed data that is explained by the computed belief, comparing the unweighted residual sum of squares (RSS) to the unweighted total sum of squares (TSS) that would result from a zero-order polynomial regression, S_k(x) = β_0, which is equivalent to an average of the S_j. Supposing the agent, k, located at x_k senses n neighbours, the model fidelity is given by

Φ_k = 1 − RSS/TSS = 1 − Σ_{j=1}^n (S_j − S_k(x_j))² / Σ_{j=1}^n (S_j − S̄)²,

where S̄ is the average of all S_j in the fitting domain. This results in a value Φ_k ∈ (−∞, 1], with Φ_k → 1 indicating a good fit, and Φ_k → −∞ indicating a worse fit. Mathematically, the model fidelity is equivalent to computing an unweighted coefficient of determination for a model fitted by weighted regression. Weighted regression model fitting gives a higher weight to data associated with neighbours that are closer to the sensing agent, and so the estimated model is "constrained" to fit these data points more closely. Hence, by construction, the weighted model may fit the data associated with far-away neighbours more poorly. The model fidelity, Φ_k, is evaluated by weighing all data equally, irrespective of distance from the sensing agent. Hence, unlike the coefficient of determination, it is possible that Φ_k < 0 when the states of far-away agents diverge greatly from the estimated weighted regression model. Restricting agents from responding to estimates with Φ_k < 0, by using a measurement waiting time t_c, prevents agents from inferring incorrect motion decisions from their weighted regression model when it disagrees greatly with the states of far-away neighbours.

D. Problem Conditioning and Data Quality

For a regression measurement to be performed, the agent must have at least 3 neighbours within the sensing distance, h, because a second-order polynomial regression has 3 coefficients and so requires at least 3 data points to fit. Given that a regression is performed, the weighted least squares solution in Eq. (S9) requires the square symmetric moment matrix, X^T W X, to be inverted. For this matrix to be invertible, the problem must be well conditioned, as poor conditioning of X^T W X may lead to numerical instabilities when it is inverted [2]. The reciprocal condition number is a measure of how fluctuations in the input data change the resulting regression solution. This number depends on aspects of the moment matrix such as its rank and scale, with a low reciprocal condition number signifying poor conditioning. In the simulations performed, a reciprocal condition number threshold, ε_c = 10⁻⁶, is defined, below which the moment matrix, X^T W X, is considered too poorly conditioned to invert.

E. Data Standardisation for Regression

Feature re-scaling [3] improves the conditioning of the regression problem by mapping the sensed values of x_j to a standardised scale. This allows a single reciprocal condition number threshold, ε_c, to be used, since the scale of the input data has a direct effect on the reciprocal condition number. The regression method used assumes the sensed data to have a normal distribution [1], and so standardisation is used to map the data, x_j → x′_j, so that it has a standard normal distribution, x′_j ~ N(0, 1), according to

x′_j = (x_j − x̄) / σ_x,

where x̄ and σ_x are the mean and standard deviation of the input data, x_j.
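Both quality checks sit naturally on top of the fit above. A minimal sketch, assuming the reciprocal condition number is the usual ratio of smallest to largest singular value of the moment matrix (the text does not spell out the exact definition):

    import numpy as np

    def model_fidelity(S_j, S_pred):
        """Phi_k: unweighted coefficient of determination, in (-inf, 1]."""
        S_j, S_pred = np.asarray(S_j, float), np.asarray(S_pred, float)
        rss = np.sum((S_j - S_pred) ** 2)          # unweighted RSS
        tss = np.sum((S_j - S_j.mean()) ** 2)      # unweighted TSS
        return 1.0 - rss / tss

    def well_conditioned(M, eps_c=1e-6):
        """Accept the moment matrix X^T W X only if its reciprocal
        condition number exceeds the simulation threshold eps_c."""
        sv = np.linalg.svd(M, compute_uv=False)    # singular values, descending
        return sv[-1] / sv[0] > eps_c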
Due to the linearity of the regression, any re-scaling of the regression coefficients, β_k → β′_k, that results from re-scaling the data, x_j → x′_j, is reversible after computation of β′_k. In the case where S_k(x) = β_0 + β_1 x + β_2 x², the coefficients of the original regression problem, β_k = (β_0, β_1, β_2), can be retrieved from the re-scaled coefficients.

III. NAVIGATION PROCEDURE

The potential center locations, x*_k(t), are found at each time-step, t, through the solution of the minimisation problem in Eq. (S16), which can be considered graphically as finding the points of intersection between the curve defined by a belief, S_k(x), and a horizontal line. For a second-order belief, the solutions to Eq. (S16) follow from the quadratic formula and yield zero, one, or two real solutions (a code sketch of this case analysis is given at the end of this section). In case two solutions are found, the solution that is closest to the previous potential center location, x*_k(t − 1), is chosen. If there are no real solutions, the location, x*_k, that minimises Eq. (S16) is the maximum of the belief, S_k(x), given by x*_k = −β_1 / (2β_2). If β_2 = 0, meaning that the agent performed a first-order polynomial regression, the solution to Eq. (S16) is the location, x*_k(t), that solves the corresponding linear equation. Finally, a threshold, ε_β = 10⁻⁴, is used, such that the potential center, x*_k(t), is only updated when |β_1| > ε_β. This is done in order to mitigate large fluctuations in computed potential center locations resulting from near-constant beliefs, S_k(x), i.e. when β_2 = 0 and β_1 ≈ 0. This may occur when all agents have similar social ranks, S_k, for example in the early stages of formation, or near minima of S(x, t) when agents fit a first-order polynomial regression. In both cases, the low-magnitude gradient leads to potential centers, x*_k, that can be far away from the current location of the agent, removing them from the group. This parameter requires tuning, since too high a value leads to unresponsive agents, whereas too low a value leads to agents that are too responsive to low-magnitude gradients.

IV. INTEGRATED SIGMOID SHAPE FUNCTION

The integrated sigmoid function with respect to x, F(x; α, m, K), where K is obtained as a constant of integration, is given in Eq. (S22). It can be shown that the integrated sigmoid in Eq. (S22) interpolates between a quadratic function (α ≈ 0) and an absolute value function (α → ∞). Using L'Hôpital's rule as α → ∞: when x > 0, the first term tends to zero; when x < 0, the first term tends to 2mx. Hence, in this limit F(x; α, m, K) reduces to an absolute-value shape. When α ≈ 0, F(x; α, m, K) may be expanded around α = 0 using the Taylor series ln(1 + e^y) = ln(2) + y/2 + y²/8 + O(y⁴), where y = −αx. Simplifying the resulting expression, we obtain the quadratic function for α ≈ 0.

However, α alone is not enough to classify the different regimes, due to the additional dependence of F(x; α, m, K) on m and K, as shown in Fig. 2. Namely, values may be chosen for m and K which counteract the influence that α has in its asymptotic regimes, for example sending m → ∞ or K → ∞ in Eq. (S22). For this reason, the variable J = αK/m − ln(4) is introduced, which captures this multiple asymptotic dependence. This form of J is chosen because it also determines the number of real roots of Eq. (S22), and so determines the region of validity for the fit.
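Returning to the navigation procedure of Sec. III, the potential-center update reduces to a few cases on the fitted coefficients. A minimal Python sketch (the target argument stands in for the level of the horizontal line in Eq. (S16), which is defined in the main text and not reproduced here):

    import numpy as np

    def update_potential_center(beta, x_prev, target, eps_beta=1e-4):
        """Potential-center update for a belief S_k(x) = b0 + b1*x + b2*x^2."""
        b0, b1, b2 = beta
        if abs(b1) <= eps_beta:                # near-constant belief: no update
            return x_prev
        if b2 == 0.0:                          # first-order fit: linear equation
            return (target - b0) / b1
        disc = b1**2 - 4.0 * b2 * (b0 - target)
        if disc < 0.0:                         # no real intersection: belief maximum
            return -b1 / (2.0 * b2)
        roots = np.roots([b2, b1, b0 - target])        # one or two real roots
        real = roots[np.isreal(roots)].real
        return real[np.argmin(np.abs(real - x_prev))]  # closest to previous center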
The roots, x*_±, of the integrated sigmoid are found by solving F(x; α, m, K) = 0. Letting z = αx in Eq. (S22) reduces this to an equation in z alone; re-arranging for z, exponentiating, and taking the square root of both sides yields the condition in Eq. (S40). The inequality in Eq. (S40) is true if the inequality in Eq. (S42) is true. Hence, letting J = Kα/m − ln(4), this defines a lower bound such that the integrated sigmoid has real roots only when J ≥ 0. The value J > 0 corresponds to two real roots, whereas J = 0 corresponds to a single real root, where the function maximum touches the x-axis and the rest of the function is negative. In this latter case, the integrated sigmoid no longer defines a valid fit for the configuration of agents.

V. MEASUREMENT WAITING TIME EXPERIMENTS FROM VARYING INITIAL CONDITIONS

To test the efficacy of the two-stage dynamics, we investigate the ability of the system to converge to a concave annular state from a variety of initial conditions when t_c = 0 and when t_c = 3 × 10⁶. The system is considered to have converged at the time t* when the socio-spatial correlation first passes below a pre-defined threshold value, C(t*) < C* = −0.97. The 'W'-initial condition, shown in the inset of Fig. 3(a), is defined by an initial social rank profile, S^W_0(z_1, m), where m = 1, ..., N is the ranked initial position of each agent from left to right and z_1 = 0, ..., N/2 interpolates between initial convex (z_1 = 0) and concave (z_1 = N/2) annular configurations. We find that for both conditions of t_c, as z_1 is decreased from z_1 = N/2, there exists a threshold value, z*_1, below which the system no longer converges to a concave annular state. This value of z*_1 is lower for t_c = 3 × 10⁶ than for t_c = 0, indicating that the two-stage dynamics increases the likelihood of reaching the desired goal state. This is illustrated in the insets of Fig. 3(a), which show the system time evolution with and without t_c for a selected value of z_1 = 30. When t_c = 0, the system remains frozen in the initially defined state, and the radially increasing (convex) edges, facing away from the group center, are maintained. Motion biased by the beliefs of agents located at the minima of S(x, t) quickly pushes the system apart, so that agents cannot interact with or sense their neighbours. The system becomes locked into the initial configuration, which remains for the duration of the simulations. The presence of a measurement waiting time, t_c > 0, enables agents to have more interactions in the minima of the configuration. This differentiation in social ranks leads to a destabilisation of the radially increasing edges, which re-configure themselves into radially decreasing, concave edges.

The 'W'-initial condition experiments suggest that the existence of concave, radially decreasing edges at early times is a precursor for the system to reach the desired cohesive pattern at long times. To confirm this hypothesis, we define the concave edge initial condition (sketched below) by initialising the social ranks with S^C_0(z_2, m) = ΔS_max (N/2 − |N/2 − m|) when the ranked position m = 1, ..., N is at least z_2 agents away from the center (m < z_2 or m > N − z_2); otherwise the agent social rank is randomly sampled from a uniform integer distribution, S^C_0(z_2, k) ~ U(0, N ΔS_max/2), where S_k = N ΔS_max/2 is the theoretical maximum social rank in the concave annular state.
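The concave edge initial condition is simple to generate. A minimal Python sketch (the exact integer-sampling convention for the uniform distribution is an assumption):

    import numpy as np

    def concave_edge_init(N, z2, dS_max, rng=None):
        """Initial social ranks S^C_0(z2, m) for agents m = 1..N."""
        rng = rng or np.random.default_rng()
        m = np.arange(1, N + 1)
        # random ranks, uniform integers on [0, N*dS_max/2]
        S = rng.integers(0, int(N * dS_max / 2) + 1, size=N).astype(float)
        # deterministic concave profile on the edge positions
        edge = (m < z2) | (m > N - z2)
        S[edge] = dS_max * (N / 2 - np.abs(N / 2 - m[edge]))
        return S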
As shown in Fig. 3(b), a threshold value, z_2 = z*_2, also exists here, below which the system does not converge to an annular state. The value of z*_2 is decreased when t_c > 0, illustrating that the two-stage dynamics allows the system to reach the goal state from a larger perturbation.

A. Social Convergence

The system exhibits convergence to a concave annular state in the stable Full Sensing and Partial Sensing regimes, as shown in Fig. 4(a). It can be seen that parameter values with lower-magnitude socio-spatial correlation are found in the Unstable regime, where the sensed proportion of agents is below n_s(t) < 0.06, which in simulations corresponds to approximately 6 sensed agents. Below this value, the collective agent motion resulting from computed beliefs, S_k(x), becomes unstable. If such instabilities lead small groups of agents to move away from the group, the group center of mass is shifted, thereby reducing the magnitude of the socio-spatial correlation.

FIG. 2. (a) The shape regimes of the integrated sigmoid in Eq. (S22) cannot be classified using α alone. (b) Using J = αK/m − ln(4) enables us to distinguish the different macroscopic configurations between the different regimes. In both cases, the unstable regime has not been considered, because here agents in the system expand away from each other to such an extent that the integrated sigmoid no longer provides an informative fit. Data correspond to the results presented in Fig. 3 of the main text.

FIG. 3. (Colour online) Convergence times, t*, from different initial conditions with and without a measurement waiting time, t_c. (a) 'W'-initial condition, S^W_0(z_1, m); insets correspond to z_1 = 30. (b) Concave edge initial condition, S^C_0(z_2, m); insets correspond to z_2 = 3. In this case, the configurations obtained for t_c = 0 are concave but not radially symmetric. In both cases, the inclusion of t_c > 0 increases the range of initial condition states that converge to a concave annular state, defined by a socio-spatial correlation C(t) < −0.97. Simulation parameters are the same as those in Fig. 2 of the main text, with k_h = 0.5 and k_L = 1, and results are averaged over 50 Monte Carlo realisations.

FIG. 4. (a) Socio-spatial correlation, C(t), in the (k_L, k_h) state-space plotted after t = 10⁹ steps. Lower-magnitude values of C(t) are obtained in the Unstable regime, where unstable expansion leads some agents to move away from the group; this shifts the group center of mass and thereby the socio-spatial correlation. (b) Scatterplot of the socio-spatial correlation plotted against the average proportion of sensed neighbours, n_s(t). Points are coloured by the corresponding value of J, and their shape is determined by the mean nearest-neighbour distance, Δ(t). Colourless points are those with n_s(t) < 0.06 and correspond to the Unstable regime. Data correspond to the results presented in Fig. 3 of the main text.

FIG. 5.
(a) Fitted power-law coefficient, |γ_ν|, for the mean-squared displacement, ν(t), at time t = 10⁹. The absolute value is taken so that data can be plotted on a logarithmic scale when γ_ν is close to zero and negative. Higher power-law coefficient values are found on the cusp between the Full and Partial Sensing regimes, where the system exhibits two possible expansion states and has not yet reached equilibrium. The power-law coefficient was fitted between t = 6.45 × 10⁸ and t = 9.95 × 10⁸. (b) Corresponding MSD curves plotted on a log-log scale for each parameter pair, k_L and k_h. On each vertical axis the MSD, normalised by its initial value, is plotted, while time, t, is plotted on the horizontal axes. Data correspond to the results presented in Fig. 3 of the main text.

FIG. 6. (a) Fitted power-law coefficient, |γ_Δ|, of the mean nearest-neighbour distance, Δ(t), at time t = 10⁹. The absolute value is taken so that data can be plotted on a logarithmic scale. As in Fig. 5, higher-magnitude values of |γ_Δ| are found on the cusp between the Full and Partial Sensing regimes. Power-law coefficient fitting times are the same as in Fig. 5. The colourbar has a lower limit of |γ_Δ| > 10⁻³ to aid visualisation, as values in the Unstable regime reach |γ_Δ| = 10⁻¹⁵. (b) Normalised mean nearest-neighbour distance for each parameter pair, k_L and k_h. Vertical axes show Δ(t) normalised by its initial value, while time, t, is plotted on the horizontal axes (log-log scale). Data correspond to the results presented in Fig. 3 of the main text.
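The power-law coefficients quoted in Figs. 5 and 6 can be obtained by a straight-line fit on log-log axes over the stated time window. A minimal sketch (ordinary least squares in log-log space; the authors' exact estimator is not specified):

    import numpy as np

    def powerlaw_exponent(t, y, t_min=6.45e8, t_max=9.95e8):
        """Fit y(t) ~ t^gamma over [t_min, t_max] and return gamma."""
        t, y = np.asarray(t, float), np.asarray(y, float)
        sel = (t >= t_min) & (t <= t_max) & (y > 0)
        gamma, _ = np.polyfit(np.log(t[sel]), np.log(y[sel]), 1)
        return gamma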
5,441.8
2023-03-16T00:00:00.000
[ "Computer Science" ]
Plasmonic Molecular Nanohybrids—Spectral Dependence of Fluorescence Quenching

We demonstrate strong spectral dependence of the efficiency of fluorescence quenching in molecular systems composed of organic dyes and gold nanoparticles. In order to probe the coupling with metallic nanoparticles, we use dyes with varied spectral overlap between the plasmon resonance and their absorption. Hybrid molecular structures were obtained via conjugation of metallic nanoparticles with the dyes using biotin-streptavidin linkage. For dyes featuring absorption above the plasmon excitation in gold nanoparticles, laser excitation induces minute changes in the fluorescence intensity and its lifetime for both conjugated and non-conjugated mixtures, which serve as the reference. In contrast, when the absorption of the dye overlaps with the plasmon resonance, the effect is quite dramatic, reaching 85% and 95% fluorescence quenching for non-conjugated and conjugated mixtures, respectively. The degree of fluorescence quenching strongly depends upon the concentration of metallic nanoparticles. Importantly, the origin of the fluorescence quenching is different in the case of the conjugated mixture, as evidenced by time-resolved fluorescence. For conjugated mixtures of dyes resonant with the plasmon excitation, the decay is two-exponential. This is in contrast to the single-exponential decay measured for the off-resonant configuration. The results provide valuable insight into the spectral dependence of fluorescence quenching in molecular assemblies involving organic dyes and metallic nanoparticles.

Introduction

Plasmon excitation, the result of collective electron oscillation in metallic nanoparticles, has been used for more than a decade to control the optical properties of fluorophores. The ability to control, and very often dramatically modify, the absorption or fluorescence of a molecule by placing a metallic nanoparticle in its vicinity owes to the strong localization of the electromagnetic field by the latter. There are three key parameters that determine the strength of the interaction [1]. First of all, the relation between the energy of the plasmon excitation and the optical properties of a fluorophore determines whether the emission or the absorption of the fluorophore is enhanced. In order to observe strong enhancement of the fluorescence radiative rate, the plasmon resonance should overlap with the emission spectrum of the fluorophore [2]. An analogous relation must hold for predominant enhancement of the absorption [3]. Secondly, the distance between the fluorophore and the metallic nanoparticle determines whether the fluorescence is enhanced due to the local electromagnetic field created by the metallic nanoparticle or, alternatively, whether the energy is efficiently transferred from the fluorophore to the metallic nanoparticle, which results in fluorescence quenching [4,5]. Last but not least, the relative orientation of the fluorophore and the metallic nanoparticle with respect to the laser excitation can contribute to the net effect measured for an assembly comprising fluorophores and metallic nanoparticles. This parameter has proven to be the hardest to control. The benchmark experiment that evaluates all these parameters and their impact on the strength of the plasmon-induced interaction between a single metallic nanoparticle and a single fluorophore was reported by Anger et al. [2]. The metallic nanoparticle was placed at the end of a tip and precisely positioned above the emitter at distances between 0 and 60 nm.
Upon changing the separation between the two particles, the fluorescence intensity was measured. It was found that there exists an optimal distance, typically in the range of 10 to 20 nm, for which the enhancement of the fluorescence intensity is the strongest. For smaller distances quenching was observed, while for larger distances there was essentially no effect upon the fluorescence intensity. Qualitatively similar behavior has been observed for a few other hybrid nanostructures comprising metallic nanoparticles and either semiconductor quantum dots [6][7][8], organic molecules [9][10][11], or natural photosynthetic complexes [12][13][14][15]. Several architectures that allow the study of interactions between plasmon excitations and fluorophores have been proposed and demonstrated in addition to the highly sophisticated strategy described in [2]. The most straightforward one is a mixture of both components in solution [16]. In this case there is no robust control of the separation between the nanoparticles and fluorophores or of their relative orientation. It has been frequently shown that mixing fluorophores with metallic nanoparticles directly in solution leads to efficient quenching of the fluorescence [5,16]. In order to define the distance between the fluorophores and metallic nanoparticles more accurately, a bioconjugation approach has been applied [6], in which both metallic nanoparticles and fluorophores are functionalized with proteins or functional groups characterized by high binding affinity. One of the most prominent examples is the biotin-streptavidin linker used for attaching gold or silver nanoparticles to semiconductor nanoparticles [6]. In the case of bioconjugation, some degree of separation control has been demonstrated by using polymer chains as intermediates [17]. Another strategy is to fabricate a layered structure, in which a thin layer of metallic nanoparticles is separated from the fluorophores by a dielectric layer of controlled thickness [3,14]. The experiments carried out for such structures, where the distance can be precisely controlled, have mirrored the dependence of the fluorescence intensity upon distance as measured for a single-molecule architecture [14]. In this work we focus on a molecular assembly obtained in solution using a bioconjugation approach with the biotin-streptavidin linker. The structure consists of spherical gold nanoparticles conjugated with Atto organic dyes. While the plasmon resonance of the nanoparticles appears at approximately 530 nm, the absorption maxima of the dyes vary between 488 nm and 550 nm. In this way we can study the spectral dependence of the plasmon-induced effects upon the fluorescence properties of the dyes. In the case of the reference sample, where organic dyes were mixed directly with non-functionalized gold nanoparticles, we observe a modest reduction in fluorescence intensity, with the strongest effect seen for Atto 550. These changes in intensity were not accompanied by modifications of the fluorescence transients. In contrast, when gold nanoparticles are functionalized with streptavidin, thus enabling conjugation, the fluorescence quenching is much stronger, reaching up to 95%. The degree of the quenching depends critically upon the overlap between the plasmon resonance and the absorption/emission of the dye molecules: the larger the overlap, the stronger the quenching. For streptavidin-functionalized gold nanoparticles, the fluorescence decays of the Atto dyes feature characteristic behavior.
While the transients measured for the Atto 488-Au conjugate show no dependence upon the concentration of gold nanoparticles, the quenching, and thus the rapid shortening of the fluorescence lifetime, in the case of Atto 550 is almost instantaneous. The results indicate that a bioconjugate is formed in solution and that the plasmon-induced effects depend upon the relation between the optical properties of the fluorophore and the metallic nanoparticles. These observations will be important for designing novel molecular sensors based upon plasmonic interactions [17].

Spectroscopic Characterization

In Figure 1 we present the optical spectra of all the components used for assembling the molecular hybrid nanostructures. The absorption spectrum of the gold nanoparticles in solution (Figure 1a) features a prominent plasmon resonance at 521 nm. Upon attachment of streptavidin to the nanoparticles we observe a measurable shift of the resonance towards lower energies (537 nm, red). It is well known that such a shift takes place due to the change in the local dielectric constant upon attachment of the protein [6]. Therefore, it suggests that streptavidin is indeed attached to the gold nanoparticles. In Figure 1b we show the fluorescence and absorption of the three Atto dyes used for conjugation with gold nanoparticles: Atto 488 (black), Atto 520 (red), and Atto 550 (green). Both the absorption and the fluorescence spectra are very similar across all three dyes, except for the higher intensity of the high-energy shoulder in the absorption spectrum of Atto 520. The maxima of absorption appear at 500 nm, 520 nm, and 550 nm for these molecules, with corresponding fluorescence maxima at 518 nm, 540 nm, and 570 nm. By comparing these data with the spectrum measured for gold nanoparticles functionalized with streptavidin, we see that in the case of Atto 488 both the absorption and the emission are located above the plasmon energy of the nanoparticles. The spectra of Atto 520 match the plasmon resonance of the nanoparticles much better than those of Atto 488, while the maxima of emission and absorption of Atto 550 are almost perfectly aligned with the plasmon resonance. This set of fluorescent dyes gives us not only an opportunity to test the conjugation procedure but also to study plasmon-induced changes in qualitatively different spectral overlap configurations.

Optical Properties of Atto-Au NP Conjugates

Fluorescence spectra of Atto dyes mixed with gold nanoparticles are shown in Figure 2. For all measurements the excitation wavelength was 485 nm. The experiments were carried out on mixtures containing Atto dye and non-functionalized gold nanoparticles (upper row) and gold nanoparticles with streptavidin attached (lower row). The emission was monitored for over 20 min after mixing the components in a cuvette. The measurements were repeated several times for each preparation, and the results were very similar to the ones displayed in Figure 2. For all sample preparations we observe a decrease in fluorescence intensity as a function of time. We note that, for pure Atto dyes in aqueous solution, the effect of photobleaching is much weaker than the fluorescence quenching observed for Atto-Au nanoparticle mixtures, and it is similar for all three dyes. The comparison of Atto dyes with different absorption/fluorescence maxima mixed with non-functionalized gold nanoparticles indicates that the effect is the strongest for Atto 550, the dye that features the highest absorption overlap with the plasmon excitation in the metallic nanoparticles.
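The quenching percentages quoted below follow from comparing the emission intensity at a given reaction time with the initial intensity. A trivial but explicit sketch of this normalisation (our assumed definition; the paper does not state it as a formula):

    def quenching_efficiency(I_t, I_0):
        """Fraction of emission quenched relative to the initial intensity."""
        return 1.0 - I_t / I_0

    # e.g. an intensity drop from 1.0 to 0.15 corresponds to 85% quenching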
This quenching results from the random movement of both the nanoparticles and the fluorophores in the solution. The percentage of molecules quenched after approximately 20 min from the start of the experiment was determined to be 41%, 73%, and 85% for Atto 488, Atto 520, and Atto 550, respectively. Repeating the same experiment with the same concentrations, but with the gold nanoparticles functionalized with streptavidin, yields substantial differences in the efficiency of the fluorescence quenching. Each of the three dyes features a much more rapid decrease of the fluorescence intensity, as displayed in the lower row of Figure 2. We find that 70%, 81%, and 95% of Atto 488, Atto 520, and Atto 550 molecules, respectively, are quenched after approximately 20 min. Again, as for the gold nanoparticles without streptavidin, the emission is more strongly reduced for the red-shifted dyes. The quenching dynamics in hybrid nanostructures has been shown to depend upon the amount of metallic nanoparticles added to the solution [16,18]. In Figure 3 we show an example of the dependence of the fluorescence intensity on reaction time for three additions of gold nanoparticles: 1 μL, 5 μL, and 10 μL. The experiment was carried out for solutions containing Atto 520 dye together with non-functionalized gold nanoparticles (Figure 3a) and with nanoparticles functionalized with streptavidin (Figure 3b). The nanoparticles were added to the solution after 2 min. After approximately 20 min, the intensity of the pure dye solutions decreases by about half. In agreement with the results shown in Figure 2, the addition of gold nanoparticles results in a more rapid decrease of the emission, and the reduction is stronger when a highly concentrated gold nanoparticle solution is added to the mixture. Nevertheless, in the case of non-functionalized nanoparticles, the decrease of the emission features relatively smooth, gradual characteristics for all three concentrations of gold nanoparticles. Conversely, when the nanoparticles are functionalized with streptavidin, thus enabling conjugation with the organic dyes, the decrease is much more rapid, in particular for the 5 μL and 10 μL additions of gold nanoparticles. In fact, for the highest concentration of gold nanoparticles the fluorescence drops to less than 20% within the first 3 min after the reaction. In perfect agreement, the analogous results obtained for Atto 488 show that the fluorescence of this dye is less affected by plasmon excitations, while Atto 550 features more rapid changes in the fluorescence intensity.

[Figure 2: fluorescence spectra recorded at successive times (approximately 3-22 min) after adding 10 μL of gold nanoparticles; panels correspond to Atto 488-B, Atto 520-B, and Atto 550-B mixed with AuNP-SH (upper row) and with AuNP-S-SA (lower row).]

Figure 3. (a) Intensity of fluorescence measured for Atto 520 mixed with non-functionalized gold nanoparticles. (b) Intensity of fluorescence measured for Atto 520 mixed with functionalized gold nanoparticles.
In both graphs, squares, circles, and triangles correspond to 1 μL, 5 μL, and 10 μL additions of nanoparticles, respectively. The solid line represents the pure Atto 520 dye. Fluorescence was excited using a 485 nm laser.

Time-resolved fluorescence spectroscopy provides valuable, complementary information to the results obtained with continuous-wave laser excitation [19]. The dynamics of the fluorescence sheds light on the processes responsible for the fluorescence quenching and the factors that determine its strength. In Figure 4 we display fluorescence transients measured for mixtures containing organic dyes and both non-functionalized (upper row) and streptavidin-functionalized (lower row) spherical gold nanoparticles. The transients displayed were measured at the end of the 20 min reaction, which started with the addition of gold nanoparticles to the solution of Atto dyes. Continuous-wave experiments indicate that in these two cases the fluorescence intensity decreases, with the strongest effect observed for Atto 550. Time-resolved measurements show that, despite the strong fluorescence quenching observed for mixtures containing Atto dyes with non-functionalized gold nanoparticles, the fluorescence lifetime remains unchanged, even for the very high amount of metallic nanoparticles (105 μL). The decay times of fluorescence for all three Atto dyes are very similar and equal to 4.1 ns. In contrast, the experiments carried out for mixtures with streptavidin-functionalized nanoparticles feature a strong spectral dependence of the transient behavior. In the case of Atto 488, which overlaps weakly with the plasmon resonance of the gold nanoparticles, the fluorescence lifetime remains unchanged upon addition of metallic nanoparticles to the solution. As the absorption/emission energy of the organic dyes is shifted towards longer wavelengths, we observe behavior qualitatively different from all previously discussed cases. Namely, for both Atto 520 and Atto 550 mixed with streptavidin-functionalized gold nanoparticles we find biexponential decay. The longer decay is identical to the one measured for the pure dye solution; thus we attribute it to molecules that are not conjugated with gold nanoparticles and therefore do not experience efficient non-radiative energy transfer to the metallic nanoparticle. The much faster component (~100 ps), which is comparable to our system resolution, we assign to the fraction of molecules that are quenched due to efficient energy transfer to the metallic nanoparticles upon conjugation. The results shown in Figure 4 demonstrate the strong influence of the spectral overlap between the plasmonic resonance and the absorption/emission of the fluorophore upon the efficiency of fluorescence quenching, in full correspondence with the continuous-wave fluorescence experiment. They also show that the mechanism responsible for the decrease of the fluorescence is different in the case of functionalized gold nanoparticles, which bind with the organic dyes, as compared to direct mixtures with no covalent binding taking place in the solution. The exact origin of the observed differences in the fluorescence transients is not completely clear. However, we can speculate that in the case of conjugated hybrid nanoparticles, where the separation between metallic nanoparticles and fluorescent dyes is fixed, the dominant process could be Förster resonance energy transfer.
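The biexponential analysis described above amounts to fitting a two-component decay model to the measured transient. The following is an illustrative Python sketch on synthetic data (the paper's analysis pipeline is not described; a real TCSPC analysis would also deconvolve the ~100 ps instrument response, which is omitted here):

    import numpy as np
    from scipy.optimize import curve_fit

    def biexp(t, a1, tau1, a2, tau2):
        """I(t) = a1*exp(-t/tau1) + a2*exp(-t/tau2), t in ns."""
        return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

    # synthetic transient built from the lifetimes reported in the text:
    # ~0.1 ns for conjugated (quenched) dyes and 4.1 ns for free dyes
    t = np.linspace(0.0, 20.0, 400)
    counts = biexp(t, 800.0, 0.1, 200.0, 4.1)
    counts += np.random.default_rng(0).normal(0.0, 5.0, t.size)

    popt, _ = curve_fit(biexp, t, counts, p0=[500.0, 0.2, 100.0, 4.0])
    a1, tau1, a2, tau2 = popt
    fast_fraction = a1 / (a1 + a2)   # rough amplitude share of the quenched dyes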
On the other hand, in the case of the non-conjugated sample we could argue that this is purely collisional quenching, as the metallic nanoparticles are negatively charged and the Atto dyes are either positively (Atto 520 and Atto 550) or negatively (Atto 488) charged. This difference in charge could account for the differences in fluorescence intensity observed between the various dyes mixed with non-functionalized gold nanoparticles. Future experiments are planned to resolve this issue. However, the strength of the fluorescence quenching upon conjugation shown in Figure 4 indicates that adding 40 μL of streptavidin-functionalized gold nanoparticles to Atto 520 results in approximately 80% of the molecules being conjugated, and thus quenched. Complete quenching takes place for a 105 μL addition of streptavidin-functionalized gold nanoparticles. Therefore, in agreement with the continuous-wave fluorescence data, the results obtained for Atto 520 indicate that the relative ratio of conjugated to emitting molecules increases with increasing concentration of streptavidin-functionalized gold nanoparticles in the solution. Furthermore, for the most red-shifted dye, Atto 550, which features the largest spectral overlap with the plasmon resonance of the gold nanoparticles, almost complete quenching of the emission takes place already upon addition of 40 μL of streptavidin-functionalized gold nanoparticles. It is not only the percentage of conjugated dyes in a given solution that depends upon the concentration of gold nanoparticles, but also the speed of the reaction itself. Namely, for Atto 550 mixed with 20 μL of streptavidin-functionalized gold nanoparticles, the intensity of the dyes that are not covalently bound to the gold nanoparticles, i.e., those responsible for the unchanged fluorescence decay seen in Figure 4, decreases gradually until equilibrium is reached. The equilibrium most certainly corresponds here to the situation when all metallic nanoparticles are conjugated with the dye. In contrast, for 40 μL and 105 μL of streptavidin-functionalized gold nanoparticles, the appearance of the biexponential fluorescence decay is instantaneous, and the relative contribution of conjugated and non-conjugated Atto 550 molecules remains unchanged from the beginning to the end of the reaction. The results measured with pulsed excitation fully corroborate the conclusions derived from steady-state fluorescence spectroscopy, pointing towards extremely efficient energy transfer between fluorophores and metallic nanoparticles that show large spectral overlap.

Preparation of Au spherical nanoparticles coated with 3-mercaptopropionic acid (AuNP-S-COOH) [20]. Briefly, 1 mL of a 5 × 10⁻⁵ M HAuCl4 water solution was mixed with 34 mL of a 3.25 × 10⁻³ M ethanol solution of 3-mercaptopropionic acid. A freshly prepared 0.15 M solution of NaBH4 (3.4 mL) was added dropwise under vigorous stirring in an ice bath to reduce the gold salt. The resulting dark brown solution was stirred for one hour, after which the obtained nanoparticles were allowed to sediment to the bottom of the flask and the supernatant was removed by pipette. The particles were washed twice by dispersing them in 98% methanol, and the solvent was removed by centrifugation (2000 rpm). Finally, the product was redispersed in 1 mL of water.

Functionalization of the gold nanoparticles with streptavidin. Protein can be directly adsorbed on gold nanoparticles via electrostatic interaction, but this method is non-specific.
To make the functionalization more specific, the EDC cross-linking procedure can be applied [17].

Covalent binding. To link affinity bioligands to Au NPs, the EDC cross-linking procedure was utilized. A fresh solution of 0.2 M EDC was prepared in PBS buffer at pH 7.4. 50 μL of the EDC solution was reacted with 2 mL of the gold nanoparticle solution for 30 min. To this mixture we added 100 μL of streptavidin at a concentration of 1.6 × 10⁻⁷ M and left it to react for 2 h. The result of this method is a stable amide bond formed between a free amine group of streptavidin and a carboxylic group on the gold nanoparticle surface.

Conjugation procedure. The streptavidin-biotin bond is particularly suitable for conjugating biomolecules with inorganic nanostructures, as it is one of the strongest non-covalently interacting pairs; the binding is relatively fast and only slightly affected by pH, temperature, organic solvents, etc. We used the streptavidin-biotin linkage to conjugate the gold nanoparticles functionalized with streptavidin to different dyes from the Atto family substituted with biotin. NP-Atto superstructures were obtained by mixing appropriate volumes of stock solutions of NPs-SA and Atto-B. The conjugation processes were carried out in a plastic UVette with a transparency range from 220 to 1600 nm. In a typical preparation of the Atto-B-SA-AuNPs conjugate, a variable volume of gold nanoparticle solution, in the range of 1 to 105 μL, was added to a constant concentration of the Atto-biotin dye in PBS water solution. The following concentrations of dye molecules were used: Atto 488: 5.41 × 10⁻⁸, Atto 520: 4.1 × 10⁻⁸, and Atto 550: 3.67 × 10⁻⁷. The concentration of gold nanoparticles was approximately 5.2 mM.

Spectroscopic Characterization of Hybrid Nanostructures in Solution

The optical properties of the Atto dyes, Au nanoparticles, and mixtures thereof were investigated with UV-Vis absorption spectroscopy, fluorescence spectroscopy, and time-resolved fluorescence. Absorption spectra were measured at room temperature, using a quartz cuvette with a 1 cm optical path, in the range 350-1100 nm. Fluorescence spectra of the solutions were measured using a Fluorolog 3 spectrofluorimeter (Jobin-Yvon) equipped with a photomultiplier detector and a Xenon lamp coupled to a double monochromator for the excitation. The excitation wavelength was 485 nm, which corresponds to the excitation used for the time-resolved experiments. In order to evaluate the strength of the fluorescence quenching upon addition of gold nanoparticles, fluorescence spectra were monitored for up to 20 min after the start of the reaction. Fluorescence transients were measured for solutions placed in a cuvette. Various amounts of gold nanoparticles, both non-functionalized and functionalized with streptavidin, were used. For excitation we used a 485 nm laser that can be operated in either continuous-wave or pulsed mode with an 80 MHz repetition rate. The excitation power was 500 μW and the focal spot was about 100 μm in diameter. Fluorescence decays were collected using a Becker & Hickl SPC-150 time-correlated single photon counting card with a fast photodiode detector (idQuantique id100-50) [21]. For these measurements a set of filters comprising an FD1Y longpass and an HQ550/40 (Chroma) bandpass was used. The temporal resolution of the setup was about 100 ps.

Conclusions

We studied the fluorescence properties of Atto dyes coupled to metallic nanoparticles.
In order to study the influence of the spectral properties of the dyes upon the fluorescence quenching caused by non-radiative energy transfer from the molecule to the nanoparticle, we used three different dyes with varied emission/absorption overlap with the plasmon resonance of the gold nanoparticles. For small overlap we observed significantly less efficient quenching of the fluorescence compared with the strongly overlapping case, where the quenching was almost complete 20 min after adding gold nanoparticles to the solution of organic dyes. Functionalization of the gold nanoparticles with streptavidin, which enables conjugation with biotin-functionalized dyes, strongly increases the efficiency of the fluorescence quenching. The results of time-resolved fluorescence spectroscopy point towards a qualitatively different mechanism of fluorescence quenching for conjugated and non-conjugated mixtures of hybrid nanostructures.
5,258.4
2012-01-18T00:00:00.000
[ "Chemistry", "Materials Science", "Physics" ]
The secretome from human-derived mesenchymal stem cells augments the activity of antitumor plant extracts in vitro

Cancer is understood as a multifactorial disease that involves multiple cell types and phenotypes in the tumor microenvironment (TME). The components of the TME can interact directly or via soluble factors (cytokines, chemokines, growth factors, extracellular vesicles, etc.). Among the cells composing the TME, mesenchymal stem cells (MSCs) appear as a population with debated properties, since it has been seen that they can either promote or attenuate tumor progression. For various authors, the main mechanism of interaction of MSCs is through their secretome, the set of molecules secreted into the extracellular milieu, which recruits and influences the behavior of other cells in the inflammatory environments where MSCs normally reside, such as wounds and tumors. Natural products have been studied as possible cancer treatments, appealing to synergisms between the molecules in their composition; thus, extracts obtained from Petiveria alliacea (Anamu-SC) and Caesalpinia spinosa (P2Et) have been produced and studied previously in different models, showing promising results. The effect of plant extracts on the MSC secretome has been poorly studied, especially in the context of the TME. Here, we studied the effect of Anamu-SC and P2Et extracts on the human adipose-derived MSC (hAMSC)–tumor cell interaction as a TME model. We also investigated the influence of the hAMSC secretome, in combination with these natural products, on tumor cell hallmarks such as viability, clonogenicity, and migration. In addition, hAMSC gene expression and protein synthesis were evaluated for some key factors in tumor progression in the presence of the extracts, by reverse transcription-quantitative polymerase chain reaction (RT-qPCR) and Multiplex, respectively. It was found that the presence of the hAMSC secretome did not affect the cytotoxic or clonogenicity-reducing activities of the natural extracts on cancer cells; this secretome can even inhibit the migration of these tumor cells, and its profile of secreted molecules can be modified by the natural products. Overall, our findings demonstrate that hAMSC secretome participation in TME interactions can favor the antitumor activities of natural products.

Supplementary Information

The online version contains supplementary material available at 10.1007/s00418-024-02265-1.

Introduction

Cancer is one of the main causes of death in the world, with more than 9.9 million victims recorded in 2020 (Sung et al.
2021). Breast and lung cancers have the highest lethality rates, followed by colorectal and prostate cancer (GLOBOCAN 2020). Tumor cells are characterized by a high proliferation rate, evasion of growth-suppression mechanisms and of the immune system, resistance to programmed cell death, promotion of angiogenesis, migration, and metastasis, and reprogrammed energy metabolism. These features, driven by the cells themselves or influenced by the surrounding tumor microenvironment (TME), contribute to tumor progression (Hanahan and Weinberg 2000, 2011). The TME comprises the cancer and stromal cells, all embedded within a dynamic extracellular matrix. Through this matrix, these different cell populations interact, either directly or through paracrine signaling (Arneth 2020). TME components significantly influence tumor progression and its potential elimination, in part due to the composition of the immune infiltrate, which can be enriched in either effector or suppressor cells (da Cunha et al. 2019; Duan et al. 2020).

MSCs are currently known for their immunomodulatory capacity, exerted mainly through their secretome (Praveen Kumar et al. 2019); they can also migrate and home to sites of inflammation, including tumors (Caplan 2008, 2019; Nombela-Arrieta et al. 2011). However, their role in the TME has been shown to be paradoxical, either promoting or preventing tumor progression (Hong et al. 2014). There is growing evidence to suggest that as MSCs establish their niches, influenced by their interactions with surrounding cells, they may shift to a pro- or antitumor phenotype (Berger et al. 2016; Hass 2020).

Due to the multifactorial nature of cancer, plant-based therapies offer an alternative to the required multifactorial approach, based on the hypothesis that the different metabolites act synergistically on different molecular targets, potentially achieving tumor eradication (Williamson 2001; Lopes et al. 2017). On this basis, we propose to investigate two extracts: one rich in flavonoids obtained from the plant Petiveria alliacea (Anamu-SC), which has been shown to have a cytotoxic effect on leukemic and melanoma tumor lines, and the other obtained from the plant Caesalpinia spinosa (P2Et), which stands out for its antioxidant capacity, induction of immunogenic cell death, and reduction of tumor size in vivo (Urueña et al. 2015; Gomez-Cadena et al. 2016; Sandoval et al. 2016; Hernández et al. 2017; Ballesteros-Ramírez et al. 2020; Lasso et al. 2020).

Regarding natural products, the roles they may play in tissue regeneration processes by acting on MSCs have been described (Saud et al. 2019). The use of MSCs for delivering antitumor drugs and for modulating tumor angiogenesis through secreted factors has been proposed (Lan et al.
2021). However, there are no clear studies that investigate the effect of MSCs within the TME for therapeutic purposes. In this study, we aim to investigate the effects of the natural products Anamu-SC and P2Et on the interaction between MSCs obtained from human adipose tissue (hAMSCs) and human melanoma and breast cancer cell lines (A375 and MCF7). The MSC secretome is thought to be responsible for several of the MSC properties; therefore, we decided to also study the effect of the hAMSC-conditioned medium (hAMSC-CM) combined with the natural products, evaluating their effects on cancer cell viability, migration, and clonogenicity. We further analyzed, by RT-qPCR and Multiplex assay on hAMSCs, the expression of secretome-associated molecules that have been reported to play a role in tumor progression or elimination.

Cell culturing

hAMSCs were obtained from the cell bank of the Tissue Engineering and Cell Therapy laboratory and were characterized by flow cytometry (detailed in the Supplemental Material). Tumor cell lines were provided by the Immunobiology and Cell Biology group of Pontificia Universidad Javeriana. Cells were kept in standard conditions as described in the Supplemental Material.

Natural products

Natural products were produced using Caesalpinia spinosa and Petiveria alliacea plant materials from Colombia; the Colombian Environmental Ministry authorized the research through the agreement for Access to Genetic Resources and Derived Products no. 220/2018 (RGE 0287-6).

To obtain P2Et, pods and fruits of Caesalpinia spinosa (Molina) Kuntze (divi-divi or tara) were collected in the wild in the municipality of Villa de Leyva, Province of Ricaurte (Department of Boyacá, Colombia), in the polygon delimited by the following geographic coordinates: between 5°37'95'' north and 5°39'17'' north, and between 73°32'19'' west and 73°34'63'' west. Annual average temperatures between 14.7 and 27.5 °C were recorded in March 2013, and the material was identified by Luis Carlos Jimenez from the Colombian National Herbarium (voucher specimen L 523714). P2Et was standardized and chemically characterized as previously described (Sandoval et al. 2016).

On the other hand, the leaves of Petiveria alliacea were collected in Quipile, Cundinamarca, Colombia, at a temperature of 24 °C; the plant material was identified by Antonio Luis Mejía from the National Herbarium of Colombia (voucher number COL 333406). The Anamu-SC extract was obtained through supercritical fluid extraction at Corporación Universitaria Lasallista at 60 °C, 400 bar, and a flow of 30 kg/h, with 15% ethyl acetate as a cosolvent; the procedure was previously described (Ballesteros-Ramírez et al. 2020).

For each assay, both extracts were diluted in 95% ethanol (Merck, Darmstadt, Germany) to obtain a fresh 25 mg/ml solution and homogenized by vortexing. Working solutions were stored for a maximum of 1 month at 4 °C.
hAMSC-CM preparation

hAMSCs were plated in a 75 cm² culture flask until 80-90% confluency was reached. Then, four washes with Ringer's lactate, and a final one with MEM (without phenol red and without supplementation, Corning, Corning, USA), were done to remove any traces of platelet lysate. Cells were then cultured in 5 ml Dulbecco's modified Eagle medium (DMEM)/F12 supplemented with 1% penicillin/streptomycin and 1% L-glutamine for 24 h. The hAMSC-CM was collected, centrifuged at 85 RCF for 10 min, and filtered through 0.22 μm syringe filters; it was then aliquoted and stored at -20 °C until use.

MTT assay

hAMSC, A375, and MCF7 cell viability was determined using the MTT assay (Alfa Aesar, Ward Hill, USA) after treatment with Anamu-SC, P2Et, or hAMSC-CM, alone or in combination with the natural products. 3 × 10³ cells were seeded in 96-well plates and allowed to attach overnight. They were treated with serial dilutions of the natural products ranging from 500 to 7.81 μg/ml for 48 h; afterwards, the medium was discarded, the cells were washed once with Ringer's lactate, and MTT was added (final concentration 0.33 mg/ml) and incubated for 4 h. The MTT was discarded, and the formazan crystals were solubilized in dimethyl sulfoxide (DMSO, Fisher Scientific, Pittsburgh, USA) to measure the absorbance at 540 nm (Liu et al. 1997) (EPOCH microplate reader, BioTek, Winooski, USA). Absorbance values were normalized to negative controls, and doxorubicin (10 nM) served as the positive control.

Indirect coculture and viability by Alamar Blue

hAMSCs and tumor cells were cocultured in a transwell system (Corning, Corning, USA), and tumor cell viability was determined by Alamar Blue (Invitrogen, Waltham, USA), to model their interaction within the tumor microenvironment. A total of 19 × 10³ hAMSCs were seeded on 0.4 μm pore-size transwell inserts, either directly on the membrane or embedded in 100 μl of fibrin gel (apical side), and the same amount of A375 or MCF7 (1:1) was seeded on 24-well plates (basal side) and allowed to attach overnight. Then, cells were treated (both sides equally) with Anamu-SC or P2Et for 48 h using the corresponding IC50 for the tumor cells; 10 nM doxorubicin was used as a positive control and fresh medium as a negative control. Fibrin gels were prepared following the protocol described in (Gaviria et al. 2020). Cell viability was determined by Alamar Blue as instructed by the manufacturer. Briefly, wells were washed once with Ringer's lactate, then 500 μl of fresh medium and 50 μl of Alamar Blue were added to each well and left to incubate for 3 h. After that, 100 μl from each well was transferred to a new 96-well plate to measure the absorbance at 570 and 600 nm (EPOCH microplate reader, BioTek, Winooski, USA).

Colony forming units assay

The clonogenicity of tumor cells treated with the natural products was determined using the corresponding IC50, conditioned media, or a combination. For this purpose, 250 × 10³ cells per well were seeded in 6-well plates. The following day, the treatments, or serum-free medium as a positive control, were added for 48 h. Cells were then detached and reseeded (1 × 10³ cells per well) in triplicate in 6-well plates; they were maintained for 10 days, allowing colonies to form in the control group. Colonies were fixed and stained with 0.5% crystal violet (Merck, Darmstadt, Germany) diluted in 80% methanol (Franken et al. 2006). Colonies were segmented and counted using Fiji.

Migration assay

Tumor cell migration was assessed by the wound healing assay (Grada et al.
2017). A total of 3 × 10⁵ cells were plated on 24-well plates, forming a confluent monolayer, using medium with 2.5% fetal bovine serum (FBS). The next day, a scratch was made using a 200 μl micropipette tip, the medium was removed, and all wells were washed once with Ringer's lactate. Serum-free DMEM/F12 medium with 1% L-glutamine, or hAMSC-CM, was then added with the respective treatments using sublethal doses of the extracts (IC50/10); doxorubicin 1 nM was used as a control. Two pictures per well were taken at 0, 24, and 48 h. The images were then analyzed using Fiji software to determine the percentage of migration inhibition, calculated as

% Migration Inhibition = (Measured area / Initial area) × 100%.   (1)

Confocal microscopy

The MCF7 cytoskeleton arrangement was observed by confocal microscopy using the same treatments as in the wound healing assay. A total of 20 × 10³ cells were seeded on fibronectin-pretreated glass slides (Gibco, Grand Island, USA). The next day, the respective treatments were added in duplicate for 24 h. The cells were washed with blocking solution [phosphate-buffered saline (PBS) with 2% FBS], fixed with 4% paraformaldehyde (Sigma, Burlington, USA) for 10 min, and permeabilized with 0.1% Triton X-100 (ICN Biomedicals, Santa Ana, USA) for 5 min; then, they were labeled with Alexa Fluor™ 594 Phalloidin (2 μl/ml in PBS, Invitrogen, Waltham, USA) for actin filaments, and with 600 nM DAPI (Invitrogen, Waltham, USA) as nuclear staining. After each labeling step they were washed twice with blocking solution. Finally, they were placed on slides with a drop of ProLong (Molecular Probes, Eugene, USA) and allowed to dry overnight, to be observed using an FV1000 laser scanning microscope from Olympus (Tokyo, Japan). Cells were visualized using a UPLSAPO 60× NA 1.35 objective, and fluorescence emission was obtained using the 50 mW 405 nm diode laser line and the multi-argon FV5-LAMAR 30 mW 543 nm laser line. Images of 640 × 640 pixels were obtained using X, Y, Z laser scanning, and a z-projection was constructed for each field. The experiment was performed in duplicate, and for each condition at least three different fields were analyzed; on this occasion, only hAMSC-CM from sample 4 was used.

RT-qPCR for secretome cytokine expression

Total RNA of hAMSCs was extracted using TRIzol LS reagent according to the manufacturer's instructions (Life Technologies Corporation, Invitrogen, Waltham, USA) after 48 h of treatment with both natural products at a concentration of 60 μg/ml, and RNA quality and quantity were assessed by NanoDrop spectrophotometry (NanoDrop Technologies, Wilmington, USA). Then, complementary DNA (cDNA) was synthesized with SuperScript III Reverse Transcriptase (Invitrogen, Waltham, USA) following the manufacturer's protocol. Afterwards, quantitative real-time PCR was carried out using 600 ng of cDNA, iTaq Universal SYBR Green Supermix (BIORAD, Hercules, USA), and 250 nM forward and reverse primers in a total volume of 20 μl (Supplemental Material Table S2). Reactions were done in duplicate using the QuantStudio 3 Real-Time PCR system (Invitrogen, Waltham, USA).

Multiplex

The supernatants of the three hAMSCs were collected for protein quantification using the multiplex assay. A total of 3.5 × 10³ hAMSCs per well were seeded on 12-well plates, and after 2 h, the TGF-β group was treated with the purified cytokine (Jiménez et al.
2023) at 5 ng/ml for 24 h. TGF-β was then removed, and the extracts were added at 60 μg/ml (the average of all IC50s) for another 24 h; subsequently, all wells were washed five times and FBS-free medium was added to produce the hAMSC-CM, which, after 24 h, was collected, centrifuged, and transferred to 1.5-ml tubes for storage at -80 °C until use. Quantification of proteins was done using the MILLIPLEX Human Cytokine/Chemokine Magnetic Bead Panel (HCYTOMAG-60-10) following the manufacturer's instructions. Data were processed using Belysa® (Merck, Darmstadt, Germany).

Statistics

GraphPad Prism 8 software (Boston, USA) was used for statistical analyses. Normality was assessed with the Shapiro-Wilk test, and comparisons were made using the Kruskal-Wallis test with Dunn's correction for non-normal distributions or, for normal distributions, Student's t-test with Welch's correction, which does not assume homoscedasticity. For the wound assay, a mixed-effects model with repeated measures and multiple comparisons with Tukey's correction was used. Differences were considered significant when p < 0.05.

Natural products and the hAMSC-CM act additively to decrease the viability of tumor lines, in contrast to the coculture

The antitumor effect of Anamu-SC and P2Et has been demonstrated in several previous studies (Urueña et al. 2015; Gomez-Cadena et al. 2016; Sandoval et al. 2016; Hernández et al. 2017; Ballesteros-Ramírez et al. 2020; Lasso et al. 2020). Here, we observed a dose-dependent decrease in the viability of breast cancer and melanoma tumor cells over the course of 48 h of treatment (Supplemental Material Fig. 1a), obtaining the following IC50 values: for Anamu-SC, 79.9 μg/ml on A375 and 41.06 μg/ml on MCF7; for P2Et, 61.09 μg/ml on A375 and 70.82 μg/ml on MCF7. This effect is less noticeable in A375 when assessed by Alamar Blue using the corresponding IC50s directly, but for MCF7, viability remains decreased by about 50% (Fig. 1a). Treatment with hAMSC-CM decreases the viability of both tumor cell lines as evaluated by both techniques, showing behavior similar to that of the extracts (Fig. 1a and Supplemental Material Fig. 1). Notably, adding either extract to the hAMSC-CM further enhances the reduction in viability, particularly with P2Et, as evident in both the MTT and Alamar Blue assays. We performed indirect cocultures of hAMSCs and the tumor lines using 0.4-μm transwell systems, facilitating their indirect interaction by seeding the hAMSCs both in two-dimensional culture and in three-dimensional fibrin gels. In this model, a significant decrease in viability is seen only in the 3D P2Et coculture condition compared with the control, and even there the effect is much less pronounced than with the hAMSC-CM. Moreover, MCF7 viability is significantly increased in three-dimensional coculture, with a similar trend observed in the two-dimensional coculture. The other coculture groups behave similarly to the control. To confirm the viability results for tumor cells treated with extracts, hAMSC-CM, or their combination, the incorporation of propidium iodide was determined as a marker of cell membrane damage. As with MTT and Alamar Blue, a significant decrease in viability was observed in the groups receiving extracts plus hAMSC-CM at 48 h (Supplemental Material Fig. 1b). Taking these results together, we propose that hAMSCs, particularly their secretome, potentiate the cytotoxic effects of the natural extracts.
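For readers who wish to reproduce the dose-response analysis, the sketch below illustrates how IC50 values of the kind reported above can be estimated from normalized viability data with a four-parameter logistic model. The concentration range mirrors the serial dilutions described in the Methods, but the viability values, function names, and fitting bounds are illustrative assumptions, not data or code from this study.

```python
# Minimal sketch: estimating an IC50 from normalized MTT viability data
# with a four-parameter logistic (Hill) model. The concentrations mirror
# the serial dilutions described above (500 -> 7.81 ug/ml); the viability
# values are illustrative placeholders, not measurements from the study.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic: viability as a function of concentration."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

conc = np.array([500, 250, 125, 62.5, 31.25, 15.63, 7.81])        # ug/ml
viability = np.array([0.12, 0.21, 0.35, 0.55, 0.74, 0.88, 0.97])  # fraction of control

# Fit with loose, positivity-respecting bounds; p0 seeds the optimizer.
popt, _ = curve_fit(
    four_pl, conc, viability,
    p0=[0.05, 1.0, 60.0, 1.0],
    bounds=([0, 0.5, 1.0, 0.1], [0.5, 1.5, 500.0, 5.0]),
)
print(f"estimated IC50 = {popt[2]:.1f} ug/ml (Hill slope {popt[3]:.2f})")
```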
hAMSC-CM does not affect the ability of natural extracts to reduce colony formation of A375 cancer cells

The reduction of the colony-forming capacity of different tumor cell lines treated with P2Et has been studied previously (Castañeda et al. 2012; Urueña et al. 2013), but not with Anamu-SC. Here, we investigated whether the hAMSCs could affect this activity of the natural products. In this work, the reduction in clonogenicity is also observed using sublethal concentrations of the extracts (IC50/10) after 48 h of treatment in A375 and MCF7 cells. P2Et significantly reduces the number of both A375 and MCF7 colonies, and Anamu-SC significantly reduces the clonogenic potential of A375 and completely inhibits the formation of MCF7 colonies. On the other hand, hAMSC-CM by itself does not affect colony formation of either A375 or MCF7, and when combined with the natural extracts it preserves their colony-reducing effect, clearly so for A375, although there is a tendency to reverse this effect in MCF7 (Fig. 2b).

hAMSC-CM decreases the migration capacity of tumor lines

The motility of tumor cells and their ability to migrate collectively were evaluated by the wound assay. On average, MCF7 achieved 22% greater wound closure than A375, while hAMSC-CM significantly decreased the wound closure rate in both tumor lines (Fig. 3a). Additionally, in MCF7 there is no significant difference between the three time points evaluated, suggesting a complete inhibition of collective migration. Both natural products, evaluated at sublethal concentrations (IC50/10), decreased migration in A375 but not in MCF7 at 24 h (Fig. 3a). When combining hAMSC-CM with the extracts, the effect of the former seems to prevail in all conditions, as no significant differences were found between the hAMSC-CM groups with and without natural products. Anamu-SC has previously been shown to alter the cytoskeleton at the level of actin microfilaments in A375 cells (Urueña et al. 2008). Here, the same effect is observed on MCF7 cells (Fig. 3b); however, these cytoskeletal alterations are not observed with P2Et. In agreement with the migration results, hAMSC-CM seems to have the capacity to alter actin microfilaments, and this effect persists when the natural products are added, even with P2Et, which by itself does not significantly affect the cytoskeleton.

P2Et could reverse a protumor-like state in hAMSC

Motivated by the hAMSC-CM effects described above, we sought to partially characterize its secretome and assess the effect of the natural products on it. Notably, immunomodulation is a core feature of MSCs, as they act as signaling cells secreting trophic factors such as cytokines and growth factors. The effect of natural products on MSC differentiation and proliferation has been studied (Saud et al. 2019), but little is known about their impact on the hAMSC secretome. In this work, we selected some of the cytokines that have been reported to be part of the hAMSC secretome and that also play a role in tumor progression. Figure 4a shows the variation in gene expression in the three different hAMSC samples. Variations in the response to the natural products are evident in the three samples, especially in the expression of TGF-β in hAMSC-4, which is greater than 30-fold with respect to the control versus two- to threefold in the other two samples. P2Et was able to significantly reduce the expression of IL-6, one of the main cytokines present in the tumor microenvironment favoring tumorigenesis and tumor progression (Kumari et al.
2016), and of RANTES, known to favor tumor progression and metastasis (Karnoub et al. 2007), while no significant differences were detected in the levels of released IL-6 and RANTES as evaluated by multiplex (Fig. 4a). The relative expression of TRAIL, a potent inducer of apoptosis, was not significantly modified. Angiogenesis is one of the hallmarks of cancer, favored in hypoxic environments such as the TME and induced by growth factors such as vascular endothelial growth factor (VEGF; Carmeliet 2005). The expression of this growth factor was confirmed in the three hAMSC samples; however, no significant differences were found after treatment with the natural products, although there was an increase in expression in hAMSC-4 treated with Anamu-SC, which was also increased in the secretome. While P2Et did not significantly change the synthesis of the evaluated factors, hAMSCs treated with Anamu-SC showed a significant increase in G-CSF, TNF-α, and VEGF.

Discussion

The use of medicinal plants is well known and accepted in multiple scenarios and has been promoted by the WHO (Farnsworth et al. 1985). Several published studies show that some plant derivatives promote the proliferation and differentiation of hAMSCs, with potential use in regenerative medicine (Khalilpourfarshbafi et al. 2019; Buhrmann et al. 2020). Up to the date of this work, little research had been done on the use of products derived from Petiveria alliacea or Caesalpinia spinosa in hAMSC cultures. Reports on the use of flavonoid-rich products such as naringin, also used as an antineoplastic (Memariani et al. 2021), indicate concentration-dependent increases in BM-MSC proliferation through ERK activation and promotion of osteogenic differentiation via expression of Runx2, OSX, OCN, and Col1 (Wang et al. 2017; Lavrador et al. 2018; Saud et al. 2019). A similar effect was observed with icariin, a flavonoid glycoside with anticancer properties (Qin et al. 2015). Resveratrol, a polyphenolic natural product with antioxidant, anti-inflammatory, and antitumor properties, has been used at low concentrations to favor MSC proliferation via activation of the ERK1/2 and MAPK pathways. In addition, it can participate in osteogenic, adipogenic, and neurogenic differentiation, although in some cases an inhibitory effect on adipogenic differentiation has been reported (Peltz et al. 2012; Saud et al. 2019; Hu and Li 2019). While indirect coculture with hAMSCs did not significantly alter tumor cell growth (Fig. 1a), hAMSC-4 increased the viability of tumor cells in some cases. Notably, cocultured tumor cells better withstand treatment with the extracts. Preisner and collaborators investigated the impact of hAMSCs on four melanoma cell lines and two primary melanoma cultures, finding that both tumor cells and hAMSCs acquire a protumoral phenotype favoring migration, invasion, and angiogenesis, but few differences in the proliferation of either hAMSCs or the melanoma lines (Preisner et al. 2018), similar to what was found in this work. Similarly, Koellensperger et al.
studied the interaction of hAMSCs with different breast cancer lines, including MCF7, finding no significant differences in viability, in line with the results of this work; however, they did find changes in gene expression in both populations compared with monoculture. In addition, secretion of MMP-1, MMP-3, MMP-10, HGF, IL-6, IL-8, IL-12, and VEGF by MCF7 was detected, while hAMSCs showed minimal changes, accompanied by an increase in migration of both MCF7 and hAMSCs and an increase in the invasive potential of hAMSCs but not of MCF7 (Koellensperger et al. 2017). As discussed, MSCs are sensitive to the microenvironment, and their phenotype reflects it. In general, MSCs present in the TME acquire a phenotype that supports tumor growth (Berger et al. 2016; Hass 2020), either by releasing trophic factors, as shown in this work (Fig. 4 and Supplemental Material Fig. 3), or by differentiating into other cell types such as cancer-associated fibroblasts (Barcellos-de-Souza et al. 2016). On the other hand, hAMSC-CM, similar to the extracts at their IC50 concentrations, significantly reduced tumor cell viability. Importantly, this effect was additive when hAMSC-CM was combined with the extracts, particularly evident with P2Et, and extended to the clonogenic potential of the tumor cells (Figs. 1a and 2). When viability was evaluated by propidium iodide (PI) incorporation, it was also apparent that the combined treatment significantly reduced the viability of both tumor cell lines after 48 h, showing an additive effect (Supplemental Material Fig. 1b). Other work has shown that MSC-conditioned media can induce immunogenic cell death (Lin et al. 2017), a mechanism previously observed in tumor cells treated with P2Et (Castañeda et al. 2012), and CM from hAMSCs cultured at high density for three days induced cell death in MCF7 cells together with high expression of IFN-β via the JAK/STAT1 pathway (Ryu et al. 2014). Treatment with hAMSC-CM significantly reduced collective cell migration in tumor cells (Fig. 3), aligning with reports from other groups for A375 and MCF7 (Ahn et al. 2015; Visweswaran et al. 2018). Treatment with the extracts individually showed a weaker antimigratory effect than the hAMSC-CM group, suggesting that the effect observed in the combined treatment is mainly due to hAMSC-CM. In agreement with these observations, a study by Clarke et al. showed that conditioned medium obtained from an immortalized MSC line inhibited the migration of MDA-MB-231 breast cancer cells, an effect partially attributed to the expression of TIMP-1/2, which suppresses the activity of MMP-1, MMP-2, and MMP-9 (Clarke et al. 2015). A decrease in MMP-2- and MMP-9-mediated migration has also been demonstrated in the A375 and MCF7 lines (Li et al. 2014; Ji et al. 2015), in both cases via NF-κB inhibition. In the same context, it has been observed that MSCs can secrete IL-1Ra, thereby preventing activation of the NF-κB pathway (Volarevic et al. 2010); IL-1Ra secretion was confirmed here by multiplex (Fig. 4), albeit at low levels. Cytoskeletal alterations induced by Anamu-SC were previously reported in A375 cells (Urueña et al. 2008). Here, we observed a similar effect in MCF7 cells; additionally, hAMSC-CM also seems to alter the MCF7 cytoskeleton, and this effect is preserved in the combined treatments with Anamu-SC and with P2Et, even though the latter alone does not produce it. These findings align with the decrease in tumor cell migration (Fig. 3b).
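As an illustration of the migration quantification defined in Eq. (1) of the Methods, the following sketch computes the percentage of wound area remaining from areas segmented in Fiji. The area values are illustrative placeholders, not measurements from this study.

```python
# Minimal sketch: computing % migration inhibition (Eq. 1) from wound
# areas segmented in Fiji. Areas are illustrative placeholders (px^2);
# each row is one well, columns are time points 0, 24 and 48 h.
import numpy as np

areas = np.array([
    [120_000, 80_000, 35_000],   # untreated control
    [118_000, 102_000, 90_000],  # hAMSC-CM treated
])

initial = areas[:, [0]]               # wound area at 0 h
inhibition = areas / initial * 100.0  # % of initial area still open

for label, row in zip(["control", "hAMSC-CM"], inhibition):
    print(label, [f"{v:.0f}%" for v in row])
# A larger remaining-area percentage at 24/48 h indicates stronger
# inhibition of collective migration, as reported for hAMSC-CM above.
```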
TGF-β altered the secretome profile of hAMSC, increasing the concentration of growth factors and cytokines such as FGF-2, IL-6, IL-10, PDGF-AA, RANTES, and VEGF, but it also decreased the expression of antitumoral factors such as TRAIL (Fig. 4b and Supplemental Material Fig. 3). Researchers have aimed to revert the protumoral phenotype in different stromal cells within the tumor microenvironment, including MSCs; however, limited research exists on the potential use of natural products for this purpose. A previous work showed that polyphenol-rich compounds can decrease the release of proinflammatory cytokines such as IL-6 or IL-8 in MSCs previously exposed to an acidic environment (Di Pompo et al. 2021); similarly, in our work we found that P2Et can decrease several protumoral factors after treatment with TGF-β. Conversely, Prakoeswa and colleagues (Prakoeswa et al. 2020) reported that resveratrol treatment increases MSC production of the protumor factors EGF, HGF, PDGF, and TGF-β. In the present work, it was also observed that levels of factors such as FGF-2, IL-6, IL-10, RANTES, and VEGF increased when hAMSCs were treated with Anamu-SC, whereas the use of Anamu-SC after TGF-β stimulation had minimal impact. In view of the alterations observed in the phenotype of MSCs due to direct or paracrine interaction with components of the TME (Kapur and Katz 2013; Maumus et al. 2013; Zimmerlin et al. 2013), our results suggest the potential of obtaining trophic factors from MSCs under controlled conditions, without the variability and possible induction of protumor characteristics that come with exposing the cells to the TME. This work showed that hAMSCs can protect tumor cells from the cytotoxic effect of the natural products in coculture (two-dimensional and three-dimensional), perhaps through the release of tumor-supporting cytokines induced by the crosstalk, whereas the conditioned medium showed antitumor activity and acted additively with the plant extracts in decreasing tumor cell viability, clonogenicity, and migration. Researchers are currently aiming to target different components of the TME, as these have been shown to play a key role in tumor progression. It is here that P2Et, besides having antitumor properties, also showed promising results in reverting the protumorigenic state of hAMSCs in this system.

Fig. 1 Natural products Anamu-SC and P2Et affect the viability of tumor cell lines and hAMSC. a Viability of A375 and MCF7 treated with Anamu-SC, P2Et, hAMSC-CM, or their combination. Tumor cell lines were treated (basolateral side) with each extract or placed in indirect coculture (Co) with hAMSC (apical side) at a 1:1 ratio; hAMSCs in fibrin gels were also tested in coculture (Co 3D). Complete medium was used as a negative control (Cnt) and doxorubicin 10 nM (Doxo) as a positive control. The median and interquartile range are shown, N = 3. b Normalized

Fig. 2 Natural products Anamu-SC and P2Et decrease the clonogenic potential of tumor cell lines. Tumor cell lines A375 (a) and MCF7 (b) were cultured and treated with the IC50 of each extract, hAMSC-CM
Fig. 3 hAMSC-conditioned medium decreases tumor cell line migration. a Individual and combined treatments with hAMSC-CM at 24 and 48 h. Means with standard deviation are shown. Significant differences are considered for values of p < 0.05 (* with respect to the value at 0 h and # with respect to the untreated control; differences between treatments are indicated with a bar). Three experiments were performed for each treatment in triplicate, and two images were acquired per well (N = 3). b Representative images of the MCF7 cytoskeleton obtained by confocal microscopy (N = 2). The experiment was performed in duplicate. From each condition at least three different fields were analyzed, and only hAMSC-CM from sample 4 was used; scale bars are 30 μm.
Asymptotic Expansion of the Elastic Far-Field of a Crystalline Defect

Abstract

Lattice defects in crystalline materials create long-range elastic fields which can be modelled on the atomistic scale using an infinite system of discrete nonlinear force balance equations. Starting with these equations, this work rigorously derives a novel far-field expansion of these fields: the expansion is computable and is expressed as a sum of continuum correctors and discrete multipole terms which decay with increasing algebraic rate as the order of the expansion increases. Truncating the expansion leaves a remainder describing the defect core structure, which is localised in the sense that it decays with an algebraic rate corresponding to the order at which the truncation occurred.

Introduction

The field of continuum solid mechanics has been highly successful in providing robust predictions of material behaviour at a wide range of length-scales. In crystalline materials in particular, it is recognised that the predictions made by the equations of linear elasticity are valid with tolerable errors even when resolving features such as defects whose characteristic size is close to that of the interatomic spacing. This said, when considering processes which involve the genuine thermodynamic, electronic and chemical properties of such defects, the fundamental discreteness of matter becomes crucial, and no single continuum model can be sufficient to completely capture the fine detail of a material's behaviour at this scale. Moreover, it is exactly these fine details which determine phenomena such as a material's yield strength and behaviour under cyclic loading, both of which are crucial to understand for engineering applications.

As a result, a range of theoretical techniques has been developed over the last 60 years which seek to predict and compute defect behaviour in crystals, connecting discrete and continuum models of these materials. Broadly, these approaches can be divided into two categories, namely concurrent and sequential modelling strategies. Models in the former class combine discrete and continuum models into a single system which can then be solved numerically as a whole, while those in the latter class generally involve iteration over separate models acting at different scales. In particular, the last 25 years have seen a great deal of research activity focused on concurrent strategies, with one of the most significant developments in this area being the quasicontinuum method [TOP96] and its many variants [LO13]. In contrast, the present work revisits sequential strategies for accurately modelling defects, but with a new perspective that recent progress in the study of multi-scale models has enabled, and our results on the structure of crystalline defects have direct consequences for the design and evaluation of more general models, including both purely atomistic and concurrently coupled models.

Starting from a discrete energy for a material defined on an infinite domain [EOS16], we develop a hierarchy of linear continuum PDE systems which can be efficiently derived and solved numerically. The solutions to these PDE systems form a sequence of smooth predictors which describe the far-field behaviour of the lattice strain around a defect to within arbitrary accuracy, and thus provide increasingly accurate boundary conditions to be used on the discrete model when confined to a finite domain.
The key idea behind our approach is to exploit the knowledge that the variation in the strain field generally decays smoothly away from the core of any localised defect, so that the far-field behaviour is more and more accurately predicted by the solutions of the continuum PDE models we define. These far-field approximate solutions are coupled with the properties of the discrete defect core, encapsulated by the spatial moments of the acting forces and expressed as a multipole expansion. The relative simplicity of these moments provides an elegant, computable way to transfer information from the nonlinear discrete problem to the continuum hierarchy. Moreover, the coupling is "weak" in the sense that a term in the continuum hierarchy only requires information on the multipole terms of strictly lower order. Indeed, all terms are defined and computable in sequence, without concurrent coupling.

Our approach has connections with classical approaches to modelling defects in continuum linear elasticity using the defect dipole tensor [Esh56, NH63] (also known as the elastic dipole tensor or the double force tensor) and with Sinclair's work on atomistic models of fracture [SL72, Sin75]. More recent related work includes that of Trinkle and coworkers, where lattice Green's functions have been used to improve the accuracy of defect computations [Tri08, TT16], along with mathematical developments in our understanding of the regularity of discrete strain fields advanced by the authors and coworkers [HO12, EOS16, BS16, BBO19]. In particular, [BBO19] explores some of the initial ideas of our mathematical strategy in a simplified setting. The approach presented here unifies many of the ideas involved in these previous works into a single framework and expands them systematically to higher orders. More generally, the powerful structural results we present here serve as a useful tool in any discussion of crystalline defects where high accuracy is required. In particular, we will outline how our results may be used as a foundation for a rigorous numerical analysis of defect algorithms and also provide a path to systematically improve their accuracy.

Methodology

Our starting point is to consider a total energy E(u) for displacements u of an infinite lattice Λ of atoms. We make the mild assumption that the total energy of a displacement is expressed as a sum of site energies, i.e. contributions to the total energy arising from the environment of each atom. Under the assumption of frame indifference, this energy (and the site energies which make up the total energy) must depend only upon the relative displacements between atoms. An equilibrium displacement $\bar u$ of the energy $E$ satisfies the force balance equation $\delta E(\bar u) = 0$, and it is this infinite system of discrete equations which we study. Since equilibrium displacements $\bar u$ exhibit decay properties away from the core of point defects and dislocations [EOS16], we can develop these equations around the state of zero displacement to derive approximate equations for the equilibrium displacement $\bar u$ in the far field. This comes in two steps (summarised schematically after this list):

• As a first step, we can expand the site potential around zero to obtain linear and nonlinear lattice operators, which still depend on finite differences (atomic bonds); and

• As a second step, we can use an expansion for the displacement itself to replace finite differences with a gradient and higher derivatives, obtaining continuum PDE approximations to the discrete lattice equations.
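In display form, the outcome of these two steps, as developed in the sections below, can be summarised schematically as follows. This is a paraphrase of the expansion, not a verbatim equation from the paper: the precise definitions of the correctors $u^{\mathrm C}_i$, the multipole coefficients $b^{(i,k)}$, and the operators $S_i$ are given later, and the logarithmic exponent $q$ is left unspecified here.

```latex
% Schematic summary (assumed form); precise statements are Theorems 3.1/3.5.
\[
  \bar u \;=\;
  \underbrace{u^{\mathrm C}_0 + u^{\mathrm C}_1 + \dots + u^{\mathrm C}_p}_{\text{continuum correctors}}
  \;+\;
  \underbrace{\sum_{i}\sum_{k=1}^{N} b^{(i,k)} : D^{i}_{\mathcal S} G_k}_{\text{multipole terms}}
  \;+\; r_p,
  \qquad
  |D r_p(\ell)| \lesssim |\ell|_0^{\,1-d-p}\,\log^{q}|\ell|_0,
\]
\[
  H^{\mathrm C}\big[u^{\mathrm C}_i\big] \;=\; S_i\big(u_0,\dots,u_{i-1}\big),
  \quad\text{i.e. every corrector solves CLE with a lower-order right-hand side.}
\]
```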
Note already that the latter Taylor expansion requires sufficiently smooth continuum displacements and cannot be applied to the discrete $\bar u$ itself, so some care needs to be taken when applying this strategy. The PDE approximations come in the form of a hierarchy of corrector equations. Crucially, each one of these corrector equations has the same form: the continuum linear elasticity (CLE) equation must be solved, but with different right-hand sides (forces) that depend on previous terms in the expansion. Such hierarchies of corrector equations have been explored by other authors before; see, e.g., [EM07] for a formal hierarchy of similar corrector equations in the defect-free setting. In the present work, we take great care to define all terms rigorously, ensure sufficient regularity, and provide sharp estimates for all resulting error terms. We then combine these continuum correctors with a multipole series which can be obtained from the moments of the linearised residual forces. Overall, this enables us to obtain far-field approximations to arbitrarily high order for $\bar u$, characterised by a discrete remainder whose locality (decay) is precisely controlled.

While our general approach and many of our theoretical results are generic, some significant technical and conceptual challenges come into play when applying the results to specific defects. Specifically, questions arise regarding the geometry and the relation of a defect state to a reference lattice, as well as the precise properties of the lattice Green's function after adjusting for the geometry. These challenges can sometimes be overcome ad hoc at leading order (as in [EOS16] for edge dislocations) but require a more detailed understanding for higher orders. Our main focus here is the general methodology. We will therefore restrict ourselves to two defect types that are geometrically relatively simple, namely general point defects and (straight) screw dislocations.

Outline

The paper is organised as follows. In § 2, we develop our general theory independent of specific defects: we present our main theoretical results concerning the decay of displacement fields and their approximation by multipole expansions for linearised lattice models in § 2.2, and then outline the derivation of the continuum corrector PDEs in § 2.3. In § 3, we apply our methodology to our nonlinear atomistic models of point defects in Theorem 3.1 and screw dislocations in Theorem 3.5, and explain the implications for the convergence of numerical methods which exploit the result. § 4 presents our conclusions and discusses the outlook for extending and applying these results in the future. The proofs of our decay estimates for the linearised models are then provided in § 5, a full discussion of the lattice Green's function in § 6, and the proofs of Theorems 3.1 and 3.5 in § 7.

Models and Notation

The Atomistic Energy and General Notation. Our results are concerned with the modelling of crystalline defects, i.e. local regions of non-uniform atom arrangements embedded in a homogeneous host crystal. In this section we start with the homogeneous setting itself. In § 3 we will then look at the description of defect configurations and discuss how the homogeneous results can be applied there.
The homogeneous crystal is described by a Bravais lattice $\Lambda := A\mathbb{Z}^d$ as the reference, where $d \geq 2$ and $A \in \mathbb{R}^{d\times d}$ is non-singular, and by displacements $u : \Lambda \to \mathbb{R}^N$, where we allow $N \neq d$ in order to model a range of scenarios (for example, in some models for pure screw dislocations we have $d = 2$, $N = 3$, while in anti-plane shear $d = 2$, $N = 1$). We denote discrete differences by $D_\rho u(\ell) := u(\ell+\rho) - u(\ell)$ for $\ell, \rho \in \Lambda$. Later on we will also consider higher discrete differences, which we denote by $D_{\rho_1\cdots\rho_j} u := D_{\rho_1}\cdots D_{\rho_j} u$.

Next we consider an interaction neighbourhood $\mathcal{R} \subset \Lambda\setminus\{0\}$. We assume throughout that $\mathcal{R}$ is finite, that $\mathcal{R} = -\mathcal{R}$, and that it spans the lattice, $\mathrm{span}_{\mathbb{Z}}\,\mathcal{R} = \Lambda$. Based on $\mathcal{R}$ we define the discrete difference stencil $Du(\ell) := D_{\mathcal{R}} u(\ell) := (D_\rho u(\ell))_{\rho\in\mathcal{R}}$. Again, later we will consider higher discrete differences and in particular apply $D$ $k$ times, for which we use the shorthand $D^k u = D\cdots Du$. With the discrete difference stencil we can formally write down the energy of $u$ as $E(u) := \sum_{\ell\in\Lambda} V(Du(\ell))$, where $V : \mathbb{R}^{N\times\mathcal{R}} \to \mathbb{R}$ is the site energy. We assume throughout that $V \in C^K(\mathbb{R}^{N\times\mathcal{R}})$ for some $K$ and that $V$ satisfies the natural and very mild point symmetry assumption $V\big((-g_{-\rho})_{\rho\in\mathcal{R}}\big) = V(g)$. Note that this is only a formal definition of the energy, as the sum might not converge. All of these quantities might also need to be adjusted to inhomogeneous generalisations $E^{\rm def}$, $\Lambda^{\rm def}$, $\mathcal{R}_\ell$, and $V_\ell$ to allow for a desired defect structure. Both of these aspects will be discussed in detail in § 3. In particular, we will make $E$ precise and establish differentiability properties for any $u$ in the discrete energy space $\mathcal{H}^1 := \{u : \Lambda \to \mathbb{R}^N \,:\, Du \in \ell^2(\Lambda)\}$. For future reference we also define the dense subspace $\mathcal{H}^{\rm c} \subset \mathcal{H}^1$ of displacements with compact support.

Variations of $E$ can then be written as $\delta^j E(u)[v_1,\dots,v_j]$. More generally, we use the notation $T[a_1,\dots,a_k]$ for a multi-linear operator $T$; for symmetric operators we shorten the notation further and write $T[a]^k := T[a,\dots,a]$. We denote the Hessian at zero by $H := \delta^2 E(0)$. It will be convenient to interpret these objects as linear functionals, belonging to $(\mathcal{H}^1)^*$, acting on the last test function. We will often use a pointwise representation based on the $\ell^2$ scalar product; for example, $\langle \delta E(u), v\rangle = \sum_{\ell\in\Lambda} \nabla V(Du(\ell)) \cdot Dv(\ell)$, where $\operatorname{Div} A := -\sum_{\rho\in\mathcal{R}} D_{-\rho} A_{\cdot\rho}$ is the discrete divergence for a matrix field $A : \Lambda \to \mathbb{R}^{N\times\mathcal{R}}$. In this notation we can also write down the force equilibrium equations $\delta E(u) = 0$ in the pointwise form $0 = \delta E(u)(\ell) = -\operatorname{Div} \nabla V(Du(\ell))$.

Lattice Stability and the Green's Function

We assume throughout that the Hamiltonian $H = \delta^2 E(0)$ is stable, and will equivalently call the lattice stable (see [HO12]), which by definition is the case if and only if there exists a $c_0 > 0$ such that $\langle H v, v\rangle \geq c_0 \|Dv\|_{\ell^2}^2$ for all $v \in \mathcal{H}^{\rm c}$. For stable operators $H$ there exists a lattice Green's function $G : \Lambda \to \mathbb{R}^{N\times N}$ such that $H[Ge_k](\ell) = e_k\,\delta_{\ell,0}$ for all $1 \leq k \leq N$, and whose discrete derivatives decay algebraically; see [EOS16]. Here we use the notation $|\ell|_0 := |\ell| + 2$ to write decay rates in the discrete setting in a more compact form, which we do throughout. We will also often just write $G_k := Ge_k$.

The Cauchy-Born Continuum Model

Our atomistic lattice model naturally gives rise to a corresponding continuum model based on the Cauchy-Born rule. In the continuum setting, one has an energy of the form $E^{\rm C}(u) = \int_{\mathbb{R}^d} W(\nabla u)\,dx$. The energy density $W$ is given by the Cauchy-Born rule, $W(M) := c_{\rm vol}^{-1} V\big((M\rho)_{\rho\in\mathcal{R}}\big)$ for any $M \in \mathbb{R}^{N\times d}$, where $c_{\rm vol} = |\det A| > 0$ is the volume of a lattice cell. We will later see in more detail the usefulness and limitations of the nonlinear continuum model for defect problems. We will also make heavy use of the linearised continuum problem for our corrector equations.
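To make the lattice Green's function concrete, the following sketch computes a periodic approximation of $G$ by Fourier methods for the simplest stable example, the scalar nearest-neighbour Laplacian on $\mathbb{Z}^2$. The choice of model, grid size, and tolerance are illustrative assumptions, not part of the paper's setup.

```python
# Minimal sketch: a periodic approximation of the lattice Green's
# function via Fourier methods, for the simplest stable example:
# the scalar nearest-neighbour Laplacian on Z^2 (N = 1,
# R = {+-e1, +-e2}). The paper's G solves H[G](l) = delta_{l,0}.
import numpy as np

M = 256                                  # periodic supercell, M x M sites
k1, k2 = np.meshgrid(np.fft.fftfreq(M) * 2 * np.pi,
                     np.fft.fftfreq(M) * 2 * np.pi, indexing="ij")

# Fourier symbol of H = -Div grad on Z^2: 4 - 2cos k1 - 2cos k2 >= 0.
symbol = 4.0 - 2.0 * np.cos(k1) - 2.0 * np.cos(k2)

rhs_hat = np.ones((M, M))                # Fourier transform of delta_{l,0}
g_hat = np.zeros_like(rhs_hat)
mask = symbol > 1e-12                    # exclude k = 0 (zero-mean mode)
g_hat[mask] = rhs_hat[mask] / symbol[mask]

G = np.real(np.fft.ifft2(g_hat))         # periodic lattice Green's function
# Discrete derivatives D_rho G decay algebraically, as used throughout;
# e.g. inspect |D_{e1} G| along a lattice axis:
D1G = np.roll(G, -1, axis=0) - G          # D_{e1} G(l) = G(l + e1) - G(l)
print([f"{abs(D1G[r, 0]):.2e}" for r in (4, 8, 16, 32)])  # roughly ~ 1/r
```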
These are given through the continuum Hamiltonian $H^{\rm C} = \delta^2 E^{\rm C}(0)$. In our pointwise notation the equilibrium equations are $H^{\rm C}[u] := -\operatorname{div}\big(\mathbb{C} : \nabla u\big) = 0$, where $\mathbb{C} = \nabla^2 W(0)$. These are the standard continuum linear elasticity (CLE) equations. The lattice stability (4) in particular also implies the Legendre-Hadamard stability of $\mathbb{C}$ (see [HO12, BS16]), so that (7) is elliptic and admits a continuum Green's function, or fundamental solution, $G^{\rm C}$.

Notation for Tensors

To work with various higher-order tensor products throughout, we establish a precise but compact notation. Given a $k$-tuple of vectors in $\mathbb{R}^d$, $\sigma = (\sigma^{(1)},\dots,\sigma^{(k)}) \in (\mathbb{R}^d)^k$, we denote their $k$-fold tensor product by $\sigma^{\otimes} := \sigma^{(1)}\otimes\cdots\otimes\sigma^{(k)}$. The vector space spanned by these tensor products is denoted $(\mathbb{R}^d)^{\otimes k}$, and it is easy to see that this space is isomorphic to $\mathbb{R}^{d^k}$. We also write $v^{\otimes k}$ when considering the $k$-fold tensor product of a single vector $v\in\mathbb{R}^d$. Let $S_k$ denote the usual symmetric group of all permutations acting on the integers $\{1,\dots,k\}$. This action extends to $k$-tuples and tensor products by defining $\pi(\sigma) := (\sigma^{(\pi(1))},\dots,\sigma^{(\pi(k))})$ and $\pi(\sigma^{\otimes}) := \sigma^{(\pi(1))}\otimes\cdots\otimes\sigma^{(\pi(k))}$ for any $\pi\in S_k$ and $\sigma\in(\mathbb{R}^d)^k$. For any $\sigma\in(\mathbb{R}^d)^k$, we define the symmetric tensor product by $\sigma^{\odot} := \frac{1}{k!}\sum_{\pi\in S_k}\pi(\sigma^{\otimes})$. The space spanned by these symmetric tensors is denoted $(\mathbb{R}^d)^{\odot k}$ and is a vector subspace of $(\mathbb{R}^d)^{\otimes k}$. The natural scalar product on $(\mathbb{R}^d)^{\otimes k}$ and $(\mathbb{R}^d)^{\odot k}$ is denoted by $A : B$ for $A, B \in (\mathbb{R}^d)^{\otimes k}$ and, as usual, is defined as the linear extension of $\sigma^{\otimes} : \rho^{\otimes} := \prod_{j=1}^k \sigma^{(j)}\cdot\rho^{(j)}$.

General Results for the Linearised Equation

At the most fundamental level, our results concern the characterisation of the far-field behaviour of lattice displacements $u : \Lambda\to\mathbb{R}^N$ that are close to equilibrium in the far field. More precisely, given a stable Hamiltonian $H$ as introduced in the previous section, we characterise the decay of a general lattice displacement $u$ provided that the (linearised) residual forces $f(\ell) := H[u](\ell)$ decay sufficiently rapidly as $|\ell|\to\infty$. In § 2.3 we will then show how to use these results for the linearised operator to obtain characterisations of the far-field behaviour of equilibrium displacements in our full nonlinear interaction model. For $f$ with sufficient decay we define its moments $\mathcal{I}^{(k)} := \sum_{\ell\in\Lambda} f(\ell)\otimes\ell^{\otimes k}$, $k\in\mathbb{N}_0$ (8). With this definition we have the following result.

Remark 2.2. If $H[u]$ satisfies the assumptions for some $\alpha < -1$ instead of $\alpha\in\mathbb{N}_0$, then no logarithmic factor is needed in the result (10). The same holds true for Theorems 2.4 and 2.6 below if $\alpha < -1$.

Remark 2.3. When first reading this theorem and the theorems in § 2 and § 3, we suggest ignoring the logarithmic terms and focusing only on the algebraic rates. However, the treatment of the logarithmic terms is an important aspect of both our theorems and proofs, since they appear to be intrinsic to the expansion and are not due to suboptimal estimates.

Suppose now that we have a general elastic field $u$ with non-vanishing moments (8) but still fast decay of the residual forces. Then we can decompose $u$ into a truncated multipole expansion, built from higher-order derivatives of the lattice Green's function defined in § 2.1 and corresponding to the non-vanishing moments, plus a far-field remainder that exhibits the improved decay established in Theorem 2.1. This idea is made precise in the next result (Theorem 2.4), in which $u$ is written as a sum of multipole terms $\sum_{k=1}^N b^{(i,k)} : D^i_{\mathcal S} G_k$ and the remainder decays at the improved rate. It is convenient to have a continuum reformulation of this multipole expansion, to avoid having to work with the discrete Green's function and its discrete derivatives.
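The moments entering the multipole expansion are straightforward to evaluate for a compactly supported force field. The sketch below illustrates this for a dipole-like field on $\mathbb{Z}^2$; the force values are placeholders, chosen so that the zeroth moment vanishes, and are not taken from the paper.

```python
# Minimal sketch: computing the first few moments of a compactly
# supported residual force field f on Z^2, as used to build the
# multipole expansion. `forces` maps lattice sites to force vectors;
# the values are illustrative, chosen so the zeroth moment vanishes.
import numpy as np

forces = {                      # site (l1, l2) -> residual force in R^2
    (1, 0): np.array([0.3, 0.0]),
    (-1, 0): np.array([-0.3, 0.0]),
    (0, 1): np.array([0.0, 0.1]),
    (0, -1): np.array([0.0, -0.1]),
}

def moment(forces, k):
    """k-th moment sum_l f(l) (x) l^{(x)k}, an array of shape (2,)*(k+1)."""
    I = np.zeros((2,) * (k + 1))
    for site, f in forces.items():
        l = np.array(site, dtype=float)
        t = f.copy()
        for _ in range(k):
            t = np.multiply.outer(t, l)   # tensorise with l repeatedly
        I += t
    return I

for k in range(3):
    print(f"moment {k}:\n", moment(forces, k))
# Vanishing low-order moments imply faster far-field decay (Thm. 2.1);
# non-vanishing ones fix the multipole coefficients (Thm. 2.4).
```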
Towards that end, we exploit the connections between continuum and discrete Green's functions and derive higher-order continuum approximations of the discrete Green's function. Specifically, in § 6 we construct a sequence of continuum kernels with the following properties.

Theorem 2.5. There are unique kernels $G_n \in C^\infty(\mathbb{R}^d\setminus\{0\};\mathbb{R}^{N\times N})$ satisfying, for all $\ell\in\Lambda$ and $j, p\in\mathbb{N}_0$, a higher-order expansion of the lattice Green's function, and such that the $G_n$ are positively homogeneous of degree $(2-2n-d)$ if $n\geq1$ or $d\geq3$, while in the case $n = 0$, $d = 2$ we have $G_0(\ell) = A\log|\ell| + \phi(\ell)$, where $A\in\mathbb{R}^{N\times N}$ and $\phi$ is $0$-homogeneous. Furthermore, we find $G_0 = G^{\rm C}$.

The $G_n$, $n > 0$, are higher-order corrections resolving the atomistic-continuum error. We give precise definitions of all the $G_n$ in § 6. We also point out that the $G_n$ are practically computable via Fourier methods; a formula is likewise given in § 6. Returning to the multipole expansion: if $p = 1, 2$ in Theorem 2.4, then one can replace the lattice Green's function $G$ with the continuum Green's function $G^{\rm C}$; for a higher-order continuum description, however, the higher-order $G_n$ must be used. If we additionally use Taylor expansions to convert discrete differences into actual derivatives, we obtain a pure continuum expansion.

The Full Far-Field Expansion

Theorem 2.4 lays out a path for constructing good far-field approximations to a solution $\bar u$ of the atomistic equations $\delta E(\bar u) = 0$. Instead of studying the solutions directly, it suffices to construct an approximate solution $\hat u$ such that the remainder $r$, defined by $\bar u =: \hat u + r$, has small linearised forces $H[r]$ in the far field. For this approach to be useful, it is desirable that $\hat u$ be both easy to understand analytically and practically computable. Our goal is to construct smooth continuum approximations through the addition of successive corrector terms $u^{\rm C}_i$ up to the desired order. We write $\bar u = \hat u_p + r_p = u^{\rm C}_0 + u^{\rm C}_1 + \cdots + u^{\rm C}_p + r_p$ and aim to achieve $|H[r_p](\ell)| \lesssim |\ell|_0^{-d-p-1}$, so that, with Theorem 2.4, the remainder is highly localised around the defect core. The precise statements for both point defects and screw dislocations are given in § 3. The full rigorous construction of the $u^{\rm C}_i$ is given in the proofs in § 7; we do, however, want to outline it formally here.

The first step is a Taylor expansion of the energy around the lattice or, more precisely, of the potential $V$ around zero, giving $E(u) \approx \sum_k \frac{1}{k!}\,\delta^k E(0)[u]^k$. Separating out the linear terms and inserting the ansatz $\bar u = \hat u_p + r_p = u_0 + u_1 + \cdots + u_p + r_p$ gives an equation for the correctors. The next step is to Taylor expand the discrete differences in $H[u_i]$ and $\delta^{k+1}E(0)[\hat u_p]^k$, leading to continuum differential operators. In particular, the leading-order term of $H[u_i]$ is $c_{\rm vol} H^{\rm C}[u_i]$, followed by higher-order differential operators. Formally, $u_i$ is of the order $|\ell|_0^{2-d-i}$, with each derivative or discrete difference adding one order of decay. This allows us to group all the resulting terms into operators $S_i$, based on their order of decay. We thus obtain the hierarchy (15) and can therefore use the PDE to define $u^{\rm C}_i$. When we look at all the details of this construction in § 7, we will see that the $u_i$ used on the right-hand side of (15) have to include the multipole terms of that order, which we can write as $u_i = u^{\rm C}_i + \sum_{k=1}^N b^{(i,k)} : D^i_{\mathcal S} G_k$. In particular, it is then important to know that, with the help of Theorem 2.6, we can use their smooth continuum variant instead of the discrete one. The $S_i$ in (15) are in general nonlinear, higher-order differential operators. Crucially though, $S_i$ only depends on the terms $u_0, \dots$
, $u_{i-1}$ and not on $u_i$ itself. That means that the equation defining $u_i$ is always the same second-order, elliptic continuum linear elasticity equation, $H^{\rm C}[u^{\rm C}_i] = S_i(u_0,\dots,u_{i-1})$. With $u^{\rm C}_i$ defined this way, most of the terms on the right-hand side of (14) cancel, and a precise estimate of the higher-order errors gives the desired estimate for $H[r_{p-1}]$. In § 7, we give a precise definition of the $S_i$ in the corrector equation (15). The grouping of the terms into the different $S_i$ depends on the dimension $d$. As an example, for $d = 2$, the first three $S_i$ are built from the Cauchy-Born energy density $W(\mathsf{A})$, $\mathsf{A}\in\mathbb{R}^{N\times d}$, and from $H^{\rm SG}$, a linear differential operator describing a strain-gradient term in linear elasticity; the grouping changes accordingly if, on the other hand, $d = 3$.

Far-Field Expansion for Crystalline Defects

We now demonstrate how our general structural results can be directly applied to obtain precise characterisations of the discrete elastic far fields surrounding crystalline defects. Our results apply directly to point defects and screw dislocations; edge and mixed-mode dislocations require additional ideas due to their more non-trivial lattice topology and will therefore not be discussed here. In addition, we briefly outline how these characterisations give rise to novel algorithms for simulating such defects.

Point defects

We consider point defects first, and briefly review the setting of [EOS16] to motivate the formulation of our main result in this context. First, we assume that the point defect has a reference configuration $\Lambda^{\rm def}\subset\mathbb{R}^d$, $d\geq2$, which is locally finite and homogeneous outside some defect radius $R^{\rm def}$, meaning $\Lambda^{\rm def}\setminus B_{R^{\rm def}} = \Lambda\setminus B_{R^{\rm def}}$. Let $\mathcal{R}_\ell$ denote a finite interaction range for each site $\ell\in\Lambda^{\rm def}$ and assume that there is a family of site energies $V_\ell\in C^K(\mathbb{R}^{d\mathcal{R}_\ell})$, $\ell\in\Lambda^{\rm def}$. Moreover, assume $N = d$ and that there is a homogeneous interaction range $\mathcal{R}$ and site energy $V\in C^K(\mathbb{R}^{d\mathcal{R}})$ for all sites of $\Lambda$, with $V_\ell = V$ and $\mathcal{R}_\ell = \mathcal{R}$ outside the defect region. The potential energies under displacements $u : \Lambda^{\rm def}\to\mathbb{R}^d$ and $u : \Lambda\to\mathbb{R}^d$ are then, respectively, given by the corresponding site-energy sums. With these definitions, $E^{\rm def}$ is well defined on $\mathcal{H}^{\rm c}$ and has a unique continuous extension to $\mathcal{H}^1$. Allowing $\Lambda^{\rm def}\neq\Lambda$ inside $B_{R^{\rm def}}$ admits defects such as vacancies and interstitials, while allowing inhomogeneity of the $V_\ell$ admits impurities and foreign interstitials. A point defect can then be thought of as a finite-energy equilibrium of $E^{\rm def}$, that is, an equilibrium displacement $\bar u^{\rm def}\in\mathcal{H}^1(\Lambda^{\rm def})$ such that $\delta E^{\rm def}(\bar u^{\rm def}) = 0$ (22). A notationally convenient approach is to simply project $\bar u^{\rm def}$ to the homogeneous lattice; that is, we define $\bar u : \Lambda\to\mathbb{R}^d$ accordingly. This is of course only one of many possible projections, made purely for the sake of notational convenience, and our subsequent results are essentially independent of how this projection is performed. Most importantly, because $\bar u = \bar u^{\rm def}$ outside the defect core, we obtain that $\delta E(\bar u)(\ell) = 0$ for $|\ell|$ large enough. Here $\delta_\ell(\ell') := \delta_{\ell\ell'}$. With a small amount of additional work one can in fact show that the residual forces $H[\bar u]$ decay rapidly, and this motivates the setup for our next result (Theorem 3.1), in which the $u^{\rm C}_i$ satisfy the corrector PDEs of § 2.3.

Remark 3.2. In particular, up to order $p = d$ we have a pure multipole expansion; effects from the nonlinearity and from higher-order derivatives only become noticeable in terms beyond that.

Remark 3.3. In the point defect case it is likely possible to somewhat reduce the number of logarithms in the estimate at all orders. We do not explore this in detail, but we point out the log-free estimate for low orders, which follows directly from (23) and estimates on the lattice Green's function based on Theorem 2.5.
To be precise, for p < d we havē Screw dislocations Now let us consider screw dislocations. Again, our modelling follows the setup in [EOS16] and [BBO19]. We consider a straight screw dislocation with periodic behaviour along the dislocation line so that we can project to the lattice to a two-dimensional lattice on the normal plane to describe the behaviour. Hence we have d = 2 and N = 3 meaning u : Λ ⊂ R 2 → R 3 , though N is left arbitrary in the following to include for example the case in [BBO19] where N = 1. Again we have a finite interaction range R and a site energy V ∈ C K (R dR ). The potential energy is then formally given by However that sum will usually not converge so we follow [EOS16] and consider the energy differences instead, where u CLE is the continuum linear elasticity solution. More precisely, u CLE solves where b ∈ R 3 , b e 3 , is the Burgers vector of the screw dislocation,x ∈ R 2 is the reference position of the dislocation core and Γ := {x ∈ R 2 : x 2 = x 2 , x 1 ≥x 1 } a branch-cut chosen such that Γ ∩ Λ = ∅. We want to point out that the precise positioning ofx is not crucial and does not have physical meaning as the difference between two shifted solutions is in the energy space H 1 . Equation (29) was missed in [EOS16] but is in fact crucial for the results there to be true. It encodes the assumption that the system has zero net force and thus avoids spurious solutions of the type g(x) = u CLE (x) + G C (x −x). However, the standard construction of a solution u CLE , which can be found, e.g., in the book by Hirth and Lothe [HL82], already takes it into account. Therefore, the results of [EOS16] and later works that build on it remain correct provided such a solution u CLE is employed. The following observation links it to the atomistic setting. Therefore (29) is equivalent to either of these sums vanishing. Indeed, the property δE(u)(ℓ) = 0 is heavily used in [EOS16] and we will use their results here. With the definition (24) the energy E is then well defined on u CLE + H c and has a unique continuous extension to u CLE + H 1 . And with V ∈ C K we also find E ∈ C K , see [EOS16, Lemma 3]. (30) and such that the u C i satisfy the PDEs (59) and u C 0 = u CLE . Furthermore, the remainder r p+1 satisfies the estimate Remark 3.6. Contrary to the point defect none of these terms are expected to vanish in general, except for a few special cases which are explored in [BBO19]. In particular, the regularity assumption cannot be weakened as in the point defect case. Indeed, our general theory without looking at any special cases requires K ≥ J +2+⌊ p d−1 ⌋. As far as our proof goes the number of logarithms is optimal for d = 2, though probably not for higher dimension. We also expect this to be generic for the theorem itself as indeed the u C i will (in general) contain higher and higher logarithmic terms. However, in special cases these logarithmic terms in the u C i do not necessarily always appear, as explored in [BBO19]. Accelerated Convergence of Cell Problems An immediate application of the defect expansions of Theorems 3.1 and 3.5 is that they suggest a novel family of numerical schemes that exploit these expansions to accelerate the simulation of crystalline defects. Here, we will only sketch one such scheme, but leave a more detailed analysis for future work. Consider the equilibration of a point defect or a screw dislocation near the origin as in Theorems 3.1 and 3.5. 
We define a family of restricted displacement spaces in which atoms are clamped in their reference configurations outside a ball of radius $R$. We can then approximate (22), (30) by the Galerkin projection onto these spaces. Under suitable stability conditions it is shown in [EOS16] that, for $R$ sufficiently large, this approximation converges with an algebraic rate in $R$, up to a factor $\log^p R$ with $p \in \{0,1\}$. This convergence is an almost immediate corollary of the decay estimate $|Dr_1(\ell)| \lesssim |\ell|_0^{-d}\log^p|\ell|_0$. (For energy minima, [EOS16] can be applied directly, while for saddle points the analysis of [BDO20] can be readily adapted.) Our aim now is to accelerate this relatively slow convergence by providing an improved far-field boundary condition. The overarching principle is 1. to replace the naive far-field predictor with the higher-order predictor $\hat u_p$, 2. and to enlarge the admissible corrector space with the multipole moments. That is, the corrector displacement is now parametrised by its values in the computational domain $B_R\cap\Lambda$ and by the coefficients of the multipole terms. We can then consider the pure Galerkin approximation scheme (35). The arguments of [EOS16] leading to (34) are generic Galerkin approximation arguments leveraging the strong stability condition. They can be followed verbatim up to the intermediate result (Céa's Lemma). The existence of $\bar v_R$ is implicitly guaranteed through an application of the inverse function theorem, due to the fact that the right-hand side in this estimate approaches zero as $R\to\infty$. To estimate the right-hand side, we can insert the exact tensors $b^{(i,k)}$ from the solution representation of Theorems 3.1 and 3.5 into $v_R$, in order to obtain $D\bar u - D\hat u_p - Dv_R = Dr_{p+1} - Dw_R$, where $r_{p+1}$ is the core remainder term, and hence a bound on the best-approximation error. We can now define $w_R$ to be a suitable truncation of $r_{p+1}$ to the computational domain $B_R$. The details are given in [EOS16, Thm. 2] and immediately yield the following result.

Theorem 3.7. Suppose that $\bar u$ is a strongly stable solution of (22) or (30), that is, there exists a stability constant $c_0 > 0$ such that the second variation is coercive; then, for $R$ sufficiently large, there also exists a solution $\bar u_R \in \mathcal{U}_R$ of (35).

Remark 3.8. The scheme (35) cannot be implemented as is, since the energy-difference functional cannot be evaluated for a displacement with infinite range. However, this highly idealised scheme is of immense theoretical value in that it highlights what could potentially be achieved if this challenge can be overcome. Any practical scheme will necessarily have to engage in the approximate evaluation of the multipole tensors $b^{(i)}$, for which there are several promising possibilities that we will explore in separate works. A second challenge for practical implementations is the fast and accurate evaluation of the higher-order far-field predictor $\hat u_p$. All of these approximations require suitably controlled approximations to $E$, somewhat analogous to quadrature rules or other kinds of variational crimes in the classical numerical analysis context.

Conclusions and Outlook

The main result of the present paper is that the elastic field surrounding a defect in a crystalline solid may be represented to within arbitrary accuracy with three "low-dimensional" ingredients: 1. a series of continuum fields specified through PDEs; 2. a series of multipole moments; and 3. a highly localised discrete core correction. More specifically, we have shown that by increasing the accuracy of components (1) and (2), the core correction (3) becomes increasingly local.
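The effect of an improved far-field boundary condition can be illustrated on a toy problem. The sketch below solves a force-dipole problem for the discrete Laplacian on a finite box, clamping the boundary either to zero (a naive predictor) or to a continuum dipole predictor. The operator, dipole strength, and test site are illustrative stand-ins for the nonlinear atomistic setting, not the scheme (35) itself.

```python
# Minimal sketch: a toy defect cell problem (discrete Laplacian on Z^2
# with a force dipole at the origin) on a box of radius R, clamping
# boundary atoms either to zero or to a continuum dipole predictor.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def solve_box(R, predictor):
    n = 2 * R + 1
    idx = lambda i, j: (i + R) * n + (j + R)
    A, b = sp.lil_matrix((n * n, n * n)), np.zeros(n * n)
    for i in range(-R, R + 1):
        for j in range(-R, R + 1):
            k = idx(i, j)
            if max(abs(i), abs(j)) == R:          # clamped boundary atom
                A[k, k], b[k] = 1.0, predictor(i, j)
                continue
            A[k, k] = 4.0                          # -Delta, 5-point stencil
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                A[k, idx(i + di, j + dj)] = -1.0
    b[idx(0, 0)] += 1.0                            # force dipole: +1 at 0,
    b[idx(1, 0)] -= 1.0                            # -1 at e1 (zero net force)
    return spla.spsolve(A.tocsr(), b), idx

def dipole_predictor(i, j):
    # Far field of the dipole solution ~ d/dx1 of the 2D continuum
    # Green's function -(1/2 pi) log|x| (an illustrative choice).
    r2 = i * i + j * j
    return -i / (2.0 * np.pi * r2) if r2 > 0 else 0.0

u_ref, idx_ref = solve_box(96, dipole_predictor)   # reference solution
for R in (8, 16, 32):
    for name, pred in (("zero", lambda i, j: 0.0),
                       ("dipole", dipole_predictor)):
        u, idx = solve_box(R, pred)
        err = abs(u[idx(2, 1)] - u_ref[idx_ref(2, 1)])
        print(f"R={R:3d}  {name:6s} boundary: |error at (2,1)| = {err:.2e}")
```

In this toy setting the dipole-clamped boundary should converge noticeably faster in $R$ than the zero-clamped one, mirroring the principle behind the higher-order predictor $\hat u_p$.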
While there is a certain amount of interaction between the components (1) and (2), there is no coupled problem that needs to be solved at any point. Indeed, both series are obtained sequentially order by order, and the PDE defining the term of a given order in (1) only depends on lower-order terms of the multipole expansion, not on the multipole term of the same order. Our presentation here is restricted to simple lattices and a limited class of defects. Generalisations do require additional technical difficulties to be overcome, but there appears to be no fundamental limitation to extending the method and the results to multi-lattices and a range of other defects in some form. To conclude, we briefly discuss some of these possibilities and limitations.

• Edge and mixed dislocations: Edge and mixed dislocations are technically more challenging, as they create a mismatch that affects the two-dimensional reference lattice. To leading order it suffices to correct the CLE solution with an ad hoc transformation $u_0 = u^{\rm CLE}\circ\xi^{-1}$, see [EOS16], though the analysis also becomes a bit more technical. For higher orders, however, more care needs to be taken, not just in the choice of $\xi$ but also in the effect such transformations have on the PDEs. Furthermore, many arguments have additional technical complications due to the need for slip operators to describe the elastic strain.

• Cracks: The full extension of our results to crack geometries appears considerably more challenging, as the homogeneous lattice is no longer a particularly good global reference. Thus, already the discussion of the Green's function is significantly more involved [BHO19]. Formally, we still expect our overall strategy to apply, and it is interesting to note that, due to the different orders of decay, the first higher-order corrector is already needed to even define the model in the first place, rather than "just" improve on it.

• Energy differences for defect transitions: The precise characterisation of the far-field strain in terms of defect continuum fields and a multipole expansion suggests that some level of cancellation in energy differences, e.g. between a saddle point and an energy minimum as observed in [BDO20], could be precisely tracked and characterised. Moreover, such results may then also explain the improved convergence rates of numerical schemes for energy differences that are often seen in practice.

• Convergence of numerical schemes: A consequence of our analysis with direct practical value is the construction of improved approximate cell problems that leverage the explicit low-dimensional structure of the defect fields we have identified. We have given a hint at how this might be achieved in Theorem 3.7, but much additional work is needed to formulate practical schemes along these lines. The same line of work can also lead to robust new numerical schemes, and to the analysis of existing schemes, for the defect dipole tensor specifically (also called the elastic dipole tensor). Such schemes are of significant, ongoing interest in defect physics (e.g., [NMP+16], [DM18]). In particular, our approach to these terms naturally includes the anisotropic case as well as extensions to higher multipole tensors.

• Higher-order dislocation dynamics models: A further consequence is that we hope the expansion of the far-field strain obtained here allows us to go beyond traditional dislocation dynamics approaches, which rely upon the leading-order CLE description of these defects.
By using the structure of our expansion to study the effect of applied stress fields on defect cores, we can provide more detailed atomistic input into such models. This suggests a route to better connect dislocation dynamics and atomistic approaches, bridging the scale and language gaps between these two simulation methodologies.

• Dynamics and statistical mechanics: Statistical mechanics models, such as free energies or transition rates, could in principle benefit from an analysis within our new framework. For example, in the harmonic approximation, the analysis of [BDO20] could be taken as a starting point. It is far less clear whether more nonlinear models could also benefit, and it appears certain that finding similar coarse-grained descriptions of the full dynamics of a crystalline far field would require very different ideas.

Proofs - Decay Estimates

In this section we prove Theorem 2.1 and Theorem 2.4. But first, let us cite the following lemma.

Proof. For $\alpha = 0$ this is [EOS16, Corollary 1], and the addition of logarithmic terms is trivial: one can construct $g$ in exactly the same way and carry the logarithmic terms through by including them in the weighted norms used in the proof.

The decay estimate for the lowest order found in [EOS16] is indeed based on Lemma 5.1, which then allows for a partial summation in the Green's function representation sum of the remainder given in equation (37). The key idea for the higher-order decay estimates is that Lemma 5.1 can be extended to higher orders based on vanishing higher-order moments, as long as one only attempts to write symmetric parts in divergence form; see Proposition 5.4 below. We then use this higher-order divergence form, with a more precise higher-order partial summation in specific parts of the lattice in the Green's function representation sum (37), to arrive at the new decay estimates. We begin by establishing two further auxiliary results.

Lemma 5.2. Given a linearly independent set of vectors $\mathcal{S}\subset\mathbb{R}^d$ (which must necessarily have $\#\mathcal{S} = k \leq d$), the set of tensors $\{\sigma^{\odot} : \sigma\in\mathcal{S}^k\}$ is linearly independent in $(\mathbb{R}^d)^{\otimes k}$. Furthermore, $\sigma^{\odot} = \rho^{\odot}$ with $\sigma,\rho\in\mathcal{S}^k$ if and only if $\rho = \pi(\sigma)$ for some permutation $\pi\in S_k$.

Proof. Although the proof is straightforward, and likely well known, we present it for convenience. Define a scalar product $(\cdot,\cdot)_{\mathcal S}$ on $\mathbb{R}^d$ for which $\mathcal S$ forms part of an orthonormal basis. This induces a scalar product on the space of tensors by multi-linear extension of $(\sigma^{\otimes},\rho^{\otimes})_{\mathcal S} := \prod_{j=1}^k (\sigma^{(j)},\rho^{(j)})_{\mathcal S}$. Consider the scalar product $(\sigma^{\odot},\rho^{\odot})_{\mathcal S}$. If $\rho \neq \pi(\sigma)$ for all $\pi\in S_k$, then for each $\pi\in S_k$ there exists an index $i$ such that $\sigma^{(i)} \neq \rho^{(\pi(i))}$, and consequently $(\sigma^{(i)},\rho^{(\pi(i))})_{\mathcal S} = 0$. This entails that each of the products summed in the expression above is zero. Since $\rho^{\odot}$ and $\sigma^{\odot}$ are both non-zero and orthogonal in this inner product, they cannot be equal. The same argument entails that if $\sigma^{\odot} = \rho^{\odot}$, there must exist $\pi\in S_k$ such that $\rho = \pi(\sigma)$. On the other hand, if $\rho = \pi(\sigma)$, the two symmetric tensors coincide. We deduce that $\sigma^{\odot}$ and $\rho^{\odot}$ are either identical (if and only if $\sigma$ is a permutation of $\rho$) or mutually orthogonal in the $\mathcal S$-inner product, which implies the stated result.

Lemma 5.3. Let $\sigma\in\Lambda^p$; then the summation identity used below holds.

Proof. This identity follows from two observations: first, the summand can be rewritten up to a polynomial $r$ in $\ell$ of degree at most $j-2$; secondly, the corresponding sum vanishes for any such polynomial. A simple induction then shows the result.
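Lemma 5.2 is easy to verify numerically in small dimensions. The following sketch checks the linear independence of the symmetrised tensor products for an illustrative choice of $\mathcal{S}\subset\mathbb{R}^3$ and $k = 2$; the set and dimensions are assumptions made for the demonstration only.

```python
# Minimal numeric sanity check of Lemma 5.2: for a linearly independent
# set S in R^d, the symmetrised tensor products over multisets of S are
# linearly independent. The choice of S, d and k is illustrative.
import itertools
import math
import numpy as np

d, k = 3, 2
S = [np.array([1.0, 0.0, 0.0]),
     np.array([1.0, 1.0, 0.0]),
     np.array([0.0, 1.0, 1.0])]   # a linearly independent set in R^3

def sym_tensor(vecs):
    """Symmetrised product (1/k!) sum_pi v_{pi(1)} (x) ... (x) v_{pi(k)}."""
    T = np.zeros((d,) * len(vecs))
    for perm in itertools.permutations(vecs):
        P = perm[0]
        for v in perm[1:]:
            P = np.multiply.outer(P, v)
        T += P
    return T / math.factorial(len(vecs))

# One representative per multiset of S; permuted tuples give equal tensors.
tensors = [sym_tensor(c).ravel()
           for c in itertools.combinations_with_replacement(S, k)]
rank = np.linalg.matrix_rank(np.stack(tensors))
print(f"{len(tensors)} symmetric tensors, rank = {rank}")  # equal => independent
```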
We can now turn towards a crucial result converting a discrete force field into higher-order divergence form. We will slightly abuse notation in the following and let the symmetric part $\operatorname{sym} A$ of a tensor $A \in \mathbb{R}^N \otimes (\mathbb{R}^{\mathcal S})^{\otimes k}$ denote the symmetric part in the later indices only, namely $\operatorname{sym}(A)_{\cdot,\sigma} := \frac{1}{k!}\sum_{\pi\in S_k} A_{\cdot,\pi(\sigma)}$. For higher-order tensor fields we follow the usual convention from the continuum that $\operatorname{Div}$ always applies to the last component. This conversion is the content of Proposition 5.4.

Proof. The case $p = 1$ is covered by Lemma 5.1. By induction, assume the statement is true for some $p\in\mathbb{N}$, and that we now have $q - p > d$ as well as $p+1$ vanishing moments. In particular, we have already constructed the desired $f^{(0)},\dots,f^{(p)}$. Now $|\operatorname{sym} f^{(p)}(\ell)| \lesssim |\ell|_0^{-(q-p)}$ and $q - p > d$. We claim that (36) holds for all $j = 0,\dots,p$. This is clear for $j = 0$; by induction, if it is true for $j < p$, then the claim follows for $j+1$ by partial summation. Note that the decay of $f^{(j+1)}$ is needed not just for the sums to exist but also for the partial summation to be valid. This establishes (36). Applying Lemma 5.3 with $j = p$, and using that $I_p = 0$, we obtain that the relevant moment of $\operatorname{sym} f^{(p)}$ vanishes. According to Lemma 5.2, the set of tensors $\operatorname{sym}\sigma^{\otimes}$ is linearly independent; additionally, $A_{\cdot,\sigma} = A_{\cdot,\rho}$ whenever $\rho = \pi(\sigma)$ for some permutation $\pi$. Therefore $\sum_\ell \operatorname{sym} f^{(p)}(\ell) = 0$, and we can apply Lemma 5.1 to find $f^{(p+1)}$ with the desired properties.

To prepare for the proof of Theorem 2.1, we fix some $u\in\mathcal{H}^1$ and write $f^{(0)} := H[u]$. We extend the lattice Green's function approach developed in [EOS16] to estimate $u$. The Green's function satisfies a representation formula for all $u\in\mathcal{H}^{\rm c}$. As the right-hand side is not invariant under adding a constant to $u$, this cannot directly translate to general $u\in\mathcal{H}^1$, but the situation is better for derivatives.

Lemma 5.5. Let $u\in\mathcal{H}^1$ and assume that $|f^{(0)}(\ell)| = |H[u](\ell)| \lesssim |\ell|_0^{-\gamma}$ for some $\gamma > 1$. Then, for all $\rho = (\rho_1,\dots,\rho_j)\in\mathcal{R}^j$, $j\geq1$, the representation (37) holds.

Proof. Due to the decay assumption on $f^{(0)}$, the sum converges absolutely. Furthermore, for $u\in\mathcal{H}^{\rm c}$ the statement is clearly true. The right-hand side is a well-defined, continuous, linear functional on $\mathcal{H}^1$. The result is straightforward when $d\geq3$ or $j\geq2$, as in this case the left-hand side is also a continuous, linear functional, because $|D_\rho G_k|\in\ell^2(\Lambda)$ if and only if $4 < d + 2j$. To include the case $d = 2$ and $j = 1$ we have to be a bit more careful. Note that for $u\in\mathcal{H}^{\rm c}$ one can sum by parts; as the right-hand side is then a well-defined, continuous, linear functional on $\mathcal{H}^1$, the resulting identity extends to all $u\in\mathcal{H}^1$. If we now have a $u\in\mathcal{H}^1$ where additionally $|f^{(0)}| \lesssim |\ell|_0^{-\gamma}$, then the sum $\sum_{z\in\Lambda} f^{(0)}(z)\,D_\rho G_k(\ell - z)$ converges absolutely. Consider a smooth cutoff function $\eta_R : \mathbb{R}^d\to[0,1]$ with $\eta_R(z) = 1$ for $|z|\leq R$ and $\eta_R(z) = 0$ for $|z|\geq 2R$, as well as $|\nabla^k\eta_R| \lesssim R^{-k}$ for $k = 1,\dots,j$. With that, we see that the truncated representations converge to the full one, as the remaining term can be suitably estimated. That is, (37) holds for all $u\in\mathcal{H}^1$ with $|f^{(0)}(\ell)| \lesssim |\ell|_0^{-\gamma}$.

Proof of Theorem 2.1. We now use the Green's function representation (37) to estimate $D_\rho u(\ell)$. Let us consider $|f^{(0)}| \lesssim |\ell|_0^{-d-p}\log^\alpha|\ell|_0$, where we include both the case $\alpha\in\mathbb{N}_0$ and $\alpha < -1$, so as to cover Remark 2.2, and let us assume there are $p$ vanishing moments. Although our argument is related to the lower-order equivalent in [EOS16], it is technically more complex, as both the higher derivatives and the higher-order divergence form lead to partial summations which, crucially, have to be performed only on specific and separate parts of the lattice. We will therefore provide full details.
This can be done in a very clean way by splitting the sum in (37) into four regions; see Figure 5 for a visualisation: Region 1 is the far field, where |z| and |ℓ − z| are comparable. Region 2 is the intermediary area, where |z|, |ℓ|, and |ℓ − z| are all comparable. Regions 3 and 4 are the areas around z = 0 and z = ℓ, where either |z| can be small while |ℓ| and |ℓ − z| are comparable, or |z − ℓ| can be small while |ℓ| and |z| are comparable. Inserting estimates for the residual $f^{(0)}$ and the lattice Green's function G, and using this splitting of the sum, indeed gives sharp estimates in absolute value. However, as we will show below, this is only a good estimate if both p = 0 and j = 1, 2. If either p ≥ 1 or j ≥ 3, then the sum in (37) exhibits significant large scale cancellation effects. To get sharp estimates in these cases, we will remove these cancelling terms via separate partial summations in regions 3 and 4. The required discrete derivatives are directly available in the case j ≥ 3 or are obtained with the help of Proposition 5.4. To avoid discrete boundary terms, we will not split the four regions sharply but use smooth cutoff functions. The boundary terms in the partial summation are then spread out and can be treated like terms in region 2.

We first estimate the far-field term $T_1$, where the logarithmic term is estimated trivially by $\log^\alpha|z|_0 \le \log^\alpha|\ell|_0$ for negative α, while for α ∈ $\mathbb{N}_0$ the estimate instead follows from partial integrations of the resulting one-dimensional radial integral $\int_{|\ell|}^{\infty} r^{1-j-d-p} \log^\alpha r \, dr$. The intermediary area is even more direct, as we can just estimate the functions uniformly and multiply by the number of lattice points in the area. Next, $T_3$ can be estimated directly; hence, we have $|T_3| \lesssim |\ell|_0^{2-d-j}$ if either p > 0 or α < −1. If, on the other hand, p = 0 and α ∈ $\mathbb{N}_0$, then we obtain $|T_3| \lesssim |\ell|_0^{2-d-j} \log^{\alpha+1}|\ell|$. Finally, for $T_4$ an analogous direct estimate holds. Putting all four estimates together, this completes the proof in the special case where both p = 0 and j = 1, 2.

For p ≥ 1, we need a better estimate on $T_3$. We choose $f^{(m)}$ according to Proposition 5.4. We claim that (40) holds for 0 ≤ m ≤ p. Let us prove (40) by induction over m. Clearly, it is true for m = 0, since the second term on the right-hand side is identical to the left-hand side. Given its validity for some m with m + 1 ≤ p, we now employ Proposition 5.4 and summation by parts to obtain the statement for m + 1. The last term is concentrated in the annulus where $D_\tau \eta$ is non-zero and can be estimated accordingly, which completes the proof of (40). Using (40) with m = p, we find $|T_3| \lesssim |\ell|_0^{2-j-d-p}$ for α < −1 and $|T_3| \lesssim |\ell|_0^{2-j-d-p} \log^{\alpha+1}|\ell|_0$ for α ∈ $\mathbb{N}_0$. This finishes the proof for j = 1, 2 and arbitrary p.

Finally, for j ≥ 3, we need a better estimate for $T_4$. In this case, let us split the difference directions as ρ = (σ, τ) with σ ∈ $\mathcal{R}^2$. Note that if at least one discrete derivative falls on $\eta_\ell$, the estimate is similar to $T_2$, as |z|, |ℓ|, |ℓ − z| are then comparable for all non-zero terms. We thus get the desired bound. Note that we freely used the estimates of $|D_S^{j-2} f^{(0)}|$ for $|D_{-S}^{j-2} f^{(0)}|$, as S spans the lattice and thus these estimates are equivalent.

Theorem 2.4 is almost an immediate consequence of Theorem 2.1. The only additional ingredient we need for the proof is the following lemma.

Proof. For any b one calculates the moments directly. Note that $H[D_\rho G_k] = D_\rho \delta_0 e_k$. In particular, there are no summability problems, as this expression has compact support.
With that in mind we can use Lemma 5.3 to calculate the relevant moments. Therefore, it is sufficient to show that the linear map T is a bijection. By dimensionality (recall that S is chosen to be a basis), this is equivalent to T being one-to-one, which follows from Lemma 5.2.

Proof of Theorem 2.4. Write the multipole terms as sums of the $b^{(i,k)} : D_S^i G_k$, and choose b according to Lemma 5.6. As the moments are linear, the result then follows directly from Theorem 2.1.

Note that even though the choice of b in Lemma 5.6 is unique, that does not necessarily mean that the choice of b for Theorem 2.4 is unique. Indeed, it can happen that for certain b the sum $\sum_{k=1}^{N} b^{(i,k)} : D_S^i G_k$ exhibits sufficient cancellation to be part of the error estimate, for example if the coefficients b correspond to a discrete approximation of the CLE equation.

Decay estimates for the far-field continuum equations

We also want to have decay estimates for the equations (15) introduced in § 2.3. These can be obtained along similar lines to the discrete case. However, we are purely interested in the far field, which simplifies things a bit and avoids a few additional complications specific to the continuum case.

Remark 5.8. It is important to point out that the additional logarithm in the result is a generic aspect of the problem and not just a limitation of our proof. To see this, consider α = 0 and p = 1. Specifically, let us take an f with f = div g, $|g(\ell)| \lesssim |\ell|^{-d}$, and $|f(\ell)| \lesssim |\ell|^{-d-1}$. Then our proof below actually shows the corresponding estimate with a logarithm. Even for something as simple as $g(\ell) := E_{11} |\ell|^{-d}$ one directly gets a logarithm from the integral. That means the logarithmic term naturally comes from summing the stresses (for p = 1) around the defect core. Of course there are special cases where no logarithmic term appears. For p = 1, α = 0 that would be any setting with enough cancellation that the boundary integrals of g are controlled uniformly. For example, the first corrector equation in [BBO19] satisfies $\int_{\partial B_R(0)} g(z) \, dz = 0$, and indeed there was no logarithmic term.

Sketch of the Proof. As the equation only needs to be satisfied outside of $B_{R_0}$, we can assume without loss of generality that f ∈ $C^\infty(\mathbb{R}^d; \mathbb{R}^N)$, by multiplying f with an appropriate cutoff function. Our ultimate goal will be to define $u := f * G^C$ and follow the same estimates as in the discrete case. The main problem in adapting the discrete proof to the continuum is the need to replicate Lemma 5.1 and Proposition 5.4, which allow conversion to appropriate divergence form for p ≥ 1, and we sketch out the route to the analogues of these results here.

We follow the general idea of the proof in [EOS16], defining $f_0 := f$, $f_{n+1}(\ell) := 3^d f_n(3\ell)$, and a corresponding sequence $g_n$. By direct computation it can be checked that the relation (41) holds, and using the bound on f assumed in the statement and p ≥ 1, one concludes that $f_n \to 0$ uniformly on any $\mathbb{R}^d \setminus B_\delta(0)$. Similarly, we may show that the sequence $g_n$ is uniformly summable over the same sets, and we define $g := \sum_{n=0}^{\infty} g_n$. Using the relation (41), we find f = − div g in $\mathbb{R}^d \setminus \{0\}$. It is also easy to check that g ∈ $C^\infty(\mathbb{R}^d \setminus \{0\})$ and that g satisfies the decay $|g(\ell)| \lesssim |\ell|^{-d-p+1} \log^\alpha |\ell|$ away from zero. In the continuum setting, a possible consequence of this construction is that g may have a singularity at ℓ = 0. However, since we are only interested in the far-field behaviour, we can simply remove any singularity by redefining g on the interior of $B_{R_0}(0)$ such that g ∈ $C^\infty(\mathbb{R}^d; \mathbb{R}^{N \times d})$, and then redefine f := div g. This operation does not affect the behaviour outside $B_{R_0}(0)$, and we have therefore recovered the equivalent of Lemma 5.1.
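To make the logarithm in Remark 5.8 concrete, here is a one-line computation; it is an illustrative reconstruction (the remark does not display its integral explicitly), but the mechanism is exactly the one described:

```latex
% For g(\ell) = E_{11}\,|\ell|^{-d}, passing to polar coordinates over an
% annulus around the core gives
\int_{1 \le |z| \le R} |z|^{-d}\, dz
  \;=\; |\partial B_1(0)| \int_1^R r^{-d}\, r^{\,d-1}\, dr
  \;=\; |\partial B_1(0)|\, \log R ,
% i.e. exactly one logarithm in the case p = 1, \alpha = 0, unless a
% cancellation such as \int_{\partial B_R(0)} g(z)\, dz = 0 (as in [BBO19])
% removes it.
```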
A similar construction and inductive argument to that used in the proof of Proposition 5.4 allows us to generate the higher-order divergence forms with decay $|\ell|^{-d-p+k} \log^\alpha |\ell|$ for ℓ bounded away from zero. To avoid generating a singularity at ℓ = 0 in this case, we again redefine the functions in $B_{R_0}(0)$, starting at the highest order $f^{(p)}$. With this ability to transform f to (higher-order) divergence form, one can now follow the proof of Theorem 2.1 almost verbatim to estimate the derivatives of $u = f * G^C$ using the split into the four different contributions defined in (39). Note that the singularity of $G^C$ is not a problem, as $G^C$ and its first derivative are locally integrable everywhere, and we only need to take further derivatives of the Green's function in the estimate of $T_3$, which is the part away from that singularity where $G^C$ is smooth. Indeed, in the estimate of $T_4$, we can use the fact that we have assumed estimates for derivatives of f of arbitrary order and only put one derivative on the Green's function instead of two.

The lattice Green's function and series expansion

In this section, we define the lattice Green's function and construct a series expansion in terms of continuum kernels, proving Theorem 2.5. The general strategy is to expand the inverse of the discrete Fourier multiplier corresponding to the lattice operator H as a series of homogeneous terms about k = 0, and then use tools developed by Morrey in [MJ66] to provide integral representations of these terms in real space, which allow us to explicitly characterise the decay and even homogeneity of each term. In the first parts we also follow some ideas that were developed in [MR02] for the simpler scalar case where N = 1.

Fourier transforms

We begin by providing definitions of the Fourier transforms we deploy here. For f : Λ → $\mathbb{R}^n$, we define the Fourier transform $F_a[f] : B \to \mathbb{R}^n$, where B is the Brillouin zone of the lattice, with the inverse transform defined correspondingly. Note that $|B| = (2\pi)^d / c_{\mathrm{vol}}$, where $c_{\mathrm{vol}}$ is the volume of a fundamental cell of Λ. We note the standard properties, which hold if f decays fast enough that the sum converges absolutely. We also use the standard continuum Fourier transform, defined for f ∈ $L^1(\mathbb{R}^d; \mathbb{R}^N)$ and extended to the space of tempered distributions $\mathcal{S}'(\mathbb{R}^d; \mathbb{R}^N)$ by duality, with the usual inverse.

Fourier multiplier series manipulation

Next, we formally carry out a series development of the inverse of the discrete lattice operator Fourier multiplier, which we will subsequently analyse in detail. As before, consider the Hamiltonian H = $\delta^2 E(0)$, given pointwise as before, and write $C_{\sigma\rho} = (C_{\sigma i \rho j})_{ij} \in \mathbb{R}^{N \times N}$. C satisfies the symmetry $C_{i\rho k\sigma} = C_{k\sigma i\rho}$, as a second derivative, and $C_{i\rho k\sigma} = C_{i(-\rho)k(-\sigma)}$ due to equation (2). Then (see [HO12, BS16]) H has a representation as a Fourier multiplier $\hat{H}$, where the matrices $A_\rho$ are implicitly defined through the second identity. As before, we assume lattice stability (4). In reciprocal space this entails that there exists $c_0 > 0$ such that the multiplier is bounded below, where Id is the N × N identity matrix. We note that $\hat{H} \in C^\infty_{\mathrm{per}}(B)$, and by virtue of lattice ellipticity, $\hat{H}$ is strictly positive definite except when k = 0. This entails that the matrix inverse $\hat{H}^{-1}(k)$ exists everywhere in B except at k = 0, and moreover is controlled near k = 0.

We define the homogeneous terms of the expansion of $\hat{H}$ about k = 0 and, re-summing, we may express $\hat{H}^{-1}$ as a series of terms with increasing homogeneity. Explicitly, $A_{2n}(k)$ is defined by a finite sum. This means $A_{2n}$ is 2n-homogeneous.
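A minimal sketch of the resummation step, under the assumption (ours, for illustration) that $\hat H(k) = \sum_{n \ge 1} H_{2n}(k)$ with each $H_{2n}$ homogeneous of degree 2n; the paper's indexing conventions may differ in detail:

```latex
% Neumann-series resummation of the inverse multiplier:
\hat H^{-1}
  = \Bigl( H_2 \bigl(\mathrm{Id} + H_2^{-1} \textstyle\sum_{n \ge 2} H_{2n}\bigr) \Bigr)^{-1}
  = \sum_{m \ge 0} (-1)^m
      \bigl( H_2^{-1} \textstyle\sum_{n \ge 2} H_{2n} \bigr)^{m} H_2^{-1}
  =: \sum_{n \ge -1} A_{2n} ,
% collecting terms of equal homogeneity. Each A_{2n} is then a finite sum of
% products of H_2^{-1} (degree -2) with higher-order factors H_{2m}
% (degree 2m), hence 2n-homogeneous, with leading term A_{-2} = H_2^{-1}.
```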
In cases where the first term in the expansion is a multiple of the identity, i.e. $H_2(k) = h_2(k)\,\mathrm{Id}$, the series simplifies, and yet further simplification can be made if $\hat{H}$ is a multiple of the identity matrix, e.g., in the scalar setting N = 1, which is the case considered in [MR02].

The lattice Green's function and its development at infinity

With the previous series development complete, we may introduce and analyse the lattice Green's function, which acts as an inverse to the lattice operator H. If d ≥ 3, we define the lattice Green's function by direct Fourier inversion, and if d = 2, we use a renormalised definition, where γ is the Euler-Mascheroni constant. To demonstrate that the limit in the latter definition exists, we take 0 < δ < ε and compare the corresponding integrals. In this argument, we use that $\hat{H}^{-1} - A_{-2}$ is bounded in a neighbourhood of k = 0, and the fact that $|1 - e^{i\sigma\ell r}| = O(|\ell| r)$ for all suitably small r. Rearranging, we have established a uniform bound independent of δ, and hence the limit exists.

We now use the series expansion (42) to define a series of functions $G_n$ which approximate G. Note that $A_{2n-2}$ is locally integrable and defines a tempered distribution unless both n = 0 and d = 2. Except in this special case, we therefore define $G_n$ by Fourier inversion of $A_{2n-2}$. In the special case where n = 0 and d = 2 we define $G_0$ in the distributional sense. With this definition, a simple direct calculation also shows the corresponding identity, and indeed $G^C = G_0$.

Alternative representation of $G_n$

We now connect the definitions made above to alternative representations considered by Morrey in [MJ66, chap. 6.2]. These alternative representations will provide us with the ability to directly deduce regularity, decay, and even homogeneity. Furthermore, this representation is promising for computational uses, as only a finite surface integral in Fourier space is required for its evaluation. We begin by setting $P = \lceil \frac{d+2n-1}{2} \rceil$ and $h = 2P + 2 - 2n - d$, which we note satisfies h ≥ 1. Then we define $G_n^M(\ell)$ for ℓ ≠ 0, where Δ is the Laplacian, together with the auxiliary functions $J_l$. We remark that these functions satisfy $J_l'(w) = i J_{l+1}(w)$. We also use the following definition from [MJ66, Def. 6.1.4]. We point out directly, though, that for the purpose of our results here we are only interested in the cases s < 0 and s = 0. So either ϕ is positively homogeneous of a negative degree, or $\varphi(\ell) = \varphi_1(\ell) + A \log|\ell|$, where A is a constant and $\varphi_1$ is 0-homogeneous. With this definition in place, we state the following result.

Lemma 6.2. $G_n^M$ is well-defined, smooth for ℓ ≠ 0, and $G_n^M$ is essentially homogeneous.

Proof. This is part of [MJ66, Thm. 6.2.1]. Note that $A_{2n}$ being matrix-valued does not change the argument. In our setting, the only case that might fail to be positively homogeneous is $G_0^M$ for d = 2. As pointed out above, in that case essentially homogeneous means that $G_0^M(\ell) = \varphi(\ell) + C_1 \log|\ell|$, where ϕ is positively 0-homogeneous and $C_1 \in \mathbb{R}^{N \times N}$.

We will now show that $G_n$ and $G_n^M$ are in fact identical. Let us first give a short motivation for $G_n^M$. The basic idea of the definition (44) is to use the homogeneity of $A_{2n-2}$ to isolate the radial component in the Fourier integral and then integrate it out. Naïvely, this approach leads to an integral of the form $\int_0^\infty r^l e^{irw} \, dr$, which does not exist for any l and real w. The idea behind the P Laplacians is to ensure that the resulting l is non-negative. If one then adds a small imaginary part to w, the integral is indeed finite and can be computed. This leads to the $J_l$.
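Since the text notes that such representations are promising for computational use, the following sketch evaluates the simplest instance numerically: the scalar nearest-neighbour model with N = 1 on the square lattice ℤ² (the setting of [MR02]). The multiplier and the renormalised difference G(ℓ) − G(0) below belong to this toy model, an assumption for illustration, not the paper's general H.

```python
import numpy as np

# Toy model: H_hat(k) = 4 - 2 cos k1 - 2 cos k2 on B = [-pi, pi)^2.
# In d = 2 the Green's function requires renormalisation, so we compute the
# difference G(l) - G(0); its integrand (cos(k.l) - 1)/H_hat(k) stays bounded
# near k = 0, so a plain midpoint rule over the Brillouin zone converges.

def G_diff(ell, n=1024):
    k = ((np.arange(n) + 0.5) / n) * 2 * np.pi - np.pi  # grid avoids k = 0
    K1, K2 = np.meshgrid(k, k, indexing="ij")
    H_hat = 4.0 - 2.0 * np.cos(K1) - 2.0 * np.cos(K2)
    integrand = (np.cos(K1 * ell[0] + K2 * ell[1]) - 1.0) / H_hat
    return integrand.mean()  # mean over B = (1/|B|) * integral

# Far field: G(l) - G(0) ~ -(1/(2*pi)) log|l| + const, reproducing the
# logarithmic behaviour of the leading kernel G_0 = G^C in d = 2.
for r in (8, 16, 32):
    print(r, G_diff(np.array([r, 0])))
```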
Lemma 6.3. On $\mathbb{R}^d \setminus \{0\}$ the distribution $G_n$ is represented by a function, which we will also call $G_n$, and we have $G_n = G_n^M$ on $\mathbb{R}^d \setminus \{0\}$.

Proof. As discussed in the motivation directly before the lemma, we first let ℑ(w) > 0 and l ≥ 0, so that the relevant integrals converge. Take any δ > 0, and let [i, w] be the line segment in the complex plane, for which we note that z ∈ [i, w] implies ℑ(z) > 0. Define $\varphi_0$, which is easily seen to satisfy $(z \varphi_0(z))' = \frac{e^z - 1}{z}$. Using the definition of $J_0$ and the definition and property of $\varphi_0$ just noted, we then find the identity (45), where we also define the renormalisation polynomials $p_h^\delta$. In (45), the first term is the desired integral for l = −1, the second term is a renormalisation of the singular part for r close to 0, and the third term is of lesser importance and will go to zero at the end. We can simplify $p_0^\delta(w)$ further, as the exponential integral $E_1$ satisfies its standard expansion for z ∈ ℂ∖ℝ≤0, where γ is the Euler-Mascheroni constant and [z, ∞] is any contour in ℂ∖ℝ≤0 connecting z to positive real infinity. It follows that $p_0^\delta$ takes an explicit form. Now, defining $p_h^\delta$ for h ≥ 0, we may calculate inductively the corresponding representation. This calculation was restricted to ℑ(w) > 0 to ensure that the integrals converge. But for h ≥ 1 that is true even for real w. So we can now take the limit ℑ(w) → 0 and see that the identity holds for all w ∈ ℝ, δ > 0, and h ≥ 1. Using the definition in (44), we have obtained a representation for $G_n^M$ as a sum of three integrals, which we now inspect in turn. Clearly, the first converges in the sense of tempered distributions. For the second integral, as $p_h^\delta$ is a polynomial of degree h = 2P + 2 − 2n − d, we find $(-\Delta)^P p_h^\delta(\ell\sigma) = 0$, except when d = 2 and n = 0, in which case $(-\Delta)^P p_h^\delta(\ell\sigma) = \gamma + \log \delta$. The integrand in the final integral contains the factor $(-\Delta)^P \delta (i\ell\sigma)^{h+1} \varphi_h(\delta i\ell\sigma)$, which goes to 0 uniformly on compact sets as δ → 0. Since δ was arbitrary, we may pass to this limit, and we do indeed find $G_n = G_n^M$ on $\mathbb{R}^d \setminus \{0\}$.

After having constructed the kernels $G_n$ and having established their properties, we can now prove the expansion of the lattice Green's function.

Proof of Theorem 2.5. We have already established the regularity and homogeneity properties of the $G_n$. In the following, fix j ≥ 0 and ρ = (ρ₁, . . . , ρⱼ) ∈ $\mathcal{R}^j$. We then still need to show the expansion estimate for all ℓ and p. We claim it is in fact sufficient to show the result for any p but with an error that decays two orders less, i.e., (47) for all ℓ. If this estimate holds, then using equation (47) with p + 1 and the triangle inequality entails the full result instead. If 2n − 2 > −d, note that $A_{2n}\eta$ is integrable and has support in B, and the corresponding terms can be compared directly. In all cases we find the desired bound with a remainder $r_m$, as long as m is small enough to ensure that $r_m$ is integrable. The smoothness of η, and the properties of $\hat{H}^{-1}$ and $A_{2n}$ discussed in §6.2, ensure that $r_m$ is smooth away from k = 0 and is bounded by $C|k|^{2p-2m+j}$ in a neighbourhood of k = 0. It follows that $r_m \in L^1$ if 2m ≤ 2p + j + (d − 1), and therefore (48) implies the claimed decay. For $G_n^{1-\eta}$, we observe that $D^\alpha [A_{2n-2}(1 - \eta)]$ is integrable for all sufficiently large |α|, and therefore $G_n^{1-\eta}$ has super-algebraic decay at infinity. The same therefore also holds true for any $D_\rho G_n^{1-\eta}$.

The only remaining claim now is uniqueness. Assume we have two such series of kernels $(G_n)_n$ and $(\tilde{G}_n)_n$. By induction over p ∈ $\mathbb{N}_0$, let us assume we already know that $G_n = \tilde{G}_n$ for all n < p; we need to show that $G_p = \tilde{G}_p$. First exclude the case where both p = 0 and d = 2. Then we know that $G_p$ and $\tilde{G}_p$ are positively homogeneous of order (2 − 2p − d). Using both estimates of order p without any derivatives, we obtain a bound on the difference for all ℓ ∈ Λ.
Combined with the homogeneity, we thus control the difference on rays. Given any ε > 0, we can therefore find an R > 0 such that (49) holds for all x ∈ { ℓ/|ℓ| : ℓ ∈ Λ, |ℓ| > R }. As this set is dense in $\partial B_1(0)$ and both functions are continuous, (49) holds for all x ∈ $\partial B_1(0)$. As ε > 0 was arbitrary, we thus have $G_p = \tilde{G}_p$ on $\partial B_1(0)$ and, by homogeneity, on all of $\mathbb{R}^d \setminus \{0\}$.

In the case where p = 0 and d = 2 the argument is only slightly more complex. We write $G_0 = A \log|\ell| + \varphi$ and $\tilde{G}_0 = \tilde{A} \log|\ell| + \tilde{\varphi}$ with A, $\tilde{A} \in \mathbb{R}^{N \times N}$ and ϕ, $\tilde{\varphi}$ both 0-homogeneous. Then the same difference bound holds for all ℓ ∈ Λ. As ϕ and $\tilde{\varphi}$ are bounded, we can divide by log|ℓ| and send |ℓ| → ∞ to see that A = $\tilde{A}$. The identity ϕ = $\tilde{\varphi}$ then follows in the same way as before.

We can now rewrite the truncated multipole expansion of Theorem 2.4 in continuum terms.

Proof. This is an immediate consequence of the expansion of the lattice Green's function from Theorem 2.5 and a Taylor expansion of the discrete differences in $D_\rho G_n$.

As a consequence we obtain Theorem 2.6.

Proof of Theorem 2.6. The result follows by combining Theorem 2.4 and Lemma 6.4.

Proofs - Far Field Expansions for Crystalline Defects

In this section we prove the results of § 3. Before we get to the main results, let us start with a preliminary proof.

Proof of Proposition 3.4. δE(u)(ℓ) is summable according to [EOS16, Lemma 10]. The same then holds true for H[u](ℓ). The idea of the proof is to use a cutoff $\eta_R$ and sum by parts so that only the far-away annulus $B_{2R} \setminus B_R$ contributes. Here, the linearisation and discretisation errors are small, and we can go from the nonlinear forces to the linearised forces and all the way to the continuum forces. So, consider a smooth cutoff function $\eta_R : \mathbb{R}^d \to [0, 1]$ with $\eta_R(z) = 1$ for |z| ≤ R and $\eta_R(z) = 0$ for |z| ≥ 2R, as well as $|\nabla \eta_R| \lesssim R^{-1}$. We can then pass from the nonlinear to the continuum expression, as the linearisation error sums to $O(R^{-1})$, and furthermore the discretisation error is also $O(R^{-1})$.

Now let us come to the main topic of this section, the proofs of Theorem 3.1 and Theorem 3.5. At the outset, we recall from the setting discussed in §2 and §3 that we assume the site potentials (and hence the potential energy functional) are of class $C^K$, the number of derivatives we want to estimate is J ≥ 2, and the dimension of the problem is d. We now prove the expansion and the error estimates by induction over p ≥ 0, as long as the stated restrictions on p hold in the point defect case and in the case of the screw dislocation, where we always have d = 2. We also note that all results only need to be proven for |ℓ| ≥ R for some sufficiently large R.

The case p = 0

The case p = 0 is a consequence of results in [EOS16] for both the point defect case and the case of the screw dislocation. To be precise, it follows from the assumptions of both Theorem 3.1 and Theorem 3.5 that J ≤ K − 2. In this setting, we may apply Theorem 1 in [EOS16], which gives the stated decay of the solution ū in the point defect case, (51). In the screw dislocation case, we may similarly apply Theorem 5 in [EOS16], giving (52). In order to combine and summarise these results, we define $u_0 := 0$ for the point defect and $u_0 := u^{\mathrm{CLE}}$ for the dislocation, so that (51) and (52) imply the combined estimates (53) and (54).

The case p > 0

We now proceed inductively from the case p = 0.

Taylor expansion. We follow the construction that we formally motivated in § 2.3 and expand the solution ū by performing a Taylor expansion of the equilibrium equation for the forces for all ℓ sufficiently large, where g = 0 in the screw dislocation case and g has compact support for point defects.
Recall from §3 that E inherits the same regularity as the homogeneous site potential V and is therefore $C^K$. For such a variation of the energy we use the pointwise notation for the corresponding $\ell^2$ representative. A Taylor expansion around the homogeneous lattice u = 0 up to order $\tilde{K} \le K - 2$ gives the expanded equilibrium equation (55), 0 = Div g + δE(ū)(ℓ). We will determine the precise order $\tilde{K}$ for this expansion as part of our arguments later. We note that we already have explicit expressions for the first two terms in the sum: a homogeneous (Bravais) lattice is always in equilibrium, δE(0) = 0, and the second term is $\delta^2 E(0)[\bar{u}] = H[\bar{u}]$. For convenience, we will define $T^{(1)}_{p+1}$ to be the integral error term, also including the compactly supported Div g, so that we may rewrite (55) as (56).

Expansion of solution. Next, we make the inductive assumption that we have already obtained $u_i$ for 0 ≤ i ≤ p − 1, and that we can further decompose each $u_i$ as in (57) into parts which are respectively continuum- and lattice-valued functions, so that we may write ū as in (58), where $r_p$ is some remainder. These functions will be assumed inductively to satisfy certain properties which we now make precise:

• We will assume that the former terms, $u^C_i \in C^\infty(\mathbb{R}^d \setminus B_{R_0}(0))$, i ≤ p − 1, have been constructed to solve the sequence of linear elliptic PDEs (59) and to satisfy decay estimates. We note that the $S_i$ are forcing terms which we determine as part of our argument.

• We assume that the latter terms in (57) have been constructed to take the form (61) for some constant tensors $b^{(i,k)}$. In particular, $H[u^{MP}_i](\ell) = 0$ for |ℓ| large enough, and we assume $u^{MP}_0 = 0$.

In light of (61), whenever useful, we may also rewrite the multipole contributions in terms of smooth continuum kernels by applying Lemma 6.4. Indeed, by combining the terms of equal homogeneity, Lemma 6.4 implies that we may write the representation (62) for all j ∈ $\mathbb{N}_0$, and we have the additional error estimate for all j ∈ $\mathbb{N}_0$. Using the functions $u^{CMP}_i$, we therefore have a corresponding representation of ū. Using the inductive assumptions made above, we may substitute (58) into the left-hand side of (56) and expand. The goal is now to split the remainder $r_p$ into the next continuum contribution $u^C_p$ and a new intermediary term $s_p$, and to find the correct equation for $u^C_p$ so that the residual becomes even smaller in the far field. Then we can apply Theorem 2.4 to $s_p$ to split this term yet further into multipole terms up to order p and a new remainder $r_{p+1}$ with the desired improved decay.

Taylor expansion error. With the assumptions above, we turn back to the Taylor expansion (55) and establish the choice of order $\tilde{K}$. Our aim is to fully resolve all orders of decay up to and including $|\ell|_0^{-d-p}$. These orders of decay come from estimating a k-fold tensor product of Dū with the decay mentioned, and one further additional power arising from the discrete divergence. In particular, to ensure that the order of decay matches or exceeds that of $|\ell|_0^{-d-p}$, we require $\tilde{K} \ge \frac{p}{d-1} + 1$, or in the case of a point defect $\tilde{K} \ge \frac{p-1}{d} + 1$. Indeed, this is satisfied by the assumption on K if we set $\tilde{K} := K - J - 1$. We also note the explicit expression for $T^{(1)}_{p+1}$.

Proof. For ℓ large enough, −Div g(ℓ) = 0; hence we have the stated representation. In the case of a screw dislocation we can use the estimates (53) and (54) for $u_0$ and $r_1$ to see that $|D^j \bar{u}| \lesssim |\ell|_0^{-j}$ for 1 ≤ j ≤ J. By distributing the outer discrete derivatives over the tensor product and using this estimate, we deduce the claimed bound. This proves the result in this case, as $\tilde{K} = K - J - 1 \ge p + 1$ and d = 2.
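A minimal sketch of the expanded equilibrium equation just described, assuming the standard Taylor form of δE about u = 0 (sign and remainder conventions here are assumptions, not quoted):

```latex
% Taylor expanding \delta E and using \delta E(0) = 0,
% \delta^2 E(0)[\bar u] = H[\bar u]:
0 = \operatorname{Div} g(\ell) + \delta E(\bar u)(\ell)
  = \operatorname{Div} g(\ell) + H[\bar u](\ell)
    + \sum_{k=2}^{\tilde K} \frac{1}{k!}\,
      \delta^{k+1} E(0)\bigl[\bar u^{\otimes k}\bigr](\ell)
    + \text{(integral Taylor remainder)} ,
% so that, absorbing the integral remainder and the compactly supported
% \operatorname{Div} g into T^{(1)}_{p+1}, the equation takes the form (56):
H[\bar u](\ell)
  = -\sum_{k=2}^{\tilde K} \frac{1}{k!}\,
      \delta^{k+1} E(0)\bigl[\bar u^{\otimes k}\bigr](\ell)
    + T^{(1)}_{p+1}(\ell) .
```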
The term $T^{(2)}_{p+1}$ collects all the terms that contain a contribution from $s_p$ or $w^{MP}_p$. To estimate derivatives of $w^{MP}_p$, we can use (64). For $s_p$ we can combine (62) and (67) to obtain the corresponding decay for j = 1, . . . , J. This allows us to estimate the second error term.

Proof. We note that $T^{(2)}_{p+1}$ can be written as a sum of differences. For the k = 2 term, we use (68) and apply the discrete product rule to obtain the stated bound. We note that the shifts ρ appearing in the first estimate, which arise from the discrete product rule, remain below some finite radius R = R(J). Using the estimates for $w^{MP}_p$ and $s_p$ gives the desired overall estimate. For k ≥ 3 a similar argument provides the same estimate, as any additional factor just improves the estimate by at least $|\ell|_0^{-1} \log|\ell|_0 \lesssim 1$.

Continuum approximation. We want to construct a sequence of PDEs that resolves (69) to sufficient accuracy and thus describes the far-field behaviour of ū. To that end, we note that a finite difference of any smooth function v can be Taylor expanded, with a remainder evaluated at an intermediate point, for some θ ∈ [0, 1]; a sketch of this expansion is given after this passage. By iterating this process of Taylor expansion, a similar series approximation can be constructed for higher-order finite differences. To start, we may use this expansion to approximate the action of H on a smooth function; recall that H was defined by (3). The symmetry assumption (2) implies $\nabla^2 V(0)_{\rho\rho'} = \nabla^2 V(0)_{(-\rho)(-\rho')}$. Therefore all the odd orders cancel, and using (71) we get that there exist tensors $C_{2n} \in \mathbb{R}^{N^2 \times d^{2n}}$ for n ≥ 1 such that the expansion holds for any smooth function v, where $T^{(3,1)}$ collects the Taylor error terms. In coordinates, these tensors can be given explicitly, using the definition of the Cauchy-Born energy density W in (6).

Construction of $u^C_p$. We recall that so far we have Taylor expanded the site potential and substituted a truncated series $\bar{u}_p$ on the right-hand side of (55) to obtain (69). We note that the remaining series on the left- and right-hand sides of this equation involve only the action of discrete difference operators on smooth continuum functions, and so we may now perform a discrete-to-continuum approximation to handle these terms. On the left, we insert (73) from the discussion of $H^C$ as the leading order into the full expansion (72) to see that there exist tensors $C_n \in \mathbb{R}^{N^2 \times d^n}$ for n ≥ 3 such that the expansion holds for a smooth function v. We can perform a similar construction for the higher-order terms in the Taylor expansion on the right-hand side of (69), so that in general there exist tensors $C_j$ in the natural tensor space, which is isomorphic to $\mathbb{R}^{N^{k+1} \times d^n}$, for each j = (j₁, ..., jₖ) that satisfies 1 ≤ jₘ ≤ $M_k$ − 1 and $\sum_{m=1}^{k} j_m = n$ for some n ≥ k + 1. This allows us to approximate $\delta^{k+1} E(0)[v^{\otimes k}](\ell)$ accordingly. Each of the tensors $C_j$ is a sum of tensor products formed of components of $\nabla^{k+1} V(0)$ and an n-fold tensor product of vectors from $\mathcal{R}$. The final term $T^{(3,k)}(v) = T^{(3,k)}(\nabla v(\ell), \ldots, \nabla^{M_k} v(\ell), \nabla^{M_k+1} v)$ collects all the terms that include at least one Taylor error for v. Overall, after inserting the specific v, we can combine these errors, and for notational convenience in the following argument we define $\tilde{u}_i := u^C_i + u^{CMP}_i$ for i < p and $\tilde{u}_p := u^C_p$, which satisfy $|\nabla^j \tilde{u}_i| \le C_j |\ell|_0^{2-d-j-i} \log^i |\ell|_0$ for all j ∈ $\mathbb{N}_0$, while overall $\sum_{i=0}^{p} \tilde{u}_i = \bar{u}_p$. Let us consider multi-indices i = (i₁, . . . , iₖ) for choosing multiple $u_i$ in a nonlinear term and j = (j₁, . . . , jₖ) for the number of derivatives on each term. We write $|i|_1 = \sum_m i_m$ for the sum of the components.
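For concreteness, here is the finite-difference Taylor expansion underlying the discrete-to-continuum step above; this is a standard expansion written out under our own conventions (the truncation order M and the remainder form are assumptions consistent with the θ ∈ [0, 1] quoted in the text):

```latex
% Sketch of the expansion alluded to around (71): for smooth v and a
% difference direction \rho,
D_\rho v(\ell) = v(\ell + \rho) - v(\ell)
  = \sum_{n=1}^{M} \frac{1}{n!}\, \nabla^n v(\ell)\bigl[\rho^{\otimes n}\bigr]
    + \frac{1}{(M+1)!}\,
      \nabla^{M+1} v(\ell + \theta\rho)\bigl[\rho^{\otimes (M+1)}\bigr]
% for some \theta \in [0,1]. Iterating over the entries of a tuple
% \rho = (\rho_1, \dots, \rho_j) yields the analogous expansions for the
% higher-order differences appearing on both sides of (69).
```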
Note that, as long as we choose the $M_k$ large enough, the definition of the $S_q$, q ≤ p, in (78) is independent of their precise choice. Any additional terms from choosing $M_k$ even larger only appear in $T^{(4)}_{p+1}$.

Lemma 7.3. The following estimates hold for all j ∈ $\mathbb{N}_0$.

In summary, we may use the definition of the $S_q$ and the remainder terms $T^{(i)}_{p+1}$ to rewrite the expanded equation. In view of the definition of $u^C_i$ as a solution to (59), it follows that this equation reduces to the remainder equation (82), which we study in the following section.

Series remainder estimate. Note that (82) is not a linear equation for $s_p$, as $s_p$ also appears on the right-hand side in $T^{(2)}_{p+1}$. In the leading order estimates in [EOS16] this was resolved through techniques coming from regularity theory, which we rely on for p = 0. For the higher orders, i.e., p > 0, we can use the (suboptimal) estimates that are available from the previous order. That turns out to be sufficient, as the appearance of $s_p$ in $T^{(2)}_{p+1}$ comes with additional decay.

Conclusion. To conclude the induction step, we still have to look at the possible addition of lower order terms from (83), as they change the multipole terms from one iteration to the next. Instead of discussing under what conditions these terms vanish, we allow them to be non-zero and consider the more detailed consequences. Indeed, this is fine as long as $u^{MP}_0 = 0$ remains true, as this is part of the results, and as long as the $u^{CMP}_i$, i = 1, . . . , p − 1, remain unchanged, which is important since they are part of the continuum equations (59). The first claim just follows from the fact that $I_0[v] = 0$ due to the divergence form of $T_{p+1}$ combined with sufficient decay. For the second part, let us use the continuum expansion of the multipole terms to write the difference explicitly. We know that $v^{CMP}_i$ is (2 − d − i)-homogeneous. Combining the decay estimate (70) for $s_p$ with the decay estimates for $r_{p+1}$ and $\tilde{w}^{MP}_{p+1}$, we obtain a bound for |ℓ| large enough. This is only compatible with the homogeneity if $v^{CMP}_i = 0$ for i = 1, . . . , p − 1. That ends the induction step and thus the proof of Theorem 3.1 and Theorem 3.5.

As a last step we see that for a point defect, for any d, we have $u^C_0 = 0$. We then also find $u^C_1 = \cdots = u^C_d = 0$, as $S_i = 0$ for 1 ≤ i ≤ d. The first non-trivial term arises when $u^{CMP}_1$ appears in the nonlinearity, namely $S_{d+1} = \frac{1}{2} \operatorname{div}\bigl( \nabla^3 W(0) [\nabla u^{CMP}_1]^2 \bigr)$.
19,233.2
2021-08-10T00:00:00.000
[ "Physics", "Materials Science" ]
The Effects of Internet Use and Internet Efficacy on Offline and Online Engagement

While existing research has explored the relationship between Internet use and civic engagement, this study is among the first to examine the effects of general Internet use, social network site use, and Internet efficacy on online and offline civic participation using the 2010 Pew Internet and American Life Project 'Social Side of the Internet' survey (N = 2,303). Results show that general Internet use and social network site use enhance web and wireless participation. However, neither increases offline participation. Individual Internet efficacy enhances both online and offline participation, but group Internet efficacy decreases offline participation. Theoretical and practical implications of the findings of this study for engagement are discussed.

Introduction

Since Putnam's (1995a; 2000) provocative proposition about the steady decline of social capital and civic engagement, scholars have engaged in lively debate about whether we really are experiencing a civic decline (Schudson, 1999). This debate has also touched on whether mass media contribute to this decline (Putnam, 1995a; 2000) or whether they stem it (Norris, 1996; Shah, McLeod, & Yoon, 2001; Shah, Schmierbach, Hawkins, Espino, & Donavan, 2002). It has also been suggested that there is a need for research focusing on the types of mediated interpersonal communication citizens are engaged in to better understand their varying levels of both information exposure and participatory behavior (McLeod, Scheufele, & Moy, 1999; Scheufele & Nisbet, 2002).

Research investigating the effects of Internet use on aspects of participation and engagement has provided mixed results. Two perspectives have emerged, suggesting either positive or negative relationships. Effects associated with general Internet use include enhanced network heterogeneity (e.g., Brundridge & Rice, 2009; Hampton, Sessions, & Her, 2011; Moy & Hussain, 2012) and engagement (e.g., Jennings & Zeitner, 2003; Kang & Gearhart, 2010; Nam, 2012). However, these links have been debated and findings questioned because enhanced political engagement may be limited to those who already participate in politics.

Investigations into other aspects of engagement have found mixed support, as different uses have been linked to different variations of participation. However, all such investigations have demonstrated that reliance on SNSs does have the ability to impact aspects of engagement.

The effects of Internet self-efficacy on civic and political forms of participation have received limited attention. Regardless, there is reason to speculate that this form of efficacy could have important implications for online participation. Although the link to online participation may appear obvious, the link to offline participation cannot be readily assumed.

In the time since, the role of the mass media has been at the center of the debate over the state of civic engagement, and Putnam's work has been criticized on both theoretical and empirical grounds. First, some scholars maintained that Putnam's measurement of civic engagement, which was an additive score of all voluntary associations an individual belongs to, did not capture the varied ways in which people participate in associations (e.g., Schudson, 1999).
Second, Putnam's gross measure of television use as total viewing time has been criticized for neglecting to differentiate among types of media and content consumed, which may produce differential effects (McLeod, Kosicki, & McLeod, 2002). Finally, Putnam was also criticized for emphasizing the negative effects of media use while ignoring decades of mass communication research documenting the positive impact of media use on citizen participation (e.g., McLeod et al., 1999).

Internet Use & Civic Participation

When the Internet came of age in the late 1990s, scholars began to investigate the potential effects of Internet use on civic engagement. Again, research evidence was mixed, as some found Internet use to stimulate civic participation (e.g., Shah et al., 2002) while others concluded that Internet use did not translate into participation, especially in the political arena (e.g., Davis, Elin, & Reeher, 2002). In fact, Internet use has even been found to foster passivism and inactivity (Nie & Erbring, 2000).

Similar to research on television, investigation into the relationship between Internet use and participation has provided mixed results. Again, there tend to be two perspectives on the influence of the Internet, as findings have suggested both positive and negative relationships.

One perspective on Internet use speculates that general use has the ability to enhance network diversity, as it offers a place to engage in discussion with a wide array of people (Hampton et al., 2011). Network heterogeneity is generally seen as a heartening effect of Internet use, and diverse network development has been suggested to contribute to increased political participation (Kwak, Williams, Wang, & Lee, 2005). Similarly, heterogeneous networks may lead to increased exposure to alternative views and disagreements (Brundridge & Rice, 2009). In this context, citizens can essentially "transcend geographic boundaries and redefine their sense of community," allowing for engagement in discussion (Moy & Hussain, 2012, p. 230).

Positive links between Internet use and engagement have also been demonstrated. For example, Jennings and Zeitner (2003) found a positive association between Internet use and political participation. As expected, similar results have been found between Internet use and online political activity (e.g., Nam, 2012). Kang and Gearhart (2010) found a positive link between civic and political engagement and the use of city websites for specific purposes. However, among these positive results a negative relationship between overall Internet use and political behaviors has also been noted, indicating the link between Internet use and enhanced political participation tends to simply reinforce the offline trends of those already engaged. Similarly, Nam (2012) found Internet use did not effectively attract new participants to be politically active but did enhance the political participation of those who already participate in politics.

Other negative consequences of Internet use concern aspects of cynicism, apathy, ignorance, and disengagement. There is also concern that the personal control afforded by the Internet "creates the possibility that people will exercise an increasing tendency for selectivity" (Brundridge & Rice, 2009, p. 145). Furthermore, Internet use is associated with widening the knowledge gap. That is, Internet use does not enhance political participation for all groups. Nam (2012) found disparities in political participation, as the more educated and affluent were more likely to participate in online and offline political activities.
Although research addressing the effects of Internet use on participation appears to be well documented, these linkages need to be more closely examined to better understand individual and contextual factors. DiMaggio, Hargittai, Neuman, and Robinson (2001) assert that "the Internet has no intrinsic effect on social interaction and civic participation" and "use tends to intensify already existing inclinations toward sociability or communication involvement, rather than creating them ab initio" (p. 319). That is, although there are positive effects of Internet use, more care should be taken to better understand the circumstantial nature of such effects.

Social Network Sites & Civic Participation

More recently, research has begun to focus on other forms of media, including SNS use. The advent of social media, and especially SNSs, in the early 21st century has generated a great deal of enthusiasm among academics about the potential of social media in fostering civic engagement. Much research seems to point to uniformly positive effects of social media in generating social capital and civic engagement (e.g., Ellison et al., 2010; Valenzuela et al., 2009), with the rare exception of Baumgartner and Morris (2010), who found that social network sites did not live up to the high expectations of informing youth and increasing their political engagement.

Unlike general Internet use, SNSs provide users with a centralized discussion network of connected others. Although empirical investigations directly examining the effects of SNSs on participation are limited, a positive relationship between SNS use and network heterogeneity has been noted (e.g., Kim, 2011). As previously discussed, diverse networks are suggested to contribute to increased political participation (Kwak et al., 2005).

Existing research has begun to differentiate among the effects of different types of SNS use. For example, research on the use of Facebook has demonstrated a positive link to social capital (e.g., Ellison et al., 2010; Ellison et al., 2007; Valenzuela et al., 2009). Similarly, Valenzuela et al. (2009) found intensity of Facebook use to positively predict civic participation, while Facebook group use predicted both political and civic participation. Their study further assessed the effects of belonging to certain types of groups within the online network and found involvement in political and student groups to positively predict political participation, while involvement in Facebook groups for on-campus and student organizations was linked to enhanced civic participation. These findings indicate that specialized use of the network may differently impact participation.

Beyond specialized use of SNSs, there is also the chance that the use of SNSs may reinforce people's propensity to be engaged. That is, the use of SNSs may simply be another participation avenue for those who are already engaged offline. For example, Vitak et al. (2010) found political activity on Facebook to be a positive predictor of general political participation. Although the same study found intensity of SNS use was positively related to political activity on Facebook, a negative relationship was found between intensity of Facebook use and general political participation. Importantly, the aforementioned studies were conducted with college students, a group which represents the most prominent SNS users (Rainie, Lenhart, & Smith, 2012).
Research using samples representing a more general population has also suggested that features of SNSs may be associated with increased engagement, but the results of these initial research attempts have also been inconsistent. For example, Zhang, Johnson, Seltzer, and Bichard (2010) found reliance on SNSs to be a significant predictor of increased civic participation, but not political participation. Johnson, Zhang, Bichard, and Seltzer (2010), who differentiated between SNSs and YouTube, found reliance on both SNSs and the video-sharing site to be significantly related to both online and offline political participation among politically interested Internet users. Although investigations of SNS use and engagement have not consistently examined the same uses of SNSs with the same populations, there is substantial evidence that SNS use does have the ability to impact aspects of democratic participation.

Existing findings on the effects of SNS use on participation should be carefully scrutinized due to the dynamic nature of the digital environment. For example, Groshek and Dimitrova (2011) found no significant impact of SNS use on voting intention, voter learning, or campaign interest in the 2008 U.S. presidential election. However, in the time since these data were collected, the use of SNSs has grown exponentially. As such, the acceptance and social significance of engaging in these online networks has also changed. More recently, the use of SNSs during the 2010 Swedish election campaign was found to be positively linked to offline political participation (Dimitrova, Shehata, Strömbäck, & Nord, in press). In fact, SNS use enhanced offline political participation even when controlling for the influence of political interest, knowledge, and a host of other variables. Although social media use was a strong predictor of participation, so were political interest and past offline political participation, indicating that traditional predictors of participation may simply be reinforced in the SNS environment. Putnam (1995a) questioned whether technology drives a wedge between individual and collective interests. As we can see, there have been a multitude of conflicting findings across the realm of media. Thus, a more appropriate question may be which uses of technology drive a wedge between individual and collective interests.

Internet Efficacy & Civic Participation

A discussion of efficacy is necessary when considering factors that may impact an individual's level of engagement. Self-efficacy is defined as one's belief in one's own "capabilities to organize and execute the courses of action required to produce given attainments" (Bandura, 1997, p. 3). That is, efficacy reflects the level of belief one has in one's ability to achieve something. Self-efficacy is explicitly concerned with what individuals believe they can do with the skills they possess. Efficacy beliefs influence what courses of action people choose to pursue, their goals and commitments to them, and even the amount of effort put forth (Bandura, 2009).

The overarching concept of efficacy can be further divided into specialized types, such as Internet efficacy. Internet efficacy can be defined as one's beliefs about one's own ability to use the Internet, with a focus on what one believes one can accomplish (Eastin & LaRose, 2000).
In line with earlier distinctions between low and high self-efficacy (e.g., Bandura, 1997), individuals with low Internet efficacy are those who have low levels of confidence, satisfaction, and/or comfort in their ability to use the Internet (Eastin & LaRose, 2000). Individuals with high Internet self-efficacy represent the conceptual opposite, as they possess skills that lead to enhanced levels of confidence, satisfaction, and/or comfort. As such, those with low levels of Internet self-efficacy should be less likely to engage with the Internet in the future, while those with high levels of Internet self-efficacy should be more likely to use the Internet.

The link between one's personal efficacy and forms of participation has been well documented. Political efficacy, which is closely related to personal efficacy, can be defined as one's sense that one's participation can actually make a difference in politics (Delli Carpini, 2004). Although research on the link between perceptions of Internet self-efficacy and participation is limited, existing research indicates that this form of efficacy may enhance one's participation. For example, Nam (2012) explored the effects of individual political efficacy of the Internet and found that this individual characteristic was a significant predictor of political participation in both online and offline environments. These results indicate that an individual's feelings about their ability to use the Internet for political benefit and empowerment may be capable of translating into both online and offline forms of participation.

Research Questions

Based on the literature review concerning the relationship between social capital and civic participation, we can see that research has produced conflicting findings. Therefore, the following research questions were posed to assess the effects of social capital variables on offline civic participation after controlling for the influences of demographic variables:

RQ1a: What are the effects of community satisfaction on offline participation?
RQ1b: What are the effects of social interpersonal trust on offline participation?
The following research questions were posed to assess the effects of social capital variables on online civic participation after controlling for the influences of demographic variables:

Private engagement was an additive measure of 17 items. Respondents were asked if they were "currently active in any of these types of groups or organizations, or not," such as sports or recreation leagues (25.1%), hobby groups or clubs (19.5%), professional or trade associations (23.3%), parent groups or organizations (13.4%), youth groups (10.1%), veterans groups or organizations (8.6%), consumer groups (26.8%), farm organizations (4.9%), travel clubs (6.2%), sports fantasy leagues (7.0%), gaming communities (5.0%), national or local organizations for older adults (20.5%), political parties or organizations (17.6%), labor unions (8.3%), fan groups for a particular TV show, movie, celebrity, or musical performer (5.5%), fan groups for a particular sports team or athlete (9.7%), and fan groups for a particular brand, company or product (3.4%). The scale was dummy coded (0 - not active, 1 - active). Respondents were asked about their different ways of participating in those organizations, such as taking a leadership role, attending meetings or events, contributing money, or volunteering one's time to a group one was active in. The intensity of their active participation in those organizations was also dummy coded (0 - no, 1 - yes). An individual's intensity of participation in each organization was the sum of one's participation in each organization combined with their different ways of participation. All 17 items were combined to form the private engagement index.

Social engagement was an additive measure of 10 items. Respondents were asked if they were "currently active in any of these types of groups or organizations, or not," such as community groups or neighborhood associations (22.2%), church groups or other religious or spiritual organizations (45.3%), performance or arts groups (12.2%), social or fraternal clubs, sororities or fraternities (9.7%), literacy, discussion or study groups (12.5%), charitable or volunteer organizations (25.4%), ethnic or cultural groups (5.5%), support groups for people with a particular illness or personal situation (19.1%), alumni associations (17.8%), and environmental groups (8.8%). Just as was done for private engagement, the scale was dummy coded (0 - not active, 1 - active), and the intensity of their active participation in those organizations was also dummy coded (0 - no, 1 - yes). Again, an individual's participation in each organization was the sum of one's participation in each organization combined with their different ways of participation. All 10 items were summed to form the social engagement index.

Online civic participation was divided into two groups representing distinct types of online activity. Web and wireless participation was an additive measure of four items. Respondents were asked whether in the past 30 days they did the following for various groups they were active in: sent or received email with members of a social, civic, professional, religious or other group (57%), visited the website of a group (64.5%), read the electronic newsletter or email updates of a group (56.5%), and sent and received text messages with members of a social, civic, professional, religious or other group (43.0%). These four items were dummy coded (0 - no, 1 - yes) before being summed to form the web and wireless participation index.
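An illustrative sketch of this index construction in code; the column names below are hypothetical placeholders, not the Pew survey's actual variable names:

```python
import pandas as pd

# Each yes/no item is dummy coded (0 = no, 1 = yes) and the items are
# summed into an additive index, as described for the web and wireless
# participation measure above.
df = pd.DataFrame({
    "email_group":     ["yes", "no", "yes"],
    "visited_website": ["yes", "yes", "no"],
    "read_newsletter": ["no", "yes", "no"],
    "text_group":      ["yes", "no", "no"],
})
items = ["email_group", "visited_website", "read_newsletter", "text_group"]
df[items] = (df[items] == "yes").astype(int)      # dummy coding
df["web_wireless_index"] = df[items].sum(axis=1)  # additive index, range 0-4
print(df)
```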
SNS participation was an additive measure of five items. On the same 2-point scale, respondents were asked whether in the past 30 days they did the following for various groups they were active in: contributed to an online discussion or message board for an organization (22.2%), posted news about a group on a social networking site like Facebook (27.9%), read updates or messages on a social networking site like Facebook about a group (63.9%), posted news on Twitter about a group (23.8%), and read updates and posts on Twitter about a group (65.5%). These five items were dummy coded (0 - no, 1 - yes) before being summed to form the SNS participation index.

Independent Variables

Independent variables included social capital, Internet use, Internet efficacy, and demographic variables that served as control variables.

Social Capital. Two items comprised the social capital measures. Community satisfaction represented one's overall satisfaction with one's community. It was measured on a 4-point scale (1 - excellent, 2 - good, 3 - only fair, 4 - poor) that was reverse coded. In general, respondents rated their community as good (M = 3.23, SD = .78). Interpersonal trust was a single-item measure of whether the respondent agreed that "most people can be trusted" (50.6%) or "you can't be too careful" (49.4%). This item was dummy coded (0 - you can't be too careful, 1 - most people can be trusted).

Internet Use. Two different measures of Internet use were used in this study. General Internet use was an additive measure of two items. Respondents were asked how often they used the Internet and email at home or at work on a scale of 1 (several times a day) to 7 (never). The scale was reverse coded before being summed to form the general Internet use index. On average, respondents used the Internet and email from home about once a day (M = 5.65, SD = 1.69) and used the Internet and email at work one to two days a week (M = 3.86, SD = 2.82). Respondents were also asked whether they used MySpace, Facebook, or LinkedIn (57.7%) and Twitter (10.6%) on a 2-point scale that was dummy coded (0 - no, 1 - yes) before being summed to form the SNS use index.

Internet Efficacy. Two types of Internet efficacy were used in this study. Group Internet efficacy was an additive measure of nine items. Respondents were asked, for those civic groups they were active in and thinking about how those groups used the Internet, whether the Internet had a major, minor, or no impact at all on the ability of these groups to "recruit new members" (M = 2.39, SD = .71), "impact local communities" (M = 2.38, SD = .70), "impact society at large" (M = 2.53, SD = .67), "communicate with members" (M = 2.64, SD = .64), "find people to take leadership roles" (M = 2.18, SD = .70), "organize activities" (M = 2.50, SD = .69), "raise money" (M = 2.41, SD = .71), "draw attention to an issue" (M = 2.56, SD = .67), and "connect with other groups" (M = 2.53, SD = .69). This variable was measured on a 3-point scale (1 - major impact, 2 - minor impact, 3 - no impact) and reverse coded so that a higher number indicated more impact, before the nine items were summed to form the group Internet efficacy index.
Similarly, individual Internet efficacy was an additive measure of seven items. Respondents were asked whether they thought the Internet had had a major, minor, or no impact on their ability to "find social, civic, professional, religious or spiritual groups that match your interests" (M = 2.04, SD = .82), "invite friends and acquaintances to join social, civic, professional, religious or spiritual groups you are active in" (M = 2.00, SD = .79), "keep up with news and information from the social, civic, professional, religious or spiritual groups you are active in" (M = 2.33, SD = .77), "organize activities for the social, civic, professional, religious or spiritual groups you are active in" (M = 2.13, SD = .81), "contribute money to social, civic, professional, religious or spiritual groups you are active in" (M = 1.78, SD = .77), "volunteer your time to social, civic, professional, religious or spiritual groups you are active in" (M = 1.84, SD = .76), and "create your own social, civic, professional, religious or spiritual groups" (M = 1.78, SD = .81). This variable was measured on the same 3-point scale as group Internet efficacy. Again, it was reverse coded before the seven items were summed to form the individual Internet efficacy index.

Data Analysis Strategies

Hierarchical regression analyses were performed to answer the research questions of this study and determine which variables were significant predictors of both online civic participation (web and wireless civic participation and SNS civic participation) and offline civic participation (social engagement and private engagement). Demographic variables were entered as the first block, followed by social capital variables, then general Internet use and SNS use, with group Internet efficacy and individual Internet efficacy entered as the final block.

Results

Before addressing the aforementioned research questions and hypotheses, the influence of demographic variables on offline civic participation (i.e., private and social engagement) and online civic participation (i.e., web/wireless and SNS participation) was addressed.

Concerning the influence of demographic variables on online civic participation, older people were more likely to engage in web and wireless participation (β = .08, p < .05) but not SNS participation. More educated individuals were more likely to engage in web and wireless participation (β = .05, p < .05) but less likely to engage in SNS participation (β = -.05, p < .05). Wealthy people were more likely to engage in SNS participation (β = .08, p < .01) but not web and wireless participation. Caucasians were more likely to engage in web and wireless participation (β = .05, p < .05) but not SNS participation. Neither gender nor ideology had any influence on either web and wireless participation or SNS participation.

RQ1a and RQ1b examined the influence of social capital variables on offline participation. RQ1a addressed the effects of community satisfaction on offline participation. According to Table 1, satisfaction with one's community had no significant influence on private or social engagement. RQ1b asked about the effects of interpersonal trust on offline participation. After controlling for the influences of demographic variables, the more trust people had in generalized others, the more they participated in those organizations that work closer to people's private interests (private engagement) (β = .06, p < .01) and the more they were involved in those organizations that aim at serving the community and society at large (social engagement) (β = .06, p < .01).
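As a concrete sketch of the hierarchical block strategy described under Data Analysis Strategies above, the following code enters the blocks cumulatively and tracks the change in R-squared; all variable names are hypothetical placeholders rather than the Pew dataset's actual labels:

```python
import statsmodels.formula.api as smf

blocks = [
    "age + education + income + gender + race + ideology",    # demographics
    "community_satisfaction + interpersonal_trust",            # social capital
    "general_internet_use + sns_use",                          # Internet use
    "group_internet_efficacy + individual_internet_efficacy",  # efficacy
]

def hierarchical(df, dv, blocks):
    """Fit OLS models with cumulatively entered predictor blocks."""
    rhs, prev_r2, fit = [], 0.0, None
    for block in blocks:
        rhs.append(block)
        fit = smf.ols(f"{dv} ~ {' + '.join(rhs)}", data=df).fit()
        print(f"added {block!r}: R2={fit.rsquared:.3f}, "
              f"delta R2={fit.rsquared - prev_r2:.3f}")
        prev_r2 = fit.rsquared
    return fit

# usage (hypothetical dataframe): hierarchical(df, "web_wireless_index", blocks)
```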
RQ2a investigated the effects of community satisfaction on online participation, while RQ2b addressed the effects of interpersonal trust on online participation. As seen in Table 2, after controlling for demographic influence, neither community satisfaction nor interpersonal trust had any significant influence on web and wireless participation or SNS participation.

The third research question explored the impact of Internet use variables on offline participation. After controlling for the influences of demographic and social capital variables, RQ3a examined the effects of general Internet use on offline civic participation. As seen in Table 1, general Internet use had no significant influence on either private engagement or social engagement. RQ3b addressed the impact of SNS use on offline civic participation. In the same way, SNS use was not a significant predictor of either private engagement or social engagement.

The fourth research question examined the influence of Internet use variables on online participation. After controlling for the influences of demographic and social capital variables, RQ4a examined the effects of general Internet use on online participation. According to Table 2, general Internet use had a significant positive effect on web and wireless participation (β = .10, p < .001). However, general Internet use did not exert significant influence on SNS participation. RQ4b examined the effects of SNS use on online participation. Results indicate that SNS use had a significant positive effect on web and wireless participation (β = .05, p < .05). However, SNS use did not exert significant influence on SNS participation.

H1 predicted that individual Internet efficacy would be positively related to offline civic participation. H1 was confirmed. After controlling for the influences of demographic, social capital, and Internet use variables, results showed individual Internet efficacy had a positive effect on both private engagement (β = .10, p < .001) and social engagement (β = .10, p < .001). RQ5 investigated the impact of group Internet efficacy on offline participation. As seen in Table 1, the more people thought the Internet had a major impact on the ability of their groups to do their jobs, the less likely they were to engage in private-oriented participation (β = -.07, p < .01) and the less likely they were to engage in social-oriented participation (β = -.08, p < .001).

General Internet Use, SNS Use, and Online & Offline Participation

General Internet use is found to increase web and wireless participation but not SNS participation. General Internet use is not a significant predictor of offline participation either, contradicting the results of Jennings and Zeitner (2003), Shah et al. (2002), Davis et al. (2002), and Nam (2012). SNS use is found to enhance web and wireless civic participation but not SNS civic participation. SNS use does not exert any influence on offline private engagement or social engagement, contradicting the findings of Ellison et al. (2010) and Valenzuela et al. (2010), but in keeping with the conclusion of Baumgartner and Morris (2010) that there is much hype surrounding the role of SNSs in stimulating democratic participation. In sum, there are limits to what SNSs can do in politics and civic engagement. The notion of SNSs and other forms of social media as a panacea for democratic renewal is grossly exaggerated.
Internet Efficacy and Online & Offline Participation

This study is one of very few that examine the role of Internet efficacy in online and offline civic participation. Internet efficacy, people's perceptions of the Internet's ability to facilitate participation and problem solving, has been found to enhance both online and offline political participation (Nam, 2012). This study is arguably one of the first to look at both individual Internet efficacy and group Internet efficacy (perceptions of one's civic groups' ability to use the Internet for various purposes) and their influence on online and offline civic participation.

Findings of this study suggest that individual Internet efficacy has a positive impact on both offline and online participation, in keeping with the results of Nam (2012). However, group Internet efficacy has a negative influence on both offline private and social engagement and has no significant influence on either web and wireless participation or SNS participation. This seems to suggest that people's sense of group Internet efficacy may give them a false sense of being active in civic participation when that actually is not the case. This also indicates that there are qualitative differences between online civic participation and offline civic participation and that one mode of participation is not necessarily transferable to another.

Antecedents of Online and Offline Civic Participation

To some extent, significant predictors differ between online and offline civic participation, indicating a meaningful differentiation between the online and offline modes of participation. As expected, interpersonal trust enhances both offline social engagement and private engagement. However, interpersonal trust appears to make no difference in web and wireless civic participation or SNS civic participation.

Online and offline participation also have different demographic antecedents. Liberals are less likely to be involved in offline participation, but ideology has no significant influence on online participation. Caucasians are less likely to engage in offline participation (private and social engagement); however, Caucasians are more involved in web and wireless participation. Income is a positive predictor of both private and social engagement and of SNS participation. As expected, education enhances both private and social engagement and increases web and wireless participation, but decreases SNS participation. Females are more active in social engagement but not in either form of online participation. In short, gender, education, race, and ideology make a difference in predicting online and offline participation.
Conclusion

Our overall conclusion is that general Internet use and SNS use have no impact on offline civic participation and that the influence of SNS use is limited to web and wireless participation. Future research also needs to explore different and/or unique motives for using different SNSs from a uses and gratifications perspective (Kaye, 2010) and examine how different or unique motives influence civic participation. The small amount of variance accounted for in offline and online participation points to the weaknesses of this study. Future research can explore whether people's reasons for being active in social, civic, and spiritual groups, and the reasons people leave those groups, can moderate the effects of Internet use and SNS use on civic and political participation. This dataset contains many innovative ways to assess different forms of participation in various social, religious, cultural and other organizations. Future studies may consider weighting types of involvement differently to explore the nuances and the qualitatively different nature of various forms of participation. And finally, future studies need to further delineate the relationships between online and offline participation and examine both the strengths and weaknesses of online participation, because online participation, especially social media participation, is more about sharing information and socializing for entertainment purposes, and may not live up to its hyperbole.

RQ2a: What are the effects of community satisfaction on online participation? RQ2b: What are the effects of interpersonal trust on online participation? Empirical investigations of the relationship between media use and civic participation are well documented. However, research addressing the effects of Internet use on participation has provided mixed results. Further, distinguishing between varying forms of Internet use has received limited investigation. The following research questions assess the effects of both general Internet use and SNS use on offline civic participation after controlling for the influence of demographic and social capital variables: RQ3a: What are the effects of general Internet use on offline participation? RQ3b: What are the effects of SNS use on offline participation? To investigate the impact of different forms of Internet use on online participation, the following questions examine the effects of general Internet use and SNS use on online civic participation after controlling for demographic and social capital variables: RQ4a: What are the effects of general Internet use on online participation? RQ4b: What are the effects of SNS use on online participation? Research on the role of Internet efficacy in participation is limited but has shown promising results. However, there is a need to further assess how one's feelings about one's ability to use the Internet for civic purposes impact both online and offline participation. Therefore, we pose the following hypotheses concerning the impact of individual Internet efficacy on participation. Additionally, two research questions investigate the effects of group Internet efficacy on civic participation after controlling for the influence of demographic, social capital, and Internet use variables: H1: Individual Internet efficacy will be positively related to offline participation. RQ5: What are the effects of group Internet efficacy on offline participation? H2: Individual Internet efficacy will be positively related to online participation. RQ6: What are the effects of group Internet efficacy on online participation?

METHOD

Data

Data for this study came from the
2010 'Social Side of the Internet' survey from the Pew Internet & American Life Project (Pew, 2010). The theme of the data centers on the role of social network sites in civic group formation and participation (Rainie, Purcell, & Smith, 2011). The fieldwork for this nationally representative telephone survey, using random-digit dialing techniques, was conducted from November 23, 2010 to December 21, 2010 by Princeton Survey Research Associates International. The interviews were conducted with adults aged 18 and above on both landlines (n = 1,555) and cell phones (n = 748), for a total of 2,303 respondents. The response rate is 11% for the landline sample and 15.8% for the cellular sample.

Measures

Dependent Variables

Dependent variables included offline civic participation (social engagement and private engagement) and online civic participation (web and wireless civic participation and SNS civic participation). Based on an adaptation of the work of Mascherini, Saltelli, and Vidoni (2007), offline civic participation was divided into private engagement and social engagement. Private engagement refers to individuals' participation in organizations that work closer to their private interests. Social engagement refers to individuals' participation in organizations that aim at serving the community at large.

H2 predicted that individual Internet efficacy would be positively related to online civic participation. H2 was confirmed. After controlling for the influence of demographic, social capital, and Internet use variables, results showed that individual Internet efficacy had a positive impact on both web and wireless participation (β = .26, p < .001) and SNS participation (β = .06, p < .05). RQ6 explored the influence of group Internet efficacy on online participation. According to Table 2, group Internet efficacy had no significant influence on either web and wireless participation or SNS participation.

TABLES 1-2 ABOUT HERE

Discussion

The purposes of this research were to examine (1) the effects of both general Internet use and SNS use, (2) the effects of individual and group Internet efficacy on online and offline civic participation, and (3) the differential predictors and antecedents of online and offline civic participation. The findings of this study shed light on the differential roles of general Internet use and SNS use and provide rare insight into the effects of individual and group Internet efficacy on online and offline civic participation, as well as the differences between online and offline civic participation, including their different predictors and antecedents.
This study makes a rare contribution as the first to uncover the positive influence of individual Internet efficacy on online and offline participation and the negative impact of group Internet efficacy on people's offline engagement. The limitations of this study lie in the inherent disadvantages of doing secondary analysis of an existing dataset, though the Pew Research Center's Internet & American Life Project consistently provides quality survey data for scholarly use. One major weakness of secondary analysis is that users are constrained in the types of research questions that can be examined and are limited to the existing variables, because there is no way to go back for additional information (Wimmer & Dominick, 2006). There are many good measures of online and offline civic participation in the December 2010 "Social Side of the Internet" survey dataset. However, the general Internet use measure asks how often respondents use the Internet or email at home and at work, and the SNS use measures are simple "yes" or "no" questions about respondents' Facebook, Twitter, or LinkedIn use. Neither measurement captures what content respondents use. Future research should engage in a thorough explication of SNS use, disaggregate it into context-specific uses, and assess the potential differential effects of SNS use, general Internet use, and specific types of Internet and SNS content on engagement (e.g., Dimitrova et al., in press). Because the Pew Research Center's Internet & American Life Project examines strictly the Internet's influence on social and civic life, future research should not examine Internet use in isolation but include traditional media use, general Internet use, and SNS use to get a more robust picture of the influence of Internet use on both civic and political participation.

Table 1
Hierarchical Regression Analyses Predicting Offline Civic Participation: Private Engagement and Social Engagement
Note: The beta weights are final standardized regression coefficients.

Table 2
Hierarchical Regression Analyses Predicting Online Civic Participation: Web and Wireless Participation and Social Network Participation
Note: The beta weights are final standardized regression coefficients.
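The blockwise entry procedure described in the Data Analysis Strategies section can be reproduced with standard regression tooling. Below is a minimal sketch in Python using statsmodels; the column names are hypothetical stand-ins for the Pew dataset's variables, and predictors would need to be z-scored beforehand for the coefficients to be read as standardized betas.

# Minimal sketch of hierarchical (blockwise) regression, assuming a pandas
# DataFrame `df` with hypothetical column names for each predictor block.
import pandas as pd
import statsmodels.api as sm

BLOCKS = [
    ["age", "education", "income", "gender", "race", "ideology"],  # demographics
    ["community_satisfaction", "interpersonal_trust"],             # social capital
    ["general_internet_use", "sns_use"],                           # Internet use
    ["group_internet_efficacy", "individual_internet_efficacy"],   # efficacy
]

def hierarchical_regression(df, outcome):
    predictors, prev_r2, model = [], 0.0, None
    for i, block in enumerate(BLOCKS, start=1):
        predictors += block
        X = sm.add_constant(df[predictors])
        model = sm.OLS(df[outcome], X, missing="drop").fit()
        print(f"Block {i}: R^2 = {model.rsquared:.3f} "
              f"(change = {model.rsquared - prev_r2:.3f})")
        prev_r2 = model.rsquared
    return model  # final model: inspect model.params and model.pvalues

# Example call for one of the four outcomes:
# final = hierarchical_regression(df, "web_wireless_participation")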
8,111.6
2015-10-15T00:00:00.000
[ "Sociology", "Computer Science" ]
Many-objective population visualisation with geons

This paper explores the use of geometrical ions (called geons) to represent solutions in the approximated Pareto front generated by multi- and many-objective optimisers. The construction of geon based objects (GBOs) for solutions to a 3- and 5-objective problem is outlined, and the visualisation is embedded in a tool that has been tested with expert users. The findings suggest that our approach is promising, with all users successfully engaging with the given tasks and 4 out of 6 managing to complete some of the tasks they were assigned. Results indicate that the use of geometry, rather than colour as is often used to convey properties of Pareto front approximations, is a useful way of embedding multi-objective data.

INTRODUCTION

As many-objective evolutionary algorithms (MaOEAs) have matured, it has become clear that identifying tools that can be used to visualise their high-dimensional solution sets and support decision making is an important endeavour. In the last decade a range of methods have been proposed that enable visualisation of solutions in terms of the whole set of objectives, and which use dimension reduction to embed the solutions into two or three dimensions [19,24].

In this work we propose a novel way of visualising many-objective populations using geometrical ions (geons). Biederman, in his theory of Recognition By Components (RBC) [2], argues the case for a set of theoretical primitives, called geons, that can be combined to form the structural description of a generic visual object. We built 36 unique geons as 3D models, using 4 attributes: edge, symmetry, size and axis (for a more in-depth explanation please see Section 2.2). A combination of two or more geons is used in a 3-dimensional virtual environment to represent a more complex visual object. For the rest of this paper, this visual object will be referred to as a Geon Based Object (GBO). By representing each dimension of a multidimensional dataset with a geon, our approach can visualise a Pareto front solution with five objectives as a single GBO. This enables the observer to assess all five objectives simultaneously.

The remainder of this paper is organised as follows: Section 2 presents an overview of many-objective visualisation, followed by an introduction to the RBC theory and existing work with geons. Next, we present our approach to mapping the data to a geon, including the impact on the application design. Section 3 gives an overview of the methodology employed for the experiment, including a deep dive into the applied experimental design. Section 4 is dedicated to the results, and Section 5 (Discussion) focuses on the meaning behind the results. Finally, in Conclusion and Future Work, we summarize our work and discuss possible directions for our approach.

BACKGROUND

2.1 Many-objective visualisation

This work considers solutions to multi- and many-objective optimisation problems. Such problems are evaluated according to multiple measures of fitness, the objectives, between which a trade-off exists. A solution optimising one objective incurs a poor score according to another objective, and it is not possible to simultaneously optimise all of the problem objectives; the role of an EA is to locate a good approximation to the Pareto front, the set of feasible solutions that offer the best trade-off between the problem objectives. A multi-objective problem comprises M = 2 or 3 objectives, while for a many-objective problem M ≥ 4.
In addition to the increased difficulty in optimising many-objective problems, part of the reason for this distinction is that solution sets to many-objective problems cannot be visualised using standard two- or three-dimensional visualisation methods such as scatter plots. Various approaches have been proposed to visualise the Pareto front approximations arising from many-objective optimisers (see [19] for a comparative review). Broadly, methods are grouped into those that visualise solutions based on the full set of objectives, and those that use dimension reduction to project solutions into two or three dimensions for visualisation with a standard tool such as a scatter plot. Of those relying on the full set of objectives, some enable the recovery of the individual objective values by inspection, and some do not.

The use of various dimension reduction methods has been proposed for visualising Pareto front approximations. Their purpose is to project the solutions into two dimensions, and methods used to accomplish this include principal component analysis (PCA) [13], multidimensional scaling [24] and self-organising maps [16].

Geon Based Objects (GBOs)

According to the Recognition By Components (RBC) theory, object recognition happens in a set of hierarchical steps. It starts with an early edge extraction stage that considers surface characteristics (luminance, texture or colour). The next stage is the identification of non-accidental properties [2] (collinearity, curvilinearity, symmetry, parallel curves and vertices terminating at a common point) together with the parsing of regions of concavity. The identified regions combined with the non-accidental properties form a structural description of the perceived object. This structural description is then matched against an object representation in memory. Partial representation is possible at this stage if enough features are activated to create an overlap between the viewed image and the object representation in memory.

Biederman theorised that each identified concave region can be represented by a set of simple primitives called geometrical ions (geons) [2]. Geons are typically symmetrical volumes without sharp concavities, characterised by an axis and a cross-section typically at a right angle to the axis. Varying the non-accidental properties of 4 attributes of generalised cylinders (edge, symmetry, size and axis) generates 36 uniquely identifiable geons. The unique combination of geons and their positional relation to each other creates a GBO that can be easily recognised. Generally, the recognition process for GBOs is fast (under 100 ms exposure [2]) and invariant over viewing position. Low image quality has little impact on the recognition process, with relatively strong recognition even from novel points of view or if the object is partially obscured.

The RBC theory was used by [11] to train a neural network to recognise CAD models in a database. The network uses the basic shape of geons and their relationship to each other as an input. A similar approach was used by [18] to train an unsupervised neural network to recognise a range of visual objects. The geons were used as an intermediary step in the training process. The experiments run by Biederman [2] used line drawings of visual objects composed of geons to test his theory. Both experiments [11], [18] used generated three-dimensional versions of the geons, but the focus was on neural network training using geons.
Irani [8], [9] had a more human-focused approach to data representation using the RBC theory in a three-dimensional environment. The goal of the research was to enhance the semantic content in diagrams using perceptual syntax, comparing traditional UML diagrams with a geon based version of UML diagrams. A limited set of three-dimensional geons were generated and used to represent a set of UML diagrams. Although the luminance property of a material is enough to identify the shape [2], additional textures and colours were used in order to display the complex relationships of the UML diagrams. The results [9] were encouraging, with both experts and novices displaying a smaller error value with the geon based version of the diagrams, as opposed to the traditional UML diagram approach.

METHODOLOGY

To understand how and why people interact with a novel tool that allows the user to visualise complex data relationships, we employed a mixed approach, integrating a quantitative post-experiment tool to reflect upon the experiment with a qualitative approach during the experiment. Our geon visualisation tool uses a novel technique to visualise data, and because of that it is important to understand the usage process and keenly guide the development from an early, barely usable stage towards a tool fit for purpose. Qualitative research [6] allows the researcher to look into specific usages and the reasons why a participant decided to take a certain action, instead of looking into trends and quantifiable findings over larger groups of users, which would be less useful at an early stage. A key part of this process is to interpret users' actions and answers to investigate the reasons behind their choices.

For our qualitative tool we used the Think Aloud method [3,22], where users are initially prompted to verbalise their thought process during a given set of tasks. The benefit of Think Aloud is that users are not prompted with additional questions that might distract and disturb their attention from the task they are performing, and it is, according to Van Someren et al., often the case that participants come into a natural rhythm of verbalising their thoughts without much prompting, making the technique moderately easy to employ. To reinforce verbalisation, "Why?" questions can be beneficial, such as "Why did you move from this space in the dataset to the other one?" or "Why do you think this is the best solution?". When employing Think Aloud we try to minimise disturbing the participant from their task while gaining some information on the user's introspection in the moment. Thus, we aim to reduce retrospective memory errors and justifications for certain actions.
To complement Think Aloud, and to give participants some time to think and reflect, we added a post-experiment questionnaire that includes an adapted version of the Post-Study System Usability Questionnaire (PSSUQ) [12] with an additional 4 open questions to gather some qualitative feedback to conclude the session. The PSSUQ is a validated and established usability questionnaire developed by IBM to measure the general usability of their developed software. It uses a Likert scale to quantify the overall usability of a given software artefact based on the user's current experience. As we are developing a novel visualisation tool, it is important to understand whether the tool at any point in time is fit for purpose and whether users can interact with the software. For an early prototype still undergoing massive changes, the PSSUQ by itself is not sufficient for identifying why the software does not perform well or which elements need to be improved. To address those questions, the previously discussed qualitative approach can give more detailed information and identify reasons why participants had issues with the software or approached a task in a given way. To allow the participants to reflect on their experience after our PSSUQ, four additional open questions were added, employing the idea of the "START-STOP-CONTINUE" method [4] used in education.

Data mapping

In our approach, the goal was to map data containing a set of solutions for multi- and many-objective problems to a set of GBOs that would enable the viewer to visually explore and understand the individual solutions. Data was generated for two instances of the benchmark test problem DTLZ1 [5]: one for an instance comprising three objectives and one comprising five objectives. The algorithm executed for 50,000 function evaluations in both cases, and returned 100 non-dominated solutions representing the final approximated Pareto fronts. We note that while better approximation sets can be achieved with bespoke many-objective optimisers for the M = 5 case, demonstrating the efficacy of the proposed visualisation approach did not require highly optimal solutions to the given problems.

In his RBC theory, Biederman argues that the representation of an object is a structural description of the components that form the object and their relationship to each other. These relationships include the relative size of the components, their orientation and the locus of their attachment. The typical configuration of a geon based object (GBO) is with a main geon that acts as the body of the object, with the rest of the geons attached to the main geon or to other geons (already attached to the main geon). The main geon is the largest in size, all the geons attached to it are smaller (medium size), and the geons attached to the medium size geons are smaller still. Typically the largest geon sits at the center of the GBO and the medium size geons can connect to the left, right, top and bottom of it. The next layer can be the smallest geons, which connect at the top and the sides of the medium size geons. With this approach we could technically display up to 17 dimensions in one GBO. This can easily be extended by adding more sizes to geons. An important factor to take into consideration is the fact that humans are slow at recognising quantitative differences in size or curvature [14], [7]; therefore we need a significant difference between relative sizes in order to ensure fast recognition.
For the datasets used in the visualisation, we used the central (largest) geon to represent the average rank [1] of all the objectives for that particular solution. Based on the number of objectives (3 or 5), we used medium geons and small geons respectively to represent each objective. The location of the geon representing an objective stays constant for the entire dataset, so a viewer will always know that, for example, Objective 2 is on the left-hand side of the main geon. For a more detailed explanation of how we mapped the data for the 5-objective solution, please see Figure 1. For the 3-objective solution the mapping is similar, except that it is missing 2 of the dimensions.

In order to visualise each objective's value as a geon, we first normalise the data and then split it into 4 quantile-based ranges. Once each objective has a range assigned to it, we generate a geon for that particular range and place it at the location corresponding to that objective's position in the GBO. For Figure 1, objectives 1, 2, 3 and 5 are in the same range and are therefore represented by the same shape, the squashed lemon. Objective 4 has a different range and is represented by the wedge-shaped geon. The central geon (largest in size) has yet a different range from the previously mentioned geons and is represented by the lemon-shaped geon. The data used for this study were the solutions to a many-objective problem with five objectives. Each solution was represented by a single GBO and positioned on a 2D plane using PCA to reduce the dimensionality of the data. The data set was centred around a polar coordinate system, offsetting the X-Y axis by [−0.5, −0.5]; see Figure 2. As all solution coordinates are in the range [0.0, 1.0], the solutions centre around the polar coordinate system's origin, giving the user a visual guide. The positioning of the data has no direct correspondence to the values displayed; for example, we cannot find the best value for objective 2 around the central point, but a pattern is visible in the arrangement of the GBOs, with clusters of similar shape being observed in the data. The distribution of solutions follows the expected pattern for the DTLZ1 problem: Pareto optimal solutions for this problem lie on the hyperplane where the objective values sum to 0.5, i.e. Σ_i f_i = 0.5. As can be seen, the solutions have been arranged in a triangular shape, with a corner corresponding to each of the objectives.

Figure 3 illustrates the corresponding plot for the Pareto front approximation of the 5-objective problem. This illustrates a similar arrangement of solutions, with a region optimising each of the five objectives being clear.

Application Design

For the development of the application we used Unity3D [20] version 2020.3.0f1. The initial build was for PC, but later on we switched to a WebGL [10] version deployed to a server, due to participants' difficulty controlling the application through a streaming service. All the geons' meshes have been generated programmatically in Unity3D based on a set of four distinct attributes for surface definition: edge, symmetry, size and axis. These attributes are at the core of Biederman's RBC theory and geon generation, and result in 36 distinct primitives.
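As a rough illustration of the mapping pipeline described above (normalising each objective, binning values into 4 quantile ranges that select a geon shape, computing an average rank for the central geon, and placing each GBO with PCA), here is a minimal sketch in Python. The shape labels and placeholder data are hypothetical, and the actual tool implements these steps in Unity3D at runtime.

# Sketch of the data-to-geon mapping: normalise objectives, bin each value into
# one of 4 quantile ranges (each range selects a geon shape), compute the
# average rank for the central geon, and embed each solution in 2D with PCA.
import numpy as np
from sklearn.decomposition import PCA

solutions = np.random.rand(100, 5)             # placeholder: 100 solutions, 5 objectives

# Normalise each objective to [0, 1]
mins, maxs = solutions.min(axis=0), solutions.max(axis=0)
normed = (solutions - mins) / (maxs - mins)

# Per-objective quantile cut points; each of the 4 resulting ranges maps to a shape
shapes = ["squashed_lemon", "wedge", "lemon", "horn"]    # illustrative labels only
cuts = np.quantile(normed, [0.25, 0.5, 0.75], axis=0)    # shape: (3, n_objectives)
shape_idx = (normed[None, :, :] > cuts[:, None, :]).sum(axis=0)  # values 0..3

# Average rank across objectives drives the central (largest) geon
avg_rank = normed.argsort(axis=0).argsort(axis=0).mean(axis=1)

# PCA places each GBO on the 2D plane around the polar origin
coords = PCA(n_components=2).fit_transform(normed)

print(shapes[shape_idx[0, 1]], avg_rank[0], coords[0])   # geon for objective 2, solution 0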
At the beginning of the application the user has the option to load any dataset that we added to the application; in the future we hope to have a system where users can load their own data. The data is processed at runtime: all the values are normalised, the average rank for each solution is calculated, the 2D coordinates are computed using the PCA dimensionality reduction algorithm, the application generates the geon shapes based on the values that they represent, and finally the geons are assembled into GBOs based on the assigned location for each objective. Please see Figure 4.

Once the GBOs are displayed, the user has the option to left-click on a geon and display, for 3 seconds, information about that particular geon. The information is the attribute the geon represents, the range and the normalised value. This information is useful for geon comparison and for users to identify the particular attribute that a geon represents. Please see Figure 5.

Depending on the similarities between various data elements, the coordinates generated by the PCA algorithm sometimes result in GBOs overlapping. This creates a problem, as the user struggles to distinguish different GBOs, especially when they have similar structural representations. In order to overcome this issue, we introduced a value that controls the distance between the GBOs. We keep the GBOs' positions relative to each other, but the space between them can be adjusted by the value mentioned above. The GBOs can be crunched into a small area or expanded over a wider area, while maintaining the relative distance between the visual objects. The value adjustment is done through a UI slider that is part of the menu. The menu is positioned on the left side of the screen and covers approximately 20% of the screen. The menu can be toggled on and off by clicking the right mouse button. Toggling the menu on also freezes the camera rotation, enabling the users to click on the geons and explore their values.

The flying model [25] is used as the navigation metaphor inside the virtual environment. This is typical of the first-person controllers found in gaming environments, but with the additional benefit that the user can explore the space freely by flying, not being limited to ground-level locomotion. The forward/backward movements are mapped to the W and S keys respectively, and they are also mirrored on the Up/Down arrow keys. The side movements are mapped to the A and D keys respectively and mirrored on the Left/Right arrow keys. Mouse movement allows the user to rotate the camera inside the virtual environment.

Experimental Design

For the first experiment, only experts or participants familiar with complex datasets were selected. The experiment was set up in the following way. The participants were invited to a virtual meeting call where each participant met an experimenter. The entire meeting was recorded. To start the experiment, a short brief was given to the participant detailing the structure. Next, an initial short background questionnaire was filled in by the participant using an online survey tool, and the participant was asked to stop and report back after completing the general part about their background. The items collected in this part were age, education, first language and current job role, to gain background information on the participant's expertise.
Next, the experimenter read out the experiment protocol describing the next steps, and explained the software controls and what the users would see once they opened the software. A brief description of what geons are and what type of data is represented was given. The experimenter gave a quick example of how the controls work and showed a reduced dataset to the participant.

Next, the experimenter gave the participants ten minutes to complete five tasks, stating that completion was not important but that the participants should verbalise how they approached each task and what their thought process was. The tasks were finding good/bad solutions for individual objectives as well as globally good solutions.

During those 10 minutes the experimenter occasionally reminded the participants to verbalise. They also asked the participants why they performed certain actions or what they intended to do. In some cases the initial explanation of how the geons are constructed was repeated, as well as details on the controls.

After the ten minutes were up, even if not all tasks were completed, the experimenter asked the participants to complete a modified PSSUQ in the same online survey tool and to stop before completing the remaining 4 open questions. After the participants reported back, the experimenter thanked them, did a short debrief, and then asked them to take some time to complete the last four open questions.

To reduce the time spent on the questionnaire and not overload the participants, we reduced the granularity of the Likert scale from seven to five points, removing the two elements around the neutral axis. This reduces the complexity and expressiveness of the PSSUQ, which is only used for indicating the general usability of an early prototype.

Figure 5: Floating text displayed for 3 seconds when the user clicks on a specific geon. In this case the geon represents objective 1, and we can observe the name of the information displayed, the range of the value that the geon represents and the normalised value of that particular entry in the dataset.

RESULTS

A total of six participants were recruited for the pilot study; the participants were not paid. The participants were split into two groups: the first group of three users interacted with the visualisation tool through screen sharing, so video compression and lag were present, which made the participants' interaction with the geons very difficult. We offered the second group, composed of the other three participants, the use of the tool through a web interface, thus having no lag or image compression and resulting in better interaction with the visual objects.

The pilot study had two main goals: first, to establish the cognitive process needed for understanding, evaluating and exploring multi-dimensional data; and second, to run a usability study of our application design in order to facilitate a better interaction metaphor. The two goals are closely linked, as the user's interaction with the application is crucial in building the mental model needed for solving the given tasks.

Cognitive process

In order to understand the cognitive process, five tasks were given to each participant. The experimenter revealed each task in turn upon completion of the previous task. Examples of tasks were: "The solution with the best score for objective two", "The solution with the worst score for objective four" or "A solution that provides a good trade-off between all the objectives".
All the participants attempted at least two tasks, and no participant managed to attempt all five tasks. For details on the participants' task attempts and successes, please see Figure 6. For the 6 participants, the overall average number of attempted tasks is 2.83 and the average number of completed tasks is 1.83.

The two participants who managed to successfully complete all the attempted tasks (participants 2 and 5) showed a good understanding of the dataset-to-geon mapping. The initial start was slow due to the novelty of the interface and the participants not being familiar with the cognitive process. Once the mental model was formed, the participants managed to complete tasks in a relatively short amount of time compared to the first task, and they verbalised quite effortlessly the steps needed for task completion. One of the major hurdles in completing all five tasks was the application controls, especially the positioning of the camera in the virtual environment, which the majority of the participants found quite difficult.

For the two participants who did not manage to complete any of the tasks successfully, the evaluation of the task analysis shows a partial understanding of the mapping process from data to geon. These participants were able to find the lowest values for a geon (which tells us that the value-to-geon transformation was understood), but the location of the geon in the overall structure of the GBO was wrong; they were, in effect, looking at the wrong geon in the GBO. This clearly shows these participants' lack of understanding of the GBO construction process.

PSSUQ - Usability study

The PSSUQ questionnaire is widely used to measure users' perceived satisfaction with an application. It uses a set of 16 standard questions and calculates a set of scores, normally on a 7-point Likert scale; for our questionnaire we used a 5-point Likert scale. The overall score is calculated as the average of questions 1 to 16, with System Usefulness (SYSUSE) the average of questions 1 to 6, Information Quality (INFOQUAL) the average of questions 7 to 12, and Interface Quality (INTERQUAL) the average of questions 13 to 15 (a short code sketch of this aggregation is given below). The scores for our visualisation can be seen in Figure 7. The smaller the value, the better: an ideal value would be 1, corresponding to Strongly Agree, while the scale midpoint is neutral and the maximum value indicates Strongly Disagree.

Interestingly, one of the questions, "I believe I could become productive quickly using this system", achieved an average score of 1.83, which is significantly better than any of the average scores in Figure 7. This score is significant, as the participants seem to see the potential of the geon based visualisation. This was also confirmed by the additional qualitative feedback given in the four additional questions.

DISCUSSION

In a novel visualisation, simply presenting the data to a user is not enough; the understanding of the mapping process from data to geon, together with the ability to interact with the tool, are vital steps in building a coherent mental model for the user. This was evident from the study results: no participant attempted or completed all five tasks (see Figure 6), some of the participants had only a partially formed mental model of the visualisation paradigm, and others took longer to learn the controls and simply ran out of time. The first part of this section is dedicated to the cognitive model and its importance in the visualisation, and the second part discusses the user's engagement with the application and its significance.
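Before turning to the cognitive model, here is a minimal sketch, in Python, of the PSSUQ aggregation referenced above; the function name and data layout (one participant's responses in a dict keyed by item number) are our illustrative choices, not part of the PSSUQ itself.

# Sketch of the PSSUQ aggregation: OVERALL is the mean of items 1-16,
# SYSUSE of items 1-6, INFOQUAL of items 7-12, INTERQUAL of items 13-15.
# Missing answers are skipped, as for the participant noted in Figure 7.
def pssuq_scores(responses):
    def mean_of(items):
        vals = [responses[i] for i in items if i in responses]
        return sum(vals) / len(vals) if vals else float("nan")
    return {
        "OVERALL": mean_of(range(1, 17)),
        "SYSUSE": mean_of(range(1, 7)),
        "INFOQUAL": mean_of(range(7, 13)),
        "INTERQUAL": mean_of(range(13, 16)),
    }

# Example: a participant answering 2 on every item scores 2.0 on every subscale.
print(pssuq_scores({i: 2 for i in range(1, 17)}))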
Cognitive model

In our overall goal of deconstructing the expert's cognitive model used for multi-dimensional data analysis, we decided to employ a bottom-up approach [21], where the focus is on task analysis. The goal of this pilot study was to observe the experts' cognitive process during the execution of a set of simple tasks and their interaction with the tool during that process. These findings are the building blocks of the cognitive process that enables users to explore, understand and evaluate a dataset using our approach.

Our observations suggest that those participants who managed to build a coherent mental model of the geon representation had no issues performing the given tasks. The participants who lacked a full understanding of the mapping process from data to geon made some assumptions based on their previous experiences with other visualisation applications.

One of the assumptions was that size is a key characteristic in the mapping of data to a geon. The expectation was that two geons that look the same in shape but have different values should have different sizes. It is a fair assumption, but our approach mapped the range of the data value to the geon's shape, and size is simply applied according to the RBC theory in order to facilitate easy shape recognition. The actual value of the data entry was only made available as "on demand" additional information, simply to be used for comparison between geons of the same shape. Our hypothesis was that participants would use their gaze to gain an instant understanding of the value range and, for deeper understanding, would interact with the geon by clicking on it and displaying the additional information. Therefore, a better way of explaining the mapping process is needed. Even though the experimenter had briefed each individual, in depth, about the mapping system and what the geons actually represent, the novelty of the system and the sheer volume of new information that users had to absorb in a short amount of time led to incomplete mental models. In addition, the participants found the controls and navigation quite challenging, further reducing the cognitive capacity available to remember and apply all the new knowledge.
This was evident from the feedback we received from some participants: they found it difficult to remember the range each geon represents. Even though they had the ability to click on a geon and quickly see the range value, the limited space in working memory meant that as soon as a participant went looking for a new range, they completely forgot the shape of the geon that represents that range. Most of the working memory was occupied with exploring the virtual space and the application controls, leaving little room for additional information to be retained. A suggestion from some of the participants was to include a key table of the geons and the ranges that they represent in order to deal with this particular issue. A more hands-on approach to introducing the mapping process will probably enable users to form a better mental model of the relationship. We believe that enabling the participants to assign a geon to each range value before the data is displayed can help with building a better mental model of how geons work. At the moment, the assignment of geons to value ranges is done randomly, by the software, during the loading process. This means that participants first need to gain an understanding of what value range each geon represents, and only after that can they start the data exploration process. Enabling the users to actively choose the geons before the data is displayed will ensure that a model of what the geons represent is formed before the data exploration stage starts, leaving more room for the cognitive process needed to explore the represented data.

Points of view

An interesting observation in our study was that the GBOs' positioning in the virtual environment enables users to look at them from a dual point of view, macroscopic and microscopic, as in the visualisation of a complex neuronal pattern [23]. In our approach we refer to a macroscopic point of view as a top-down view, with the camera situated above the displayed data, visually capturing most of the GBOs. This point of view, combined with the coordinates generated by the PCA algorithm, results in GBOs forming patterns or clusters. These clusters presented points of interest for the participants in their exploration. Some of the participants verbalised their visual search patterns with phrases like "central point" or "this group", indicating that the macroscopic point of view is part of the cognitive process for data exploration.

The majority of the geons that form the clustered GBOs tend to have close values; visually, the GBOs look similar to each other, with some small variations of one or two geons. From our observations, the users tend to explore the grouped GBOs as a group, selecting geons from one GBO and moving on to another for comparison. This type of exploration can be confusing without a point of reference. In order to help the participants with orientation and navigation at the macroscopic level, we used a textured image of concentric circles (see Figure 3 or Figure 2). The central point is centred at 0 on both the x and z axes, matching the central point coordinates of the PCA embedding. This is important in the mental mapping of the GBOs in the virtual environment and is crucial in building a "mental history" of the places visited by the user when exploring the data.
Generally, the best view from which to observe an individual geon is the canonical orientation [17], [2], [26], generally a three-quarters front view, maximising the geon's features and enabling quick recognition. We need to keep in mind that Biederman's study on geon recognition was done with a single GBO at a time, with no other shapes in the background or other types of noise that might interfere with the recognition process. In our study, at the microscopic level (a close-up look at the GBOs), the participants instinctively placed the view in a similar canonical orientation, but with the camera slightly up, looking down on the GBOs. This orientation minimises the background noise, which consists of other GBOs. Simply pointing the camera slightly down gives a user the best view of the geons' features and minimises the interference from other objects.

The controls should be intuitive and seamless, enabling the users to place the view in the best possible spot to observe a geon's features. There is strong evidence of this in the participants' feedback asking for better controls and suggesting that the camera should pivot around points of interest when the user selects a geon on the screen.

The textured pattern on the floor continues to play an important role even at the microscopic level. According to [15], a strong sense of depth can be achieved from the textured floor, as the only other points of reference in the virtual environment are the other GBOs. The textured floor can also help with the "mental history" of visited places through an egocentric view, placing the user amongst the data itself rather than observing it from outside.

CONCLUSION & FUTURE WORK

The visualisation of many-objective solutions is not a trivial task and remains a challenge overall. In this paper we presented a novel way to visualise a many-objective population using geons and, implicitly, GBOs. We ran a small study with the main focus on better understanding the cognitive process employed in multi-dimensional data exploration. We explored the process through a bottom-up approach of task analysis executed by experts while observing their behaviour. The results were encouraging, with some participants achieving a full working mental model and being able to complete the majority of the tasks successfully. A secondary goal of the study was to analyse the application's suitability for the task at hand by asking the participants to complete a PSSUQ usability study, with encouraging results for our approach.

For future work we plan to extend our pilot study, which was mainly focused on experts, to include a higher number of participants, including non-experts. We also plan to iterate on the interaction metaphor in order to give the participants better controls and minimise the cognitive load when exploring the virtual space, which in turn should enable us to refine the cognitive model for exploring a set of many-objective solutions through geon based objects. The GBOs' positioning in the virtual space remains an interesting problem, with the current application employing a PCA algorithm to generate a set of two-dimensional coordinates. The question remains whether a more suitable approach exists. Perhaps the GBOs' positioning should be focused more around the user, in order to enhance the egocentric perception of the visual elements.
Another possible area to explore would be a more complex set of objectives (maybe 10 objectives). The number of solutions can be an interesting challenge; in our study we used a dataset with 100 solutions. How would we display a dataset with a higher number of solutions, say 1,000? What is the interaction metaphor for a more complex data set, both in objectives and in solutions? How is the cognitive process affected by the sheer volume of information the user needs to deal with? These are all interesting questions that we plan to answer in the future.

Figure 1: Geon placement in a 5 objective GBO. The main geon has the largest size and sits at the centre of the GBO. The geons that represent Objectives 1, 2, 3 and 4 are all medium size and attach right, left, top and bottom respectively. The fifth objective is represented by the smallest size geon.

Figure 2: Geon visualisation of the three objective DTLZ1 solutions in a polar coordinate system.

Figure 3: Geon visualisation of the five objective DTLZ1 solutions.

Figure 4: The steps taken at runtime to process the data and transform it into GBOs.

Figure 6: The chart shows the number of attempted tasks and the number of completed tasks for each participant. No participant attempted all 5 tasks. Two participants managed to complete all their attempted tasks successfully, and two participants attempted two tasks with no successful completion.

Figure 7: The average PSSUQ scores from all six participants, with one exception: the Information Quality score. One of the participants did not answer enough questions to be able to calculate a score, so the Information Quality score is the average of only 5 participants.
8,300.6
2021-07-07T00:00:00.000
[ "Computer Science", "Mathematics" ]
Characterisation of low background CaWO4 crystals for CRESST-III

The CRESST-III experiment aims at the direct detection of dark matter particles via their elastic scattering off nuclei in a scintillating CaWO4 target crystal. For many years CaWO4 crystals have successfully been produced in-house at Technische Universität München with a focus on high radiopurity. To further improve the CaWO4 crystals, an extensive chemical purification of the raw materials has been performed and the crystal TUM93 was produced from this powder. We present results from an α-decay rate analysis performed on 344 days of data collected in the ongoing CRESST-III data-taking campaign. The α-decay rate could be significantly reduced.

Introduction

CRESST-III (Cryogenic Rare Event Search with Superconducting Thermometers) [1] aims at the direct detection of dark matter (DM) using cryogenic calorimeters. The standard CRESST-III module consists of a scintillating 24 g CaWO4 single crystal as a target. It is operated at a temperature of ≈10 mK and is equipped with a transition edge sensor (TES) read out by a SQUID (Superconducting QUantum Interference Device) for a precise measurement of the energy deposited by a particle interaction within the crystal. In addition to the CaWO4 crystal, a light detector (also equipped with a TES) is read out in coincidence. This enables discrimination between electromagnetic interactions (background-like events), α-decays (background events, less relative scintillation light) and nuclear recoils (signal-like events, least relative scintillation light) due to the different relative fractions of scintillation light produced. CRESST-III detectors reach thresholds as low as 30.1 eV, allowing a very sensitive measurement of particle recoil energies [1]. One key point for the excellent performance of these detectors is the quality of the target crystals, including a high radiopurity of the CaWO4 material, to minimise backgrounds resulting from the natural decay chains. In particular, β-decays can cause events in the region of interest for DM searches. To assure a high quality of the CaWO4 crystals, they have been produced in-house at Technische Universität München (TUM) for many years [2]. In this way, every step of the production is controlled and optimised. The crystal TUM40 operated in CRESST-II showed an excellent performance and a lower background compared to commercially purchased crystals operated in the same CRESST run [3]. To further improve the radiopurity, an extensive chemical purification of the raw materials and the CaWO4 powder has been developed at TUM. HPGe screening of the powder shows promising results for an improved radiopurity; however, the sensitivity of this method is limited and only limits on the radiopurity could be stated [4]. From this purified powder, the crystal TUM93 was produced in 2019. In total, three CRESST-III target crystals were cut from the ingot and mounted into CRESST-III modules named TUM93A, TUM93B and TUM93C. The crystal TUM93A was cut from the top of the ingot and, due to segregation effects during crystal growth, is expected to be the most radiopure crystal among the three detector crystals [5]. All modules are currently being operated in the ongoing CRESST-III data-taking campaign started in November 2020. A radiopurity analysis focusing on α-decays detected in ≈344 days of this data-taking campaign is presented in this work. For this analysis, a new approach for energy reconstruction has been developed and is presented in the following.
Figure 1: Working principle of a TES. The TES is heated into its transition in the so-called working point (WP). A particle interaction results in a temperature increase ∆T, which in turn results in a resistance increase ∆R. The maximum resistance increase is defined by the resistance difference between the normal conducting resistance and the WP resistance.

Analysis

The outputs of both the phonon detector (PD) and the light detector (LD) are recorded with a continuous data acquisition to enable a dead-time-free stream of data, which is further processed offline. In this way, the analysis can be adapted to the specific needs of, e.g., the low-energy DM analysis or, as in this case, the analysis of α-decays with energies of several MeV. Still, the reconstruction of such highly energetic events with CRESST-III detectors and standard analysis approaches is not possible, due to the optimisation of the detectors for the lowest energies. One reason for this is the working principle of the TES used for the signal readout of both the PD and the LD. A TES is a thin W film operated at a temperature between the superconducting and normal conducting phases (see Figure 1). Energy deposition in the crystal heats the TES (∆T) and results in a resistance change (∆R) proportional to the energy deposition. To maximise this resistance change, and lower the detector threshold, a steep transition is required. When the energy deposited in the crystal heats the TES completely into its normal conducting phase (as for α-decays), a maximum resistance change and in turn a maximum pulse height is observed, which stays constant until the TES cools back into its transition region. In addition, such high energy depositions cause a fast rise in the resistance which cannot be followed by the SQUID electronics, which loses magnetic flux quanta and changes the absolute baseline voltage of the stream. Figure 2 (left) shows an example of an α-event recorded in the detector TUM93A. The pulse is flat at the top, as the TES is in its fully normal conducting state, and the baseline level is lower at the end of the pulse compared to the baseline level before the pulse due to the flux quantum loss (FQL). These pulses cannot be reconstructed with standard pulse reconstruction methods, as those cannot handle the FQLs. Hence, a new reconstruction method was developed which uses the length of the flat part of the pulse (its saturation time), determined as the time during which the pulse exceeds 90% of its maximum voltage. The saturation time is indicated by the blue line and is used to reconstruct the energy deposited in the crystal, as it gives a measure of how long the TES needs to come back to its operating temperature. Together with a correction for the SQUID FQLs, in which the difference between the baseline level before and after the pulse is determined, the energy of α-decay pulses can be reconstructed in both the PD and the LD.

Figure 2: Left: The pulse has a changing baseline level due to flux quantum losses in the SQUID. In addition, the pulse is flat at the top, as the TES is completely normal conducting in this time period. Right: Calibrated scatter plot for the data set of TUM93A. For both the LD and the PD, the reconstruction was performed using the saturation time. Two bands are visible, the e−/γ-band on the left and the α-band on the right.
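To make the reconstruction step concrete, here is a minimal sketch, in Python, of the two quantities described above: the saturation time (the duration the pulse spends above 90% of its maximum) and the baseline difference used for the FQL correction. The function name, window lengths and threshold handling are our illustrative choices, not the collaboration's actual code.

# Sketch of the saturation-time reconstruction: measure how long the pulse
# exceeds 90% of its maximum voltage (the flat, saturated top) and take the
# pre/post baseline difference as the flux-quantum-loss (FQL) correction term.
import numpy as np

def reconstruct_pulse(trace, dt, frac=0.9, n_baseline=500):
    pre = trace[:n_baseline].mean()          # baseline level before the pulse
    post = trace[-n_baseline:].mean()        # baseline level after the pulse
    fql_shift = pre - post                   # FQL changes the absolute baseline
    pulse = trace - pre                      # baseline-subtracted pulse
    above = pulse > frac * pulse.max()       # samples in the saturated region
    t_sat = above.sum() * dt                 # saturation time in seconds
    return t_sat, fql_shift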
In the next step, some data selection criteria are applied to the data: coincidences with the muon veto and the artificial heat pulses sent to the detector for stabilisation and monitoring are excluded. In addition, electronic artefacts like SQUID resets are removed from the data set, and events with too slow a change in resistance are excluded to prevent the wrong reconstruction of pulses of too low energy. No additional data selection criteria are applied, to avoid the possibility of removing α-decay events from the data. The resulting scatter plot of the reconstructed energy in the LD against the reconstructed energy in the PD is shown in Figure 2 (right). The e−/γ-band, also reconstructed with the saturation time method, is visible as the steep band on the left, as the relative light output is higher for electromagnetic interactions. The α-decay band is nicely separated from the electromagnetic background. The α-spectrum is calibrated using four lines present in the data, selected from a wide energy range. As a cross-check, the end of the e−/γ-band at 2.6 MeV is used. The 180W decay line at 2.52 MeV, the 226Ra line at 4.88 MeV, the 210Po surface background line at 5.30 MeV and the 218Po line at 6.11 MeV are fitted by an exponential function, as the saturation time has an exponential dependence on the deposited energy. The pulse model on which this assumption is based is published in [6]. The measurement time is corrected for dead times caused by muon veto coincidences and the artificial heat pulses sent to the detector for its stabilisation.

Results

The calibrated α-spectra for the detectors TUM93A (6.53 kg·d exposure), TUM93B (6.89 kg·d exposure) and TUM93C (6.87 kg·d exposure) are shown in Figure 3. Prominent features are the 180W decay at 2.52 MeV and the two 210Po lines, at 5.41 MeV (full energy detected by the crystal) and at 5.30 MeV for decays where the daughter nucleus escapes from the surface of the crystal and does not deposit energy in it. The strong presence of both peaks compared to other energy regions of the spectra hints towards surface contamination of the CaWO4 crystals with 222Rn and with 210Pb, which decayed to 210Po. A background model is currently being developed for a more detailed study of the spectra of all three crystals. Even though the spectra seem to be dominated by surface contamination, a conservative α-decay rate from the natural decay chains in the TUM93 crystals was calculated by summing up all events in the energy region from 3 MeV up to 10 MeV, as shown in Table 1. The rate difference between the three crystals, even though they were cut from the same ingot, has two origins. First, during crystal growth, impurities are less likely to be built into the crystal lattice than the crystal atoms. Hence, the impurity concentration in the melt increases, and in turn also along the growth axis in the crystal. This process is called segregation. In addition, the strong presence of the 5.30 MeV 210Po line indicates a comparably high surface contamination, which can be different for each detector crystal. The highest observed rate being in TUM93B could also hint towards a mix-up of the crystals TUM93B and TUM93C during detector mounting. Comparing these conservative limits to the α-activity of e.g.
Results

The calibrated α-spectra for the detectors TUM93A (6.53 kg·d exposure), TUM93B (6.89 kg·d exposure) and TUM93C (6.87 kg·d exposure) are shown in Figure 3. Prominent features are the 180W decay at 2.52 MeV and the two 210Po lines at 5.41 MeV (full energy detected by the crystal) and at 5.30 MeV (decays in which the daughter nucleus escapes from the surface of the crystal and does not deposit energy in it). The strength of both peaks compared to other energy regions of the spectra hints towards surface contamination of the CaWO4 crystals with 222Rn and with 210Pb, which decayed to 210Po. A background model is currently being developed for a more detailed study of the spectra of all three crystals. Even though the spectra seem to be dominated by surface contamination, a conservative α-decay rate from natural decay chains in the TUM93 crystals was calculated by summing all events in the energy region from 3 MeV up to 10 MeV, as shown in Table 1. The rate difference between the three crystals, even though they were cut from the same ingot, has two origins. First, during crystal growth impurities are less likely to be built into the crystal lattice than the crystal atoms; the impurity concentration in the melt therefore increases, and in turn also along the growth axis of the crystal. This process is called segregation. In addition, the strong presence of the 5.30 MeV 210Po line indicates a comparably high surface contamination, which can differ from crystal to crystal. The highest observed rate, in TUM93B, could also hint toward a mix-up of the crystals TUM93B and TUM93C during detector mounting. Comparing these conservative limits to the α-activity of, e.g., the crystal TUM40, which was studied in detail in [3,7] and shows an α-decay rate from natural decay chains of 3.080 mBq kg⁻¹, yields a minimum impurity reduction factor of >5.97 for TUM93A, >3.18 for TUM93B and >3.85 for TUM93C. These results show a significant impact of the chemical purification on the α-decay rate in TUM93. The e−/γ-band activity and the activity of single α-decaying isotopes are currently being studied with the help of simulations.

Table 1: Conservative α-decay rates of isotopes of the three natural decay chains (238U, 235U, 232Th) in the energy range from 3 MeV to 10 MeV. All events are assumed to be of intrinsic origin, even though there are hints that the two main contributions stem from surface contamination with 210Po.

Detector   α-Activity (µBq kg⁻¹)
TUM93A     516 ± 62
TUM93B     919 ± 79
TUM93C     761 ± 76
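For orientation, the conversion between an event count in the 3 MeV to 10 MeV window and the specific activities quoted in Table 1 can be sketched as follows; this is a minimal sketch assuming the activity is simply the count divided by the dead-time-corrected exposure, and the event count used here is a hypothetical value.

SECONDS_PER_DAY = 86400.0

def specific_activity_uBq_per_kg(n_events, exposure_kg_d):
    """Specific activity from an event count and a detector exposure in kg*d."""
    exposure_kg_s = exposure_kg_d * SECONDS_PER_DAY
    return n_events / exposure_kg_s * 1e6  # micro-Bq per kg

# Hypothetical count for an exposure like TUM93A's (6.53 kg*d):
print(f"{specific_activity_uBq_per_kg(291, 6.53):.0f} uBq/kg")  # about 516 uBq/kg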
Assessment of major requirements for accessing credit among paddy farmers in Jigawa state, Nigeria

This paper examined the major credit requirements of financial institutions in providing credit to paddy farmers of Jigawa state, Nigeria. Data were collected in 2019 from three selected paddy-farming local government areas of the state. A total of 120 respondents were randomly selected through a multistage sampling technique, and data were gathered with a questionnaire. The binary logit model and marginal effects were applied in the analysis. The results indicated that paddy farmers' educational level, family size, and the guarantor requirement had statistically significant effects on access to credit, with P-values of 0.041, 0.060, and 0.000, respectively, while farm size, the administrative process, the collateral requirement, the interest charge, and the principal repayment duration were not significant. Failure to address these problems may continue to undermine the Nigerian government's efforts towards food self-sufficiency and poverty alleviation. The study suggests that similar research cover more years so as to capture long-term effects, and further recommends that credit providers modify the guarantor requirement and delegate staff who can translate and guide applicants in filling in the credit application forms.

Introduction

Access to credit is vital in promoting crop processing, the purchase of farm equipment and inputs, and technology adoption, among others. Many countries have established microfinance institutions to provide financial services to farmers (Linh et al., 2019). The provision of credit to agribusiness entrepreneurs is characterised by collateral demands, application procedures, and unfavourable interest rates, among others (World Bank, 2013). It was reported that about 1.7 billion households and adults from China, India, Bangladesh, Indonesia, Mexico, Pakistan, and Nigeria were unable to gain sufficient access to financial institutions (World Bank, 2017). Farmers face problems in fulfilling the credit requirements of financial institutions, while inadequate infrastructure and irregular weather lead financial institutions to view agriculture as a high-risk business (Isaga, 2018). It is believed that any obstacle that prevents access to funds will affect GDP, hamper national food security, and preclude the nation from achieving the Sustainable Development Goals (Abraham, 2018). These problems affect access to farm inputs such as fertilizer and will swell the ranks of the poor (Abdullahi et al., 2016; Mustapha and Said, 2016).

Nigeria has established many financial institutions since the 1970s to provide credit to farmers. However, only 4% of commercial banks' total lending across all sectors was allocated to agriculture, and there are numerous complaints from paddy farmers about their inability to access credit in Nigeria (Gabriel, 2018).

Jigawa is one of thirty-six states, located in the northwestern part of Nigeria, with twenty-seven local government areas. It has a total land area of approximately 22,410 square kilometres and an estimated population of 5,041,500. Subsistence farming and animal husbandry are the major occupations of the people. Land cultivated for paddy farming has increased from 70,000 to 100,000 hectares, with a yield of about 9 tons per hectare annually. Recently, most paddy farmers in Jigawa state were unable to repay the credit collected from the banks (CSL Stockbrokers, 2020).
The growth of population, income, and urbanization has substantially increased the global demand for rice, particularly in sub-Saharan Africa (SSA). Nigeria is one of the countries that both produce and consume rice, and rice plays a significant role in food security and poverty reduction (FAO, 2020). Domestic production rose to 3,038 thousand MT in 2013; however, annual consumption has been increasing faster than production, and it is forecast that Nigeria's rice demand will continue growing to almost 36 million MT by 2050 (Adeyemo, 2018). The low production of paddy in Nigeria is connected to the failure of paddy farmers in various states, such as Jigawa, to access credit from financial institutions. Although many studies have established the significant effect of credit on improving farmers' production (Abraham, 2018; Makate et al., 2019), only a few have analysed farmers' access to credit (Saqib et al., 2018; Ali and Awade, 2019). Therefore, this study assessed the effect of the major credit requirements of financial institutions on access to credit among paddy farmers in Jigawa State, Nigeria. The study is vital in providing information on the major obstacles that hinder paddy farmers from accessing credit.

Data source

This study was based on primary data obtained from paddy farmers in the Kaugama, Auyo and Ringim areas of Jigawa state, Nigeria, around July-September 2019 through a questionnaire. The instrument gathered information on paddy farmers' demographic profile, farm inputs, and credit requirements. Descriptive statistics were also computed.

Sampling and sample size technique

A multistage sampling technique was used in selecting the sample; this design is suitable when the elements of the population are spread over a wide geographical region, as the paddy farmers of this study were scattered across various villages. The first stage was the selection of paddy-producing villages: three villages, namely the Kaugama, Auyo, and Ringim areas, were randomly selected from 11 paddy-producing villages. In the next stage, the list of paddy farmers was obtained from the respective village heads and extension personnel and stratified according to farm size category. Finally, a total of 120 respondents were selected as the sample.

Analytical model

The binary logit model was selected because it is homoscedastic and because the dependent variable, access to credit, is dichotomous: 1 if the farmer has access and 0 otherwise. It also permits multiple explanatory variables to be analysed simultaneously. The marginal effect of each independent variable was computed to quantify its relationship with access to credit. Following Babcock et al. (1995), the probability of accessing credit (Y = 1) was specified as

P(Y_i = 1) = exp(Z_i) / (1 + exp(Z_i)),

where the latent index Z_i is a linear function of demographic factors, farm factors, and credit requirements:

Z_i = β0 + β1 DMR_i + β2 FRM_i + β3 ADM_i + ε_i

Here Y is access to credit (1 = yes, 0 = no), and β is a vector of parameters relating the independent variables to the dependent variable. DMR signifies the demographic profile, which includes age, gender, level of education, family size, and income. FRM indicates farm factors, consisting of farm size and farming experience (years). ADM indicates the major administrative requirements that a paddy farmer is expected to fulfil before accessing credit from financial institutions.
These administrative requirements include filling in the application forms and processing them for approval, the guarantor requirement, the collateral requirement, the interest rate, and the duration of principal repayment; ε_i denotes the error term.

Results and discussion

This section is divided into two parts. The first part describes the socio-demographic characteristics of the paddy farmers. The second part presents the results of the binary regression model and the marginal effects, followed by a discussion of the significant effects and their implications for paddy production.

Socio-demographic characteristics of paddy farmers

Table 2 presents the socio-demographic characteristics of the paddy farmers in the Kaugama, Auyo, and Ringim areas of Jigawa state. Most of the paddy farmers in the study area (about 92%) are male, while only 8.3% of the respondents are female. About 52.5% of the respondents fall within the youth age bracket; the average age of the respondents is 37 years, which shows that the farmers are still within the economically productive labour-force age. About 42.5% have paddy-farming experience of 6-10 years, which is associated with the increasing demand for domestic paddy due to price increases and the ban on imported rice. Furthermore, about 34.2% of the respondents hold a secondary certificate, while about 29.2% attended only primary school and only 10% are graduates, showing low educational qualifications among the paddy farmers. Farms fall on average within the small-size category, at only two hectares per farmer. This small landholding contributes to low output, as average production was about 9,793 kg per farmer.

Binary logit regression and marginal effects

The binary logit regression and marginal effect results of each model are presented in Table 3. The first column of each model reports the P-values of the binary logit results, with the standard errors of the coefficients in brackets beneath them. The first figure in the second column (ME) is the P-value of the marginal effect, while the figure in brackets indicates the change in the probability of accessing credit attributable to the variable. The Variance Inflation Factor (VIF) values were all below 5. Model VI combines all variables: demographic, farm factors, and administrative requirements.

Concerning demographic factors, the level of education is significant at 10% in models I-V and at 5% in model VI. The significance of the educational level (0.041) in model VI shows that an increase in paddy farmers' level of education by 5% will lead to an increase in farmers' access to credit by 29%. Farmers with good qualifications tend to understand the documentation and administrative procedures of credit applications better and have higher chances of meeting guarantor and collateral requirements. This finding is in line with the results of several studies (Yunus et al., 2014; Yunus et al., 2015; Yunus and Said, 2016; Saqib et al., 2018). Moreover, family size is statistically significant in all models at 5%. This is associated with the culture of early marriage and an uncontrolled birth rate: financial institutions suspect that credit received may be channelled towards family expenses and social status. A similar pattern holds for the statistically significant guarantor requirement in model II.
It shows that a 1% increase in the guarantor requirement may lead to an increase in credit access of 40%. This positive impact of the guarantor requirement is associated with the strong trust it creates and the shift of risk onto the guarantor, which protects the financial institution from bad debt: repayment of the credit is assured either by the receiver or by the surety. This finding agrees with that of Assogba et al. (2017). In contrast, the administrative process, the collateral requirement, the interest charge, and the duration of principal repayment were not significant for accessing credit in the study area. This may be related to the high returns on harvested crops, low interest charges, and support from both the state and federal governments. The collateral requirement does not affect farmers' credit applications because the financial institutions, in collaboration with the traditional authorities, authenticate the validity of land documents before accepting them as collateral. Moreover, farmers do not worry about the period allowed to repay the principal. The findings of an insignificant interest rate, collateral requirement, and principal repayment duration are contrary to the results of Khanal and Regmi (2017) and Saqib et al. (2018).

Conclusion

The study found that the guarantor requirement was positively significant in accessing credit from financial institutions in the study area. The inability of many paddy farmers to access credit contributes to paddy farming remaining at subsistence level, a situation that may undermine the Nigerian government's efforts towards food self-sufficiency and poverty alleviation. The study recommends that credit providers modify the guarantor requirement and delegate staff who can translate and guide applicants in filling in the credit application form. The government can also draw on this study to strengthen its adult education programmes. Future studies should extend the data over more years to assess the impact in the long term.
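To make the estimation step concrete, the following is a minimal sketch of fitting a binary logit and reporting average marginal effects with statsmodels; it is not the authors' code, and the data frame with its column names is a hypothetical stand-in for the survey data described above.

import pandas as pd
import statsmodels.api as sm

# Hypothetical survey data; column names are placeholders for the variables
# described in the text (access: 1 = access to credit, 0 = no access).
df = pd.DataFrame({
    "access":    [1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1],
    "education": [12, 12, 6, 9, 0, 9, 16, 6, 6, 9, 12, 0],  # years of schooling
    "family":    [5, 5, 7, 9, 11, 8, 3, 10, 6, 12, 7, 4],   # household size
    "guarantor": [1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0],      # met guarantor requirement
})

X = sm.add_constant(df[["education", "family", "guarantor"]])
result = sm.Logit(df["access"], X).fit(disp=False)

print(result.summary())
# Average marginal effects: the change in the probability of access
# associated with a one-unit change in each explanatory variable.
print(result.get_margeff(at="overall").summary())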
Identity and dignity within the human rights discourse: An anthropological and praxis approach

The theological discourse mostly focuses on the moral and ethical framework for human rights and human dignity. In order to give theological justification to the value and dignity of human beings, most theologians point to the imago Dei as the theological starting point for the design of an anthropology of human dignity. Within the paradigmatic framework of democracy, human dignity and human rights have become interchangeable concepts. This article aimed to focus not on ethics but on aesthetics: man as homo aestheticus, as well as the praxis question regarding the quality of human dignity within the network of human relationships. It was argued that human dignity is more fundamental than human rights. Dignity as an anthropological construct should not reside in the first place in the imago Dei and its relationship to Christology and incarnation theology. Human dignity, human rights and human identity are embedded in the basic human quest for meaning (teleology). As such, human dignity should, in a practical theological approach to anthropology, be dealt with from the aesthetic perspective of charisma, thus the option for inhabitational theology. As an anthropological category, human dignity should be viewed from the perspective of pneumatology within the networking framework of a 'spiritual humanism'. In this regard, the theology of the Dutch theologian A.A. van Ruler, and especially his seminal 1968 work Ik geloof, should be revisited by a pneumatic anthropology within the parameters of practical theology.

Introduction

In preparation for this essay, I was surprised, when taking up the work of Ludwig Feuerbach (1904) on the essence of the Christian faith, Das Wesen des Christentums, to discover his struggle to free traditional, and therefore 'orthodox', theology from its God-ideology and to turn theological reflection to the praxis issues of life, to the meaning of our being human within social contexts. He called this focal point the wellbeing (heil) of humans (Feuerbach 1904:283). To my mind, the latter should be the focal point in a practical theological approach to the human quest for meaning and dignity. To what extent is the quest for human dignity, human rights and human wellbeing a practical theological question? Or does this question point merely to 'humanism'?
The August-October 2012 mine strikes in South Africa put anew the question regarding the relationship between human dignity and human rights on the agenda of the human rights discourse. On 02 October 2012 there were pictures of violent people on television using the slogan: 'We demand our right to human dignity.' The next day in the newspaper, in a report on crime in the Cape Flats, it was pointed out that gangsters claim it as their right to point guns at the police. From the viewpoint of the politicians, the approach was to warn the police against force and to care for the dignity of the gangsters. From an ethical point of view, some politicians argued that police should not defend themselves with guns. In the meantime, a child of 6 years, Leeana van Wyk, was severely wounded by the shooting of the gangsters. The mine strikers also killed two policemen; 3 days later, police opened fire on a group of protesting miners, some of whom were armed, and 34 strikers were killed. Elsewhere, in the name of workers' rights, striking truck drivers set 17 trucks on fire. Some of the drivers were severely wounded and ended up in hospital, all in the name of democracy and human rights.

Is it possible to demand human dignity as a right? Although human dignity and human rights are closely connected, and to a large extent interconnected, the basic assumption is that human dignity is a spiritual concept. It points to the quality and value of our being human within the dynamics of relationships. As a spiritual category, human dignity, within the framework of a theological anthropology, is a category sui generis. The quest for dignity is a teleological issue; it is fundamentally about the significance of human life (acknowledgement and fulfilment) and an understanding regarding the beauty of life and the aesthetics of being human. Thus the remark of Valadier (2003:50): dignity 'presupposes a whole anthropology'.

The praxis question: Happy Sindane and the human quest for identity and dignity

With praxis is meant: the intention of human actions within the dynamics of human relationships. One can call it the intentionality within practical actions; that is, the teleological dimension of practical theological reflection. Praxis describes the qualitative dimension of being functions and is therefore directly connected to the anthropological quest for meaning, dignity and identity. Praxis refers to habitus, which is attitude and aptitude as indicators of the ontological quality of our being functions.

In the 1980s, Ed Fairley (1983:23) already advocated for practical theology as the engagement with habitus; habitus as a disposition and motivational power within the actions of the human soul. The whole of the ministerial praxis is determined by disposition emanating from the notion of salvation (our eschatological identity). Disposition is an ontological category representing a new state of being and the stance of our being human in the presence of God. Thus Fairley's (1983:31) argument to move from the clerical paradigm for ministry into the habitus paradigm: 'The term "practical theology" occurred originally to describe theology/habitus.' As such, 'practice meant that aspect of the habitus or wisdom in which the divine object sets requirements of obedience and life' (Fairley 1983:27).
Habitus, as an anthropological category, is intrinsically an aesthetic category (Fairley 2001) referring to the quality of human identity and human dignity. The following case of Happy Sindane poses, to my mind, the praxis question regarding the value of human life and the aesthetics of being functions. It reveals the fact that identity is more basic than rights. Happy Sindane was born to a White father and a Xhosa mother in 1984. Very little is known about his father, Henry Nick (German), who was his late mother's employer. Sindane was thrown into the spotlight in 2003 when he claimed to be a White boy who had been kidnapped by Black people. He walked into a police station and told the bemused police that he was a 16-year-old White boy who had been kidnapped and then raised by a Black family. His quest was quite simple: he wanted to find his true family, to be returned to them, to reclaim his rightful childhood which had been stolen from him. It was found by the court that Sindane's claim was a kind of 'lie' in order to find identity. It was described by the ruling magistrate as an 'intentional lie', a kind of provisional truth that was emotionally and psychically necessary, even though it might not have been factual. This 'intentional lie' masked a child's refusal to accept his father's absolute rejection and his mother's disappearance. The lie was a kind of survival strategy in his desperate attempt to gain identity. After DNA tests, he was identified as Abbey Mzayiya, the son of a domestic worker, but chose to use the name Happy Sindane.

Sindane made news headlines again in 2013 when his body was found in a ditch early on Monday morning 01 April, less than 2 km from his home. A passer-by found his battered body lying on a rocky, litter-strewn piece of veld in Tweefontein, Mpumalanga. He had been stoned to death. There had allegedly been an altercation over a bottle of brandy at the JZee tavern where Sindane had been drinking. An empty bottle was found near his body. The 58-year-old man with whom Sindane had reportedly been fighting in the tavern was subsequently arrested in connection with Sindane's murder.

Sindane's biological siblings, the Mzayiyas of Diepsloot in Johannesburg, who requested to have him buried next to his mother in the Eastern Cape, did not attend the funeral because they were not consulted about funeral arrangements. The real tragedy regarding the death of Sindane is that he died even before he was murdered. His life was taken from him due to a loss of love and a confused sense of identity. The loss of identity 'killed' him; one can even say that the fear of rejection, without an intimate space in which to be accepted unconditionally for who he was, was the real cause of his 'death'. A human being without identity and dignity becomes a 'thing' and a mere 'commodity'. At his funeral, Sindane's grandmother, Johanna Masombuka, covered his coffin with a large blanket, a ritual performed in Ndebele culture. Magistrate Marthinus Kruger, who had presided over Sindane's custody matter in 2004, conducted the service, reading from Psalm 23.

In the majority of news reports about the events described above, it seemed that the focal point in the public discourse was the rights of Happy Sindane, but who cared for the dignity or beauty of Happy Sindane? Is the quest and demand for human rights perhaps not fundamentally linked to a qualitative question: what is the character of a human being's dignity and identity?

Human rights: A luxury within the hell of slums?
It seems that within poor communities in Africa, the discourse on human rights is, to a large extent, a luxury. The basic praxis question is how to survive and not lose one's human dignity. Zanotelli (2002:13-15), in the publication The slums, describes the slums of Nairobi as the hell of life. The slums are usually placed below the sewer line. Within this environment, in order to pose the above question on human dignity, you first need to wash away your materialism, rationalism and 'baroque Catholicism' (Zanotelli 2002:15). The next step is to descend into the hell of the slums; you need to undergo the baptism of the poor in order to talk about dignity:

You learn to read things upside down. Your worldview, your theology, even your morality, just goes to pieces. When I try to dissuade young girls from going to town for prostitution they tell me there is no other way to survive. 'But you are sure to meet Aids!' I insist. 'It's OK! Die of Aids or die of hunger, what's the difference? Or maybe there is! You have a chance of longer life with Aids'. I understood that what I held as morality is, to a large extent, middle-class morality. (Zanotelli 2002:14)

Problem identification

Owing to the democratic ideal, Christian spirituality has been hijacked by political democratisation. It is indeed true: dignity (spiritual realm) and rights (political realm) complement each other. They complement each other because Christianity provides a spiritual foundation for the democratic principles of equality and liberty, whilst democracy offers a practical system of government that suits Christian 'concerns for human dignity and depravity' (Kraynak 2003:105). Although complementary, spirituality (Varga 2007:157) is more fundamental; it makes life sacred and open to the individual (Berger, in Giorgian 2007:170).

The challenge to a theological anthropology is to describe the dynamics between dignity and meaning within the parameters of a spiritual hermeneutics in order to understand better how, eventually, identity is related to human dignity and the spiritual quest for meaning and destiny. Meaning is spiritual and, according to Gräb (2006:52), an indication of meaningful self-expression (Selbstdeutung); meaning and destiny then not in the sense of a fixed purpose-driven agenda, but as an understanding of the quality of life in order to create a humane, safe environment and space for human interaction. According to Flory and Miller (2007:201-218), 'expressive communalism' displays a kind of immediate artistic expression of meaningful living, what one can call an 'embodied spirituality'. In order to do this, the notion of aesthetics should be first on the agenda of human dignity and processes of democratisation.

Within a context of living below the sewer line, social violence, crime and fraud, the relationship between human rights and human dignity indeed becomes a burning issue. Should human rights be founded in human dignity, and not vice versa? If one is deprived of all forms of human rights, can human dignity still prevail, as in the case of Happy Sindane?
Hypothesis

My basic hypothesis is that in order to detect the meaning of identity and dignity, and to promote human rights, our starting point in the first place should be aesthetics (with the emphasis on the value and meaning of human life) and not ethics (with the emphasis on moral issues and the tension between good and evil). When one starts with human rights, the discourse runs the danger of becoming moralistic in the sense of conditional demands (a moralistic imperative). Hobbes' 'wolf' then dictates the claim for human rights. However, human beings should be assessed in the first place within aesthetic categories (the indicative of being) and not from the perspective of ethical categories (the imperative of being as related to morality and sinfulness). Theology is basically a reflection on the 'Divine Beauty' (Pattison 2008:109-110); theological ethics emanates from theological aesthetics (Murphy 2008:5).

My second hypothesis is that dignity is not a value inherent in the person (Hobbes, in Negt 2003:30); dignity is a relational category:

So dignity is not an attribute peculiar to persons and their singularity; it is a relationship, or rather manifests itself in the gesture by which we relate to others to consider them human, just as human as we are, even if their appearance suggests nonhumanity, indeed inhumanity. (Valadier 2003:55)

I cannot 'claim' dignity; I am dignity and I dignify life within the quality of habitus.

Homo aestheticus: Eschatology and the beautification of human life

Within traditional Christianity, the notion of 'beauty' is often absent in theological reflection (Fairley 2001:6); rather, there is a kind of indifference which treats beauty as the beast, something to be excluded, marginalised or ignored (Fairley 2001:7). In systematic theology, human dignity is mostly viewed as an ethical issue and not as an aesthetic issue. For example, Huber (1996:xvi) links human dignity and human rights to an 'ethics of human dignity'. Indeed, 'Human dignity is a packed-up ethical argument. Its lofty status can be recognized from the way in which it is written into the texts of constitutions' (Ammicht-Quinn 2003:39). Perhaps the reason why the discourse on human rights mostly focuses on dignity as an ethical category is the close connection between dignity and human failure (sinfulness).
In the debate on a theological approach to anthropology, theologians usually point very aptly to the notion of human sin and the connection between the corruptio totalis and human fallibility. In many cases, the reason why Christian spirituality was hesitant to be engaged in the debate on human dignity and human rights was the use of the notion of the Fall as the starting point for a theological reflection on anthropology. For example, the anthropological notion of creation and the 'image of God' stir up the debate on the doctrine of sin. Hence the shift in many 'enlightened' theological circles on human rights to withdraw from the doctrine of original sin:

The more uninhibited and optimistic the talk was regarding the dignity and abilities of humans, the greater was the need to relativize and secularize the doctrine of original sin. The doctrine appears - in the form of insight into the finiteness and fallibility of humans - merely as a limiting condition of human self-realization, no longer a description of the very essence of humans. (Huber 1996:120)

Theological aesthetics does not ignore the reality of human fallibility, but rather takes as its starting point the exclamation of God that the creation of humans was an aesthetic event, regarded in the Genesis narrative as excellent and 'very good'. Good then not as an ethical category, but as a meaning category detecting destiny, significance and purposefulness (telos), as well as an aesthetic category pointing to worth and vocation. Theological aesthetics deals with the quality and value of our being functions and our eschatological status before God (coram Deo).

In this respect the notion of ethos is most helpful. Ethos refers to virtue and attitude, conduct and habitus, the essential make-up and characteristics of something (human identity). Ethos refers to the aesthetics of identity and dignity. In this case identity represents the unique personal, individual characteristics of a human being (our calling and vocation), whilst dignity reflects personal self-value and self-image as related to meaning and worth. Aesthetics without ethics is not possible. Whilst ethos is connected to the aesthetics of value and meaning, ethics represents the normative framework of life; it gives direction to ethos and represents the imperative within the indicative of aesthetics. Ethics represents the normative framework for meaningful living. However, being and the mode of human existence (So-sein) are, in a spiritual approach to anthropology, more fundamental than doing. The argument of Drewermann (1992) points in the same direction. Being, and the identity question 'Who am I?', are more fundamental than the question 'What should I do?'. Inner truth and knowledge have priority over behaviour and external actions (Drewermann 1992:755). The task of hermeneutics is to bring the deeper levels of existential anxiety to consciousness and to peace (Drewermann 1991:339). If it fails to do this, a hermeneutics of human self-understanding faces the danger of becoming merely a process of moralising.

What then is 'human' in human dignity and human rights, especially if one takes Hobbes' notion about the wolf in human nature seriously: homo homini lupus (Negt 2003:31)? What then is meant by human dignity in a theological anthropology and the meaning of life? What is the link between human dignity and the notion of a Christian spiritual aesthetics?
Dignity and beauty from the perspective of eschatology

From a judicial point of view, dignity is mostly associated with equality, human rights and the value of people. With reference to Kant (in Ackermann 2013:58), one can argue that dignity refers to autonomy or freedom. Thus the hypothesis of Ackermann (2013:85) is that dignity connects with concepts such as equality and non-discrimination. In this regard human worth (dignity) becomes a kind of criterion by which to detect respect, non-discrimination and equality.

As stated above, in the debate on the interplay between human dignity and human rights, the main starting point for theological reflection is mostly creation and the notion of the image of God. For example, the Italian humanism of the 15th century built the notion of human dignity upon the concept of humans created in the image of God (Huber 1996:117). However, the 'image of God' concept in Genesis 1:26 points more to qualitative representation within the dynamics of relationships than to rights, ethics and morality (Ammicht-Quinn 2003:41).

What it means to be truly human is closely connected to the fact that human beings live in the presence of God and are created in the image of God to present the character and identity emanating from the covenantal encounter with God. Despite the close identification between dignity and rights in the anthropological approach of J. Moltmann (1984:23), the following quotation underlines the notion of representation within the dynamics of relationships: 'The image of God is human rights in all their relationships in life.' Thus in God's liberating and redeeming action the original destiny of human beings is both experienced and fulfilled. In the 'image of God' concept, the divine claim upon human beings is expressed (Moltmann 1984:22).

According to Kraynak (2003:90), the problem with the notion of the imago Dei resides in the fundamental difference between the biblical and the contemporary understanding of human dignity: 'In the biblical view, dignity is hierarchical and comparative; in the modern, it is democratic and absolute.' A further problem is that the imago Dei refers not so much to inherent dignity, but to representation. It is more a relational category; 'it is also something to be won or lost, merited or forfeited, augmented or diminished' (Kraynak 2003:91).

Owing to the emphasis on aesthetics rather than ethics, our point of departure is not the creation paradigm but the recreational paradigm of eschatological thinking. Eschatology, with its emphasis on justification, views human beings from the perspective of who we already are in Christ (spiritual ontology). Our identity is determined by salvation and grace. We are accepted unconditionally for who we are. It is not what we do that is fundamental for the quality of ethos, but who we are. The indicative of salvation determines the imperative, which emanates from the eschatological character of salvation. 'The imperative does not appeal to Christian's good will or ability, but recalls what they have already received in baptism: freedom and a new Lord [the indicative]' (Schrage 1988:176). What is therefore required is not that we do something, but that we be something (some-body) (Schrage 1988:43). The crucial point is the transformation and metanoia of the individual (transformation of stance, conduct and orientation, the telos of life) in terms of intention, motivation and goal. Thus Schrage's conclusion is that Jesus' ethics was an ethics of intention (Schrage 1988:43).
The salvific nature of the kingdom of God determines our ontological stance in both life and death. The character of the kingdom determines human conduct (Schrage 1988:37). This character of the kingdom can be captured by the theological notion of eschatology. Because of eschatology, the will of God cannot be deduced from any universally recognised ontological order, as in the case of the so-called ethics of natural law. The status quo cannot be preserved, as in the case of the doctrine of creation in ethics. God's will is enacted in the eschatological act of salvation.

The important point to grasp in an eschatological approach is that human conduct (habitus) is a consequence, not a condition, of parousia. Within the coming of God's kingdom, this eschatological stance and understanding of consequential pneumatological action is the impetus for meaning. When we do not cooperate with and embody this eschatological and pneumatological realm, the indicative of salvation becomes judgement.

Ethics is a consequence of eschatology and not a precondition. In this way an eschatological approach undermines perfectionism and legalism. What is most needed is wisdom (sapientia) in order to beautify human life; the aesthetic presence of unconditional love: 'Presence embodies grace' (Augsburger 1986:36).

Within a pneumatological paradigm, the human being is not assessed in terms of an opportunistic approach, which implies that all relationships are fine as long as they embody God's presence through empathetic responses. In an opportunistic approach the focal point is merely individual need satisfaction and the maintenance of basic human rights. Neither are humans assessed in terms of a pessimistic approach, which implies that human beings are merely sinners and doomed to failure. In a pneumatological approach, human beings are assessed realistically. A realistic approach in spirituality means that as Christians we are already new beings in Christ. In Christ, humans are endowed with the fruit of the Spirit (Gl 5:22-26). Human beings in Christ are 'charismatic human beings'. The reality of the fruit of the Spirit implies a pneumatological ontology: one is (eschatologically speaking) therefore love. From a pneumatological point of view, love - as in the case of all the fruit of the Spirit - is now a being function and an aesthetic category. 'Do you not know that your body is a temple of the Holy Spirit, who is in you, whom you have received from God?' (1 Cor 6:19).

Eschatological aesthetics provides the driving and motivational factor for human actions. Eschatology and its connection to the theological notion of grace provide the spiritual, even psychic, energy for meaningful living. For Rombach (1987:379), dignity then describes the humane human being (Der menschliche Mensch): the human being shaped by the social processes of identity and meaningful space (Identität = a spiritual networking of meaning as the whole which gives significance to every particular part).

From human rights to human dignity (dignitas)

Purposefulness as an aesthetic taxonomy of human life

To a certain extent the concepts of human dignity and the notion of the democratisation of life have become closely linked to the notion of human rights. According to Huber (1996:114), ideas of human dignity and human rights have been shaped by a long historical development. In this regard respect and equality are interconnected categories.
Within the European tradition, talk of human dignity was intertwined with the rank and status of particular persons in society. The concept dignity (dignitas) is therefore a social category related to that of honour (honor) (Huber 1996:115). The turn toward the human being as the centre of the whole of the cosmos was fed by the Renaissance and the humanism of the Enlightenment. Owing to the Kantian influence of human beings as rational beings, the notion of human autonomy put an 'anthropocentric' worldview at the centre of the human dignity debate. Within this worldview 'dignity' has increasingly meant 'the worth of being human':

Dignitas became closely associated with humanitas as to be construed as a synonym. To be able to say what dignity is would be to describe the fundamental meaning of being human. (Meeks 1984:ix)

Dignity means to be human. 'For this reason, dignity has become the key concept in the worldwide struggle for human rights' (Meeks 1984:ix).

Within the human rights discourse it is often extremely difficult to differentiate between human rights and human dignity. The discourse has become 'slippery'. Human dignity has even become an in-between issue; it is squeezed in between sanctity and depravity (Witte 2003:119-137), between man as beast and man as angel (merely divine) (Meilaender 2009). Mostly, the debate focuses on moral and democratic issues within the framework of personal, social and political ethics. Moltmann (1984) distinguishes between human rights as the quest for freedom, justice and equality, whilst dignity refers to how these issues impact on the quality of life of the individual, the unique and particular person:

Human rights are plural, but human dignity exists only in the singular ... The dignity of humanity is the only indivisible, inalienable, and shared quality of the human being. (Moltmann 1984:9)

Initially, European humanism linked the notion of human dignity to the Christian concept of humans created in the image of God. Human beings became the microcosm of God, containing in them a multitude of choices. One can therefore say that the 'modern age, which began with humanism, is characterized by the conviction that human dignity is anchored in the self, namely in one's rational talents' (Huber 1996:117). It was when the recognition of the equal dignity of all human beings was incorporated within the politics of democratisation and institutionalised by international law that the shift from dignity to human rights became a focal point for the discourse on the value and worth of human beings. The reason perhaps is that human dignity requires human rights for its embodiment, protection and full flowering (Meeks 1984:xi). One must therefore admit that without human rights human dignity becomes a fleeting idea without concrete and contextual meaning.
Within the tradition of Plato, Aristotle and Kant, dignity became mostly related to intelligibility. Dignity then resides in the human nous or mind. Eventually dignity and rights become qualities of radical rational autonomy:

On one common reading, 'dignity' refers to a basic faculty; it denotes the bare capacity for intelligent free choice shared equally by all non-damaged persons. One's rational freedom may be misused, but the simple possession of it is the ground of respect. (Jackson 2003:143)

Meilaender (2009:8) distinguishes between two concepts of dignity: human and personal. Human dignity then has to do with the powers and the limits characteristic of our species, a species marked by the integrated functioning of body and spirit. Personal dignity refers to the individual person, whose dignity calls for our respect whatever his or her powers and limits may be. Although human dignity refers to many layers of meaning, Meilaender (2009:89) points to equal respect as a principle and theoretical basis for human dignity.

Albeit, one should agree with Meeks (in Moltmann 1984:ix) that 'dignity' is a difficult word to define. It is often used as an interchangeable concept with human rights. 'For this reason, dignity has become the key concept in the worldwide struggle for human rights' (Meeks, in Moltmann 1984:ix), a struggle embedded in different cultural contexts and deep ideological disagreements over human rights. The further problem is that dignity defined in many different ways immediately entails a counter-definition of others as inhuman, as not possessing dignity.

In his book On human dignity, Moltmann (1984:31) also connects the two concepts to one another: 'Through the service of reconciliation, human dignity and right are restored in this inhuman world.' However, the important point in the human rights debate is that human dignity determines the quality of human rights; hence the reason to separate the two in order to understand their interconnectedness. 'Human rights spring from human dignity and not vice versa' (Meeks 1984:xi). Furthermore, one can conclude and say: human rights presuppose a kind of fundamental dignity and therefore a sense of meaning, purposefulness and vocation. However, human dignity as a sense of meaning and vocation has implications for the quality of being functions. Its link with responsibility and purposeful actions presupposes definite personal qualities and character. This is where the interconnectedness between identity, dignity and virtues comes into play. Kreeft (1986:192) argues that virtue is necessary for the survival of civilisation, whilst religion is necessary for the survival of virtue. Without moral excellence, right living, goodness, purity, chastity and effectiveness, our civilisation is on the road to decline. Civilisation needs justice, wisdom, courage and temperance.
Virtues and meaningful actions

It was Aristotle who underlined the importance of virtues for purposeful actions. To this end he identified four basic virtues: prudence, justice, temperance and courage. It is indeed true that Aristotle and Homer's understanding of arete differs from that of the New Testament. The New Testament not only promotes virtues such as faith, hope and love, but views humility (the moral for slaves) as one of the cornerstones in the formation of a Christian character (MacIntyre 1984:245). MacIntyre's (1984:249) conclusion is of importance to the debate on the interplay of values and virtues. In both the New Testament and Aristotle's comprehension, despite differences, virtue has this in common: it empowers a person to attain that characteristic essential for attaining meaning and significance (telos).

Virtues motivate people and bring about integrity (Crossin 1998). They represent enthusiasm for life (enthusiasm = literally, God within us) and become a driving force that enables one to establish and nurture life-giving and healthy relationships. They safeguard human dignity and bring about a human space of moral soulfulness. Sound values are part and parcel of spiritual health; vice points in the direction of spiritual pathology and 'moral illness'. Virtues could be viewed as identity made visible in habitus and the quality of human relationships. Virtues display identity (Meilaender 1984) and reveal the character of our being functions; they exhibit the character of the human soul and are tested and displayed within the realm of relationships.

Human dignity and human identity: Towards a relational approach

The argument up to now has been that identity is a qualitative concept connected to the value of inter-human and intra-human communication. Human dignity and human identity should therefore be viewed as relational categories. Relation refers to intimacy and interconnectedness; it can, inter alia within the framework of African spirituality, be linked to a kind of ubuntu philosophy, namely that a human being is only humane through the relationship with other human beings. What is envisaged in an African spirituality is harmony in interpersonal relationships: umuntu ungumuntu ngabantu or motho ke motho ka batho, approximately translated as 'a person is a person through other people' (Mtetwa 1996:24). Life can therefore only be healed if relationships are healed.

It is a fact that the notion of relationality has often been questioned as a reliable and valid approach. The critical point is that human beings often act in relationships with enmity, hatred, anger and violence rather than with unconditional love. Our love is most of the time conditional. Hobbes' terse slogan that man is wolf to man (homo homini lupus) is, from a sociological point of view, indeed relevant. However, it is argued here that from a pneumatological point of view, theology should revisit the slogan 'man is man to man' (homo homini homo); the term 'human' then stands for the capability to have empathy, solidarity and cooperation (in Huber 1996:118). As a relational, systemic and process category, dignity is closely related to identity as well as to integrity and congruency.
The interplay between dignity, identity, integrity and congruency

For the aim of this essay, identity can be described as a process of personal identification consisting of the interplay between:

• Intra-processes of self-understanding and self-evaluation (Who am I?). Intra-spection as a critical assessment of ability, skill and level of responsibilities.
• Inter-processes of role-function and feedback (How do I respond and perform? Mirroring oneself within relationships: level of acceptance or rejection). Inter-spection as a critical evaluation of the quality of interrelated networking.
• External processes regarding norms, values, belief systems, worldviews and paradigms (the factor of motivation, with the questions: What keeps me going? And to what do I commit myself?). Trans-spection as a critical assessment of the norms, values and belief systems that determine responsible behaviour and informed decision-making (the normative framework of life).
• Contextual issues embedded in culture (What shapes my life and influences the quality of decision-making and life choices?). Interculturality: the mutual exchange between particularity within a specific context, customs and habits, and the global structures that determine life on a daily basis. At stake in our global society are, inter alia, technology, social media and virtual reality.

'Identity', as derived from the Latin idem, indicating the same, conveys the idea of continuity. Identity presumes a continuity between the human I and behaviour; hence the importance of congruency. Congruency happens when the self is a true reflection and portrayal of the conduct and experiences of the human I (Möller 1980:94). Congruency is about remaining faithful to oneself, communicating authenticity and truth (Heitink 1977:69). It is about the question to what extent one's belief system correlates with actions, lifestyles and behaviour.

Identity is a dynamic process. The development of identity, therefore, is not linear, but a zigzag movement between experiences of the human I and the response of the environment. The movement acts like a spiral in which experiences of life during each stage of human development play a decisive role. The factors of discontinuity and continuity, as well as acceptance and rejection, will determine the quality of the identification and therefore of identity. The level of congruency will create a sense of integrity, depending on the norms and values internalised.

Identity and vocation: The principle of responsibility

The answer to the question 'Who am I?' depends on the quality of the human reaction and on the degree and quality of human responsibility. Our basic point of departure is therefore the core principle that qualifies ethos (attitude and aptitude) in human behaviour: respondeo ergo sum. In a theological anthropology, 'identity' means that people discover that God calls them to respond to their destiny: to love God and their fellow human beings. People should therefore display the quality of their responsibility and the genuineness and sincerity of their obedience to God in the way that they love.

The principle of responsibility, which leads in turn to self-acceptance, presupposes awareness. People within a specific stage of development need to be aware that they should display real insight into the specific claim made on their personal functions during this stage. Their development and growth are determined by the extent to which they accept responsibility for the development of their potential in life.
A developmental model in a pastoral anthropology should always deal with the ethical principle of love, because it is an important director in the process of disclosing and discovering inner potential.

We can conclude that identity as an indication of maturity and adulthood presupposes a process of maturation in which different polarities, indicating the critical challenges implied by human life, play a decisive role. Whether identity is attained and diffusion overcome will determine the quality of adulthood: intimacy, generativity and integrity. Meaning is then interconnected with adulthood and maturity. In terms of Erikson's (1974:28) understanding of the life cycle, fidelity is the cornerstone of integrity, identity and maturity: 'Fidelity is the ability to sustain loyalties freely pledged in spite of the inevitable contradictions and confusions of value systems.'

Virtue therefore determines the quality of human identity and human dignity; it describes the humanness within our being human. One could say that humanity and humanness refer to the character of our human freedom, that is, our ability to take responsibility for life and to make responsible decisions that will enhance the quality of life.

Towards a pneumatological understanding of dignity and aesthetics

To conceptualise dignity is indeed difficult. Dignity is a many-layered concept:

• Dignity (dignitas) within a hierarchical paradigm points to status, position and authority.
• Dignity (dignitas) within an ethical paradigm points to equality, justice and rights within the quest for liberation.
• Dignity (dignitas) within an aesthetic paradigm points to meaning, telos (purposefulness, destiny or significance) and intimacy: the basic quest to be accepted unconditionally for who you are; the essence of the humanum.
• Dignity (dignitas) as a spiritual and theological category.

In theology dignity refers to the value of human life as determined and defined by the eschatological aesthetics of a suffering God. This suffering puts God on the bottom line of life, 'below the sewer line' (Zanotelli 2002:15), within the hell of the slums. The 'ugliness' of dereliction (My God, my God, why have you forsaken me?) beautifies life unconditionally, despite life under the bottom line. Spiritual aesthetics therefore determines the quality of human dignity and ethics. Garcia-Rivera (2008:177) remarks as follows: 'This em-pathos, mediated by their own distinct accounts through the beauty of the crucifix, in turn becomes, second, syn-pathos - a plea for divine sympathy with their own suffering'.

Conclusion

Towards a paradigm shift in a theological anthropology: Humanum determined by pneuma

For the connection between aesthetics and a Christian spiritual approach to dignity, a theological anthropology needs the following paradigm shift: from incarnation theology to inhabitation theology. The praxis question in anthropology should eventually reside in pneumatology and not in Christology. For this paradigmatic shift, the theology of the Dutch theologian A.A. van Ruler (1968) is most relevant and should be revisited. Why?

Within incarnation theology the human being is nothing (homo peccator) and God is everything (sacrificial grace): Christ is mediator. Christology is about redemption. In inhabitational theology, man, the human being, is becoming 'whole' (homo aestheticus) and therefore 'everything' (humanum); human beings are not excluded from salvation (heil, sjalom), but totally incorporated (Rebel 1981:209), because the humanum is now determined by pneuma.
In terms of pneumatology, Van Ruler points to the category of 'reciprocity' (Rebel 1981:145), which means that the human will starts to correspond with the divine will through the indwelling presence of the Spirit. The human person starts to become theonomous and therefore fully authentic and autonomous; that is, the person displays the charisma of the Spirit (the soulfulness of embodied humanism).

To build in any concurrency between God and human beings is pagan idolatry (Rebel 1981:99); pneumatology rather creates a theological osmosis (Van Ruler, in Rebel 1981:85) between God and human beings. Sin is secondary; eschatological being is primary. In a Christological anthropology, the object of faith is salvation (heil = becoming whole); in a pneumatic anthropology, the object of salvation is the praxis of the humanum (Rebel 1981:140). A pneumatic anthropology describes spiritual wholeness (a spiritual humanism); it determines the quality of human dignity (charisma as pneuma-beauty) as well as the rights of man (aesthetic responsibility).

'Man is homo festivus and fantasia homo' (Cox 1969:11). One can even say that the attempt to formulate the Christian faith in rational categories and to define God in terms of a correct doctrine (true confession) turned the Christian faith in the direction of scientia (scientific knowledge, the positivistic knowledge of the mind), rather than towards sapientia (wisdom and the devotional knowledge of the heart). As Cox (1969:10) very aptly remarked: 'Scientific method directs our attention away from the realm of fantasy and toward the manageable and the feasible.' However, man as homo aestheticus points in the direction of the sapientia of beauty: the passionate expression of redemptive grace and sensitive benevolence.

In Christian spirituality, aesthetics as the wellbeing (heil) of human beings should reckon with two important categories: doxa and charisma. Within a Christian spiritual approach, dignity should be linked to habitus as a reflection of the glory of God. 'Glory (doxa) is the closest word to dignity in the New Testament' (Kraynak 2003:93). Furthermore, dignity as a spiritual category cannot avoid the notion of grace and unconditional, sacrificial love (the eschatological qualification) and the fruit of the Spirit (charisma) (the pneumatological qualification). Hence our finding that human dignity and human rights reside as spiritual categories in the fact that dignity is enfleshed in human bodies as a result of the indwelling presence of the Holy Spirit; human dignity is a matter of inhabitational theology.
Identity (characteristics), dignity (meaning and worth), ethos (habitus and pathos) and ethics (responsibility and human rights) describe an interconnected dynamics of networking relationships. This networking can be described as a hermeneutics of 'soulful human being'. An aesthetics of identity and dignity presupposes the following paradigm shift: from psycho-autonomy (self-determination) to pneuma-relationality, the intimate space of unconditional love as framed by koinonia and diakonia. In terms of this pneumatological and eschatological perspective one can claim: Happy Sindane was beautiful. Unfortunately, within the relational dynamics of life, he was never exposed to the homo aestheticus inherent to a theological anthropology. Perhaps the praxis of theology failed Happy Sindane. From a practical theological perspective I must admit: it seems that the koinonia and diakonia failed to promote the beauty of Happy Sindane. A rhino without a horn has no identity. Sindane died without a horn. His legacy: an intentional lie caused by existential unfulfilment and relational 'ugliness'. Instead of a provisional truth, could it perhaps be possible that a spiritual aesthetics, and the discovery of the link between identity and an eschatological understanding of homo aestheticus, could have provided a praxis framework for an existential truth in order to discover the quality of our being human?

The aesthetics of life emanate from this eschatological proposition: in Christ human beings are already a new creation; this is our Christian spiritual identity. Owing to our eschatological identity, being functions qualify knowing functions, doing functions and feeling functions. This new spiritual ontology is enfleshed and exhibited in the fruit of the Spirit. Pneumatology beautifies life by means of the charisma of the Spirit and the service (diakonia) of the church. This is why one can conclude that human dignity, as a theological concept in a pastoral anthropology, is primarily a pneumatological endeavour. Dignified by the Holy Spirit, life and our being human become beautiful. As Christians, we should then display the fruit of the Spirit and, in doing so, Christians would start to beautify life and grant others human dignity.

We were pushing a goods truck, and suddenly I stood in front of a blossoming cherry tree. I almost fainted with joy of it. After a long period of blindness without any interest, I saw colours again and sensed life in myself once more. Life began to blossom afresh. (p. 27)
cDNA Cloning and Expression of a Novel Human UDP-N-acetyl-α-D-galactosamine:polypeptide N-Acetylgalactosaminyltransferase, GalNAc-T3

The glycosylation of serine and threonine residues during mucin-type O-linked protein glycosylation is carried out by a family of UDP-GalNAc:polypeptide N-acetylgalactosaminyltransferases (GalNAc-transferases). Previously two members, GalNAc-T1 and -T2, have been isolated and the genes cloned and characterized. Here we report the cDNA cloning and expression of a novel GalNAc-transferase termed GalNAc-T3. The gene was isolated and cloned based on the identification of a GalNAc-transferase motif (61 amino acids) that is shared between GalNAc-T1 and -T2 as well as a homologous Caenorhabditis elegans gene. The cDNA sequence has a 633-amino acid coding region indicating a protein of 72.5 kDa with a type II domain structure. The overall amino acid sequence similarity with GalNAc-T1 and -T2 is approximately 45%; 12 cysteine residues that are shared between GalNAc-T1 and -T2 are also found in GalNAc-T3. GalNAc-T3 was expressed as a soluble protein without the hydrophobic transmembrane domain in insect cells using a baculovirus vector, and the expressed GalNAc-transferase activity showed a substrate specificity different from that previously reported for GalNAc-T1 and -T2. Northern analysis of human organs revealed a very restricted expression pattern of GalNAc-T3.

The glycosylation of serine and threonine residues during mucin-type O-linked protein glycosylation is carried out by a family of UDP-GalNAc:polypeptide N-acetylgalactosaminyltransferases (GalNAc-transferases) (EC 2.4.1.41). Two distinct human GalNAc-transferase genes, GalNAc-T1 and -T2, have been cloned and characterized to date (1-3). Analysis of the acceptor substrate specificity of GalNAc-T2 has revealed substrates that this transferase does not utilize (3,4). In the present study we have analyzed the acceptor substrate specificity of GalNAc-T1 and found that neither GalNAc-T1 nor -T2 utilizes all substrates identified, thus suggesting the existence of additional GalNAc-transferases. The existence of additional GalNAc-transferases has also been suggested by Hagen et al. (5) by analysis of sequence similarities of expressed sequence tag clones with those of GalNAc-T1 and -T2. O-Glycosylation in yeast has similarly been shown by Tanner and colleagues (6-8) to be initiated by at least four mannosyltransferases. Families of glycosyltransferases with related acceptor and/or donor substrate specificities may be encoded by homologous genes showing segments of sequence similarity (9,10). Initially, no sequence similarities were found between enzymes having the same donor substrate specificity (11), but as more enzymes have been cloned, several families of homologous glycosyltransferase genes have been identified. Livingston and Paulson (12) originally identified a sialyltransferase motif in a segment of 55 amino acids that has now been found to be conserved within all identified members of the sialyltransferase family (13). Similarly, sequence similarities are also found within α3/4-fucosyltransferases (10,14), α2-fucosyltransferases (15), β6-N-acetylglucosaminyltransferases (16), β4-N-acetylgalactosaminyltransferases (17), the histo-blood group A/B transferases, and an α3-galactosyltransferase (18). The human GalNAc-transferases T1 and T2 share a segment of 61 amino acids with 82% sequence similarity, and this segment is also found in a deduced homologue, ZK688.8 (see Fig. 1), which has been observed to exhibit GalNAc-transferase activity (5).
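Computationally, the motif identification described above amounts to searching aligned protein sequences for a fixed-length window of maximal residue identity. A minimal Python sketch of this idea follows; the sequences and the window length below are illustrative placeholders, not the actual GalNAc-T1/-T2 sequences, and a real analysis would rely on proper alignment software such as the DNASIS package used in this study.

```python
def best_identity_window(seq_a: str, seq_b: str, window: int = 61):
    """Slide a fixed-size window over two equal-length aligned sequences
    and return (best_percent_identity, start_position)."""
    assert len(seq_a) == len(seq_b), "sequences must be aligned to equal length"
    best = (0.0, 0)
    for start in range(len(seq_a) - window + 1):
        matches = sum(a == b and a != "-"  # do not count gap-to-gap positions
                      for a, b in zip(seq_a[start:start + window],
                                      seq_b[start:start + window]))
        pct = 100.0 * matches / window
        if pct > best[0]:
            best = (pct, start)
    return best

# Toy example with hypothetical aligned fragments (not real sequences):
t1 = "MRRKSRMLLPVRALAKQWWLLNLV" * 4
t2 = "MRRKSRMLLPARALCKQWWLLNLV" * 4
print(best_identity_window(t1, t2, window=24))
```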
In the present study we have utilized this potential GalNAc-transferase motif to develop a PCR strategy that identified two novel cDNAs with sequence similarity. Here we report the cloning of a cDNA containing the complete coding sequence of one of these and show by expression that the gene encodes a GalNAc-transferase with an acceptor substrate specificity partly different from GalNAc-T1 and -T2. Northern analysis showed that the expression of GalNAc-T3 is highly tissue-restricted, in contrast to GalNAc-T1 and -T2. (The nucleotide sequence reported in this paper has been submitted to the GenBank/EBI Data Bank with accession number X92689.)

EXPERIMENTAL PROCEDURES

Identification of cDNA Homologous to GalNAc-T1 and -T2 by RT-PCR and Restriction Enzyme Analysis - Multiple sequence alignment analysis (DNASIS, Hitachi) of GalNAc-T1 and -T2 was applied to identify areas with the highest degree of sequence similarity. Based upon a 61-amino acid segment shared by GalNAc-T1 and -T2 as well as a more recently reported sequence derived from a homologous Caenorhabditis elegans gene (5), a pair of sense and antisense primers (EBHC100, 5′-TGGGGAGGAGARAACCTAGA-3′, and EBHC106, 5′-ATTCATCCATCCATACTTCT-3′, respectively) was used in RT-PCR amplifications of poly(A+) RNA from several sources (see Figs. 1 and 2). The mRNAs from human organs (liver, brain, and submaxillary gland) were obtained from Clontech, and mRNA from human cancer cell lines (MKN45, Colo205, and WI38) was prepared as reported previously (3). A restriction enzyme search identified a common BstNI site within the expected 196-bp RT-PCR product of GalNAc-T1 and -T2, which would allow the products of these two known genes to be eliminated by digestion. The 196-bp products from RT-PCR of MKN45 mRNA that were resistant to BstNI cleavage were isolated using the Prep-A-Gene kit (Bio-Rad) and cloned into the pT7T3U19 vector (Pharmacia Biotech Inc.). Plasmids from 40 individual clones were purified using a Qiagen-tip 20 column (Qiagen), and the clones were sequenced. Two sets of sequences differing from GalNAc-T1 and -T2 but exhibiting a high degree of similarity were identified, and sequence information from one set of identical clones, designated TE3, was used for the isolation of 5′ and 3′ sequences outside the GalNAc-transferase motif.

Cloning and Sequencing of GalNAc-T3 by Rapid cDNA Library Screening - Rapid library screening was performed by diluting 1 × 10⁶ pfu of a human salivary gland λgt11 library (Clontech) into 40 sublibraries (designated numbers 1-40), each possessing approximately 2.5 × 10⁴ pfu. All sublibraries were subjected to phage amplification (approximately 40-fold) by liquid culture phage amplification (19), giving a sublibrary titer of 1 × 10⁶ pfu. Phage amplification was performed in 1 ml of LB MgSO4 maltose medium in a shaking incubator at 37 °C for 5 h. After amplification, 20 µl of chloroform was added to each sublibrary, cellular debris was pelleted, and the phage supernatants were titrated and used in subsequent screenings. All 40 sublibraries were screened to identify TE3-possessing phage clones.
One µl of each sublibrary (approximately 10⁴-10⁵ pfu) was lysed in a 10-µl volume in the presence of 0.45% Nonidet P-40 and Tween 20 and 100 µg/µl proteinase K at 56 °C for 30 min. Proteinase K was heat-inactivated by boiling for 15 min, and 2 µl of phage lysate was amplified by PCR using primers EBHC100 and EBHC204 at 0.5 µM, using 40 cycles of 95 °C for 45 s, 55 °C for 5 s, and 72 °C for 30 s. Thirteen sublibraries found to contain TE3 λgt11 clones were further assayed by PCR using the EBHC202 (5′-GCGGATCCGCAGCAAAAGCCCTCATAGCTTT-3′) or EBHC204 (5′-GCGGATCCTCTAGCAATCACCTGAGTGCC-3′) primers combined with the λgt11 vector primers to estimate the lengths of cDNA inserts for selection of sublibraries with the most 3′ or 5′ sequence. Amplifications were performed for 35 cycles of 95 °C for 45 s, 55 °C for 1 s, and 72 °C for 2 min. Two sublibraries generated 3′ PCR products (EBHC204/λgt11 vector) of approximately 1000 bp, and two sublibraries generated TE3 5′ PCR products of approximately 1200 bp. PCR products were subcloned into pT7T3U19 and sequenced. These PCR products were used to probe and isolate cDNA clones from the corresponding sublibraries. Both strands of the subcloned cDNAs were sequenced (20) using internal primers spaced 300-400 bp apart. Partly overlapping sequence data from the cDNA clones were utilized to derive the complete coding sequence.

Expression of GalNAc-T3 in Sf9 Cells - A partial cDNA sequence of the putative GalNAc-T3 gene was produced as an RT-PCR product (pAcGP67-GalNAc-T3-sol) using primers EBHC219 (5′-AGCGGATCCTCAACGATGGAAAGGAACATG-3′) and EBHC215 (5′-AGCGGATCCAGGAACACTTAATCATTTTGGC-3′) with BamHI restriction sites introduced, and cloned (see Fig. 3). The PCR product was designed to yield a putative soluble form of the GalNAc-T3 protein with an NH2-terminal end positioned immediately COOH-terminal to the potential transmembrane domain and including the entire sequence expected to contain the catalytic domain. The PCR product was cloned into the BamHI site of the expression vector pAcGP67 (Pharmingen), and the expression construct was sequenced to verify the sequence and correct insertion into the cloning site. Control constructs included pAcGP67-GalNAc-T2-sol, prepared as described previously (3); pAcGP67-GalNAc-T1-sol, prepared similarly by RT-PCR with human submaxillary gland mRNA and designed to mimic the originally identified amino terminus of the soluble bovine GalNAc-transferase protein (1); and pAcGP67-O2-sol, containing the histo-blood group O2 cDNA and prepared as described previously for the blood group A cDNA (21). Co-transfection of Sf9 cells with pAcGP67 constructs and BaculoGold DNA was performed according to the manufacturer's description. Briefly, 0.5 µg of construct was mixed with 0.05 µg of BaculoGold DNA and co-transfected into Sf9 cells in 24-well plates. At 96 h post-transfection, recombinant virus was amplified in 6-well plates at dilutions of 1:10 and 1:50. The titer of amplified virus was estimated by titration in 24-well plates with monitoring of enzyme activities. Transferase assays were performed on supernatants of Sf9 cells in 6-well plates infected with virus at dilutions of 1:1000 to 1:5000, representing end-point dilutions giving optimal enzyme activities.

RESULTS

Identification of DNA Homologous to GalNAc-T1 and -T2 - A set of primers (EBHC100/EBHC106) corresponding to sequences flanking a putative GalNAc-transferase motif (Fig. 1) was used in RT-PCR reactions with mRNA from a variety of human organs and cell lines.
A single DNA fragment of approximately 196 bp, corresponding to that predicted for GalNAc-T1 and -T2, was amplified from all templates (Fig. 2). Hybridization with oligonucleotide probes specific for GalNAc-T1 and -T2 served as controls for the identities of the products observed. A restriction enzyme (BstNI) that selectively cut the products of both GalNAc-T1 and -T2 was used to detect potentially novel DNA from homologous genes. As seen in Fig. 2, RNA from several organs and cell lines yielded RT-PCR products that were not cleaved by BstNI, indicating the presence of a novel DNA fragment. The BstNI-uncleaved RT-PCR product from the gastric carcinoma cell line MKN45 was subcloned and sequenced. Forty independent clones were sequenced, and of these, eight clones contained sequences homologous to but different from GalNAc-T1 and -T2. Six independent clones had a novel sequence designated TE3, and two clones had a novel sequence designated TE4. The DNA sequence of TE3 was clearly similar to GalNAc-T1 and -T2, with a sequence similarity of approximately 80%. The deduced amino acid sequence containing the putative GalNAc-transferase motif is presented in Fig. 1.

Cloning of Human GalNAc-T3 Using the TE3 DNA Sequence - Cloning and sequencing of the complete coding sequence of GalNAc-T3 was achieved by PCR screening of 40 sublibraries from a human salivary gland λgt11 library, which yielded two sublibraries (number 8 and number 1) containing long 3′ and 5′ sequences outside the TE3 probe area. This strategy facilitated identification of cDNA clones with long 5′ and 3′ inserts and allowed us to compare multiple 5′ and 3′ sequences obtained within the isolated cDNA clones to identify and avoid intron-containing sequences. Two PCR products of 1000 bp from sublibrary 8 and two PCR products of 1200 bp from sublibrary 1 were selected, subcloned, and sequenced. The sequences of these PCR products exhibited similarity to the sequences of GalNAc-T1 and -T2. One cDNA clone from each sublibrary was isolated, and inserts were subcloned and sequenced. The sequences found in the PCR products were identical to the corresponding sequences in the selected cDNA clones. The 3′ cDNA clone 8.3′ possessed a 3-kb insert with a single 900-bp open reading frame followed by multiple stop codons and a consensus polyadenylation box (Fig. 3). The 5′ end of the insert of clone 8.3′ apparently contained an intron sequence, and this has been confirmed by sequence comparison of several RT-PCR and cDNA clones as well as a genomic clone. One 5′ cDNA clone, 1.5′, possessed a 1300-bp open reading frame but was not considered to contain the complete coding sequence, because it lacked a putative hydrophobic transmembrane region. A second screen using an antisense primer, EBHC211 (5′-ACCGGATCCAGTGTTTAGCTTCCCCACG), derived from the 5′ region of clone 1.5′, yielded another 5′ clone, 12.5′, which contained an additional 550 bp of 5′ sequence including a potential transmembrane region. As shown in Fig. 4, the composite sequence contains a 633-amino acid coding region, predicting a protein of 72.5 kDa with a type II domain structure.

Expression of GalNAc-T3 - Expression of the pAcGP67-GalNAc-T3-sol construct in Sf9 cells markedly increased the GalNAc-transferase activity in the culture medium (Table I). GalNAc-transferase activity with the Muc2 acceptor substrate peptide was increased 20-fold, and activity with the HIV-V3 peptide was increased nearly 100-fold. In contrast, expression of the GalNAc-T1 and -T2 constructs increased the GalNAc-transferase activity with the mucin-derived peptides but not with the HIV-V3 peptide.
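The BstNI counter-screen that yielded these clones can be pictured as a simple sequence filter: BstNI recognizes the degenerate site CCWGG (W = A or T), so any candidate product containing that site is presumed to derive from GalNAc-T1 or -T2, whereas digestion-resistant products may be novel. A minimal Python sketch of this selection logic follows; the candidate sequences are invented placeholders, not the actual 196-bp RT-PCR products.

```python
import re

# BstNI recognition site: CC(A/T)GG. Its reverse complement (CCTGG/CCAGG)
# matches the same pattern, so scanning one strand covers both strands.
BSTNI_SITE = re.compile(r"CC[AT]GG")

def bstni_resistant(candidates):
    """Return only the sequences lacking a BstNI site, i.e. those that
    would survive digestion and may represent novel transferases."""
    return [seq for seq in candidates if not BSTNI_SITE.search(seq)]

# Hypothetical 5'->3' fragments standing in for cloned 196-bp products:
products = {
    "clone_A": "ATGCCAGGTTACGGA",   # contains CCAGG -> cleaved (known gene)
    "clone_B": "ATGCCGGGTTACGGA",   # CCGGG is not a BstNI site -> resistant
    "clone_C": "TTCCTGGAAGCATGA",   # contains CCTGG -> cleaved (known gene)
}
print(bstni_resistant(products.values()))  # only clone_B's sequence remains
```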
Background levels of GalNAc-transferase activity in uninfected cell medium were higher than in control-infected cell medium, probably as a result of the production and release of endogenous Sf9 GalNAc-transferase due to the larger number of cells in uninfected cultures. Furthermore, background enzyme activity varied significantly among the different acceptor substrate peptides: the Muc2 peptide yielded the highest background, and the HIV-V3 peptide yielded the lowest activity. In an early attempt to express functional pAcGP67-GalNAc-T3, constructs were made that were truncated either at the 5′ end or the 3′ end (data not shown). Interestingly, constructs lacking the 14 COOH-terminal or 55 NH2-terminal amino acids were completely inactive, indicating that both the stem region and the COOH-terminal region are important for maintaining a catalytically active protein, a feature also found for the α3-galactosyltransferase (14).

Northern Blot Analysis of Human Organs - Northern blots with mRNA from 16 human adult and 5 fetal organs were probed with GalNAc-T1, -T2, and -T3 (Fig. 6). Similar to previous results using the multiple tissue Northern blot MTN I, GalNAc-T1 hybridized to two mRNAs of approximately 3.4 and 4.1 kb (1), whereas GalNAc-T2 hybridized to a 4.5-kb mRNA. Variable amounts of a smaller 2-3-kb mRNA were also detected with this probe (3). Hybridization of these probes to the multiple tissue Northern blots MTN II and fetal MTN resulted in slightly different estimated mRNA sizes for all GalNAc-Ts. This discrepancy is probably due to differences in the parameters of gel electrophoresis and the marker positions assigned by the supplier. GalNAc-T3 hybridized to a 3.6-kb mRNA (estimated from MTN I) highly expressed in pancreas and testis and weakly expressed in kidney, prostate, ovary, intestine, and colon. A very low level of GalNAc-T3 mRNA was also detected in adult placenta and lung as well as fetal lung and kidney. In adult spleen, GalNAc-T3 hybridized to a larger 4.2-kb mRNA (estimated from MTN II).

DISCUSSION

This study presents data on the cloning, sequencing, and expression of a third member of a growing family of polypeptide GalNAc-transferases. A putative GalNAc-transferase motif of 61 amino acids that is highly conserved in sequence among GalNAc-T1, GalNAc-T2, and a C. elegans homologue was used to search for potential additional members of the polypeptide GalNAc-transferase family. The screening strategy included an RT-PCR approach similar to that reported for the sialyltransferase family (12,23), followed by restriction enzyme analysis as a selection procedure. This method allowed us to eliminate or reduce the "background" from the two known GalNAc-transferases, GalNAc-T1 and GalNAc-T2, and clearly distinguish novel RT-PCR products of the same size as those for GalNAc-T1 and -T2. Two novel DNA fragments were identified and sequenced; the present study presents data on one of these. The novel cDNA was shown by expression in insect cells to have polypeptide GalNAc-transferase activity (Table I); it may therefore be classified as GalNAc-T3. The GalNAc-T3 gene encodes a protein with a predicted type II transmembrane domain structure similar to GalNAc-T1 and -T2 as well as all other glycosyltransferases characterized thus far (10). The GalNAc-T3 protein shows an overall amino acid sequence similarity of approximately 45% to either GalNAc-T1 or -T2, which is similar to the sequence similarity between GalNAc-T1 and -T2 (3). The lowest degree of sequence similarity is found in the amino-terminal region, including the transmembrane domain as well as the putative stem region. GalNAc-T3 is more than 50 amino acids longer than GalNAc-T1 or -T2 in the putative stem region.
More than 80% of the COOH-terminal sequence of GalNAc-T1, -T2, and -T3 can be aligned by sequence similarity, including the GalNAc-transferase motif, a number of minor segments of sequence similarity, and most of the cysteine residues (Fig. 5). Despite the relatively low overall amino acid sequence similarity between GalNAc-T1, -T2, and -T3, 12 cysteine residues that are evenly spaced within the major part of the proteins are conserved. These may be involved in intramolecular disulfide bonding, or they may be directly involved in the catalytic activity of the enzymes. The significance of conserved cysteine residues was originally noted by Drickamer (24) within the sialyltransferase family. The functional importance of cysteine residues involved in intramolecular disulfide bonding, and possibly in the catalytic site, of the β4-galactosyltransferase was recently demonstrated (25). The number of conserved cysteine residues in the polypeptide GalNAc-transferases far exceeds the number of cysteine residues reported in other glycosyltransferases to date (10). Interestingly, it appears that in vitro measurable GalNAc-transferase activity is increased by the presence of reducing agents (3,26). GalNAc-T3 was found to have a different acceptor substrate specificity than GalNAc-T1 and -T2. Among a panel of acceptor substrate peptides (Table I), GalNAc-T3 was found to glycosylate a peptide derived from the HIV envelope glycoprotein gp120, which did not serve as a substrate for GalNAc-T1 or -T2 (4, Table I). This peptide was identified as an acceptor substrate during analysis of enzyme activity in total extracts of various cell lines and organs (4,27). GalNAc-T3 also catalyzed glycosylation of mucin-type acceptor sequences such as Muc2 and Muc5, which can also be glycosylated by GalNAc-T1 and -T2. In a previous study we found that the enzyme activity that mediated glycosylation of the HIV peptide also utilized the Muc2 substrate, as shown by cross-competitive glycosylation (4). This finding is consistent with the substrate specificity reported here for GalNAc-T3, suggesting that GalNAc-T3 may represent this particular enzyme; however, additional enzymes may also show related specificities. Detailed analysis of individual GalNAc-transferases with a large panel of peptides, and structural confirmation of the specific acceptor sites utilized for GalNAc-glycosylation, will be necessary to fully understand the specificity of the individual members of the enzyme family. The first step of mucin-type O-glycosylation is mediated by at least three and probably more GalNAc-transferases. The data presented here clearly show that GalNAc-T3 exhibits a different acceptor substrate specificity than GalNAc-T1 and -T2 when assayed with short synthetic peptides with no or little predicted secondary structure. The finding that the three GalNAc-transferases share mucin-type acceptor substrates such as the Muc2 and Muc5 peptide sequences may indicate overlap in specificity, but further structural studies of the products formed, to identify the sites utilized by each enzyme on these peptides with multiple serine and threonine residues, are needed to clarify this. It is clear that in vivo models displaying differential expression of GalNAc-transferases are needed to evaluate the contribution of a given GalNAc-transferase.

(Table I, footnote a) One unit of enzyme is defined as the amount of enzyme that transfers 1 µmol of GalNAc in 1 min using the standard reaction mixture described under "Experimental Procedures" with 50 µg of peptide as acceptor substrate.
In contrast to GalNAc-T1 and -T2, expression of GalNAc-T3 appears to be highly regulated and is mainly found in pancreas and testis; weak expression is found in a few other organs including placenta. Interestingly, approximately 200 bp covering part of the GalNAc-transferase motif of GalNAc-T3 were recently sequenced from a pancreatic expressed sequence tag library (EMBL accession number T11328). The lack of GalNAc-T3 expression in human liver correlates with the finding that organ extracts from human liver lacked GalNAc-transferase activity utilizing the HIV peptide, whereas expression of GalNAc-T3 mRNA in human placenta is in agreement with GalNAc-transferase activity toward the HIV peptide in extracts of placenta (4). One interpretation of these data is that differential expression of different GalNAc-transferases can result in O-glycosylation of distinct sites on a given protein. The biological significance of this is unclear. There are a few studies on O-glycosylation sites, and these are limited to analysis of the functional activity or stability of a protein with or without a single O-glycosylation site (28), because assignments of O-glycosylation sites are difficult to perform (29). The results presented here suggest that cell-, organ-, and species-specific differences in the position of O-glycosylation may occur as a result of differential expression of polypeptide GalNAc-transferases. In searching for potential motifs of O-glycosylation by analyzing serine and threonine residues carrying O-glycans in glycoproteins, one may need to consider the GalNAc-transferase repertoire of the cell of origin (30,31). The existence of a transferase family of unknown size, possibly exceeding three members, displaying differential acceptor substrate specificity and cell/organ distribution suggests that mucin-type O-glycosylation is a much more defined and controlled process than previously recognized. In this respect, previous studies aimed at identifying consensus sequence motifs for O-glycosylation may have failed to identify such motifs because of the unknown level of complexity. The data reported here suggest that O-glycosylation, in terms of sites, is less random than previously suggested and that more defined acceptor substrate peptide sequences may be recognized by each of the individual GalNAc-transferases. With the individual GalNAc-transferases expressed as recombinant proteins, it may be possible to determine primary peptide sequence motifs for the individual enzymes, which could be useful for predicting O-glycosylation in vivo by a given cell type.
TRPV1 and TRPA1 Channels Are Both Involved Downstream of Histamine-Induced Itch

Two histamine receptor subtypes (HR), namely H1R and H4R, are involved as key components in the transmission of histamine-induced itch. Although the exact downstream signaling mechanisms are still elusive, transient receptor potential (TRP) ion channels play important roles in the sensation of histaminergic and non-histaminergic itch. The aim of this study was to investigate the involvement of TRPV1 and TRPA1 channels in the transmission of histaminergic itch. The potential of TRPV1 and TRPA1 inhibitors to modulate H1R- and H4R-induced signal transmission was tested in a scratching assay in mice in vivo as well as via Ca2+ imaging of murine sensory dorsal root ganglia (DRG) neurons in vitro. TRPV1 inhibition led to a reduction of H1R- and H4R-induced itch, whereas TRPA1 inhibition reduced H4R- but not H1R-induced itch. TRPV1 and TRPA1 inhibition resulted in a reduced Ca2+ influx into sensory neurons in vitro. In conclusion, these results indicate that both channels, TRPV1 and TRPA1, are involved in the transmission of histamine-induced pruritus.

Introduction

Histamine is one of the most intensively studied mediators of itch. Histamine acts via four G protein-coupled receptors. Two of the four known histamine receptors (histamine H1 receptor (H1R) and histamine H4 receptor (H4R)) are involved in the induction of histamine-induced pruritus [1,2]. Additionally, blockade of the histamine H3 receptor (H3R) seems to be involved in histamine-induced pruriception [2]. Although histamine has been known for almost 100 years to induce itch in humans, the exact signal transduction pathways are still not fully elucidated [3]. Key players in the signal transduction of itch are members of the transient receptor potential (TRP) family. Among the six subgroups of TRP channels in mammals, transient receptor potential vanilloid 1 (TRPV1), TRPV3, TRPV4, ankyrin 1 (TRPA1), and melastatin 8 (TRPM8) have been proposed to be involved in itch transduction [4]. Several groups have demonstrated that TRPV1 is important for the signal transduction of histamine-induced itch [2,5-9]. Histamine-induced pruritus is transmitted via specific mechano-insensitive C fibers [10]. Dorsal root ganglia (DRG), which contain the cell bodies of the sensory afferents, express all four histamine receptor subtypes [2,7,11]. The histamine-induced Ca2+ influx in DRG neurons is thought to be mediated via the H1R, H3R and H4R, and is related to capsaicin sensitivity [2,5-9]. Moreover, in mice treated with a TRPV1 blocker, as well as in mice lacking the TRPV1 channel (TRPV1−/−), histamine-induced scratching behavior is reduced [9]. Furthermore, histamine enhances the production of 12-hydroxyeicosatetraenoic acid (12-HETE), a 12-lipoxygenase metabolite of arachidonic acid and an endogenous TRPV1 activator [9,12]. These results strongly indicate that histamine requires the activation of TRPV1 to excite sensory neurons and to induce itch. However, histamine induces a small increase of intracellular Ca2+ in about 10% of neurons of TRPV1-deficient mice. Additionally, some neurons of wild type mice responding to histamine are not capsaicin-sensitive, and histamine-induced scratching behaviors in TRPV1−/− mice were decreased but not completely abolished, which suggests the involvement of other receptors in the itch response [2,4,9]. In addition, the different histamine receptors might use different downstream signaling pathways.
Apart from TRPV1, sensory neurons show a strong expression of TRPA1. Both play crucial roles in detecting pruritogens and nociceptive stimuli [4]. This study was performed to investigate the involvement of TRPV1 and TRPA1 in histamine-induced itch, and to detect potential differences in the signaling pathways of the different histamine receptors. The role of TRPV1 and TRPA1 in histamine-induced pruritus, as well as in the histamine-induced Ca2+ increase in DRG neurons, was analyzed after pharmacological blockade of both TRP channels. Additionally, histamine-induced itch was analyzed in vivo in TRPV1−/− and TRPA1−/− mice. To determine which mouse strain would be most suitable for our study, the scratching behavior following injection of an H4R agonist was initially compared in four different mouse strains.

Evaluation of Scratching Behavior

Scratching behavior was analyzed as an indicator of pruritus. Mice were acclimatized to their environment for 2 weeks before the experiments. Mice were randomly allocated into different treatment groups (n = 6 per group for H4R-induced scratching behavior in four different mouse strains, and n = 9 per group for the effects of TRP channels on histamine-induced itch). Group sizes were determined from a power analysis with the software G*Power 3.1.9.2. A co-worker blinded to the experimental protocol randomized animals into these groups. One day before each experiment, the rostral part of the neck was clipped with electric clippers. To measure strain-dependent differences in the response to H4R-induced itch, the H4R agonist 4-MH (500 nmol/L NaCl) was injected intradermally (i.d.) into the neck. Application of 50 µL sterile saline (0.9% NaCl) was used as vehicle control. The strain with the most pronounced scratching response was used for further experiments. HC-030031 (60 mg/kg), capsazepine (6 mg/kg) or SB366791 (0.5 mg/kg) was given intraperitoneally (i.p.; 200 µL) 45 min before injection of histamine (25 nmol/L), HTMT (100 nmol/L), 4-MH (50 nmol/L) or ST-1006 (50 nmol/L). For the experiments in knockout mice, histamine (800 nmol/L), 4-MH (500 nmol/L) or ST-1006 (100 nmol/L) was injected intradermally. After injection of the histamine receptor agonists, mice were placed in observation chambers and recorded on video for 30 min. Afterwards, scratching bouts were analyzed in a blinded manner. According to Kuraishi et al. (1995), a scratching bout was defined as a series of scratching movements by a hind paw in the area around the injection site until the paw was licked by the mouse or placed on the ground [15].

Isolation of DRG Neurons

Isolation of DRG neurons was performed according to Rossbach et al. (2011) [2]. To collect DRGs, mice were deeply anaesthetized with CO2 and then exsanguinated. Then, 15-20 DRGs were collected along the whole opened vertebral column. DRGs were enzymatically digested with dispase II (2.5 mg/mL; Stemcell Technologies, Vancouver, Canada) and collagenase from Clostridium histolyticum.

Ca2+ Imaging

Changes in the intracellular free Ca2+ concentration in single cells were measured by digital microscopy connected to equipment for ratiometric recording of single cells, as described previously [2]. The cultured neurons were loaded with 8 µmol/L fura-2 acetoxymethyl ester (Biotium, Fremont, CA, USA) in DMEM medium, protected from light, for 40 min at 37 °C.
The neuron-covered coverslip was inserted into the chamber (Warner Instruments, Hamden, CT, USA) of the imaging system and constantly perfused with 36 °C Locke's buffer containing (in mmol/L) 136 NaCl, 5.6 KCl, 1.2 MgCl2, 2.2 CaCl2, 1.2 NaH2PO4, 14.3 NaHCO3 and 10 D-glucose (pH 7.3-7.4). The cells were monitored on an inverted microscope (Nikon TE200, Nikon Instruments, Melville, NY, USA) by sequential excitation at 340 and 380 nm. Fluorescence intensities at both wavelengths were measured every 500 ms using a camera attached to the Lambda LS lamp and a Lambda optical filter changer. Images were obtained using PC-based software, and the fura-2 ratio (F340/F380) was calculated (NIS-Elements AR 5.02.01; Nikon Instruments, Melville, NY, USA). Regions of interest (ROIs) were defined around each neuron according to their typical neuronal morphology. DRG neurons from CD-1 mice were exposed to control solution (Locke's buffer) followed by 4-MH (0.1 mmol/L), ST-1006 (0.1 mmol/L), HTMT (0.1 mmol/L) or histamine (1 mmol/L). For testing the direct effects of the TRP inhibitors, ruthenium red, HC-030031 or SB366791 was administered 15-30 s prior to the stimulus at a concentration of 1 or 10 µmol/L. The histamine receptor antagonists diphenhydramine (H1R) and JNJ-7777120 (H4R) were applied 15-30 s prior to the stimulus at a concentration of 10 µmol/L to test the specificity of the histamine receptor agonists used. To determine functionally to which extent H4R- and histamine-positive cells reacted to the TRP channel agonists, DRG neurons of CD-1, C57BL/6 and BALB/c mice were exposed sequentially to 4-MH (0.1 mmol/L), histamine (1 mmol/L), AITC (1 µmol/L) and capsaicin (1 µmol/L). This last experimental setting aimed to gain more information about strain differences in the reaction profile of the neurons to these substances. At the end of each measurement, potassium chloride (KCl; 150 mmol/L) was applied to confirm the viability of the neurons. The cells were washed with fresh buffer for 2 min after each stimulus to allow recovery prior to the addition of the next stimulus. The F340/F380 ratio directly reflects the Ca2+ influx into the sensory neurons upon stimulation. All fura-2 measurements were normalized to the resting baseline F340/F380 ratio. If the ratio value increased by more than 10% of the resting level after stimulus application, the neuron was considered activated by the substance tested. Only the cells that reacted to KCl at the end of each measurement were included in the analysis.

Statistics

All figures for the in vivo data are presented as scatter-dot plots with median ± SD. Data of the in vivo experiments did not follow a normal distribution; thus significant differences were assessed with the nonparametric Mann-Whitney U test compared to the control group. Differences in scratching response over time were analyzed with a two-way ANOVA followed by Sidak's multiple comparisons test. Significant differences between calcium peaks induced by the test drugs with or without inhibitor pretreatment were analyzed by Fisher's exact test. A p value of less than 0.05 was regarded as statistically significant. For the statistical analysis, the program GraphPad Prism version 7 (GraphPad Software, Inc., San Diego, CA, USA) was used.
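The responder criterion described above (a neuron counts as activated if its baseline-normalized F340/F380 ratio rises by more than 10%, and enters the analysis only if it later responds to KCl) translates directly into a small classification routine. The following Python sketch illustrates that logic on a synthetic trace; the array names, threshold, and window indices are illustrative assumptions, not the authors' analysis code.

```python
import numpy as np

def is_responder(trace, baseline_idx, stim_idx, threshold=0.10):
    """Classify one fura-2 ratio trace (F340/F380 over time).
    A cell is 'activated' if the peak ratio in the stimulus window
    exceeds the resting baseline by more than `threshold` (10%)."""
    baseline = trace[baseline_idx].mean()
    peak = trace[stim_idx].max()
    return (peak - baseline) / baseline > threshold

# Synthetic example: 60 samples of a ratio trace for one neuron.
rng = np.random.default_rng(0)
trace = 1.0 + 0.01 * rng.standard_normal(60)
trace[25:35] += 0.25          # simulated response to the pruritogen
trace[50:58] += 0.60          # simulated KCl viability response

pruritogen_ok = is_responder(trace, slice(0, 20), slice(20, 40))
kcl_ok = is_responder(trace, slice(40, 48), slice(48, 60))
include_in_analysis = kcl_ok  # only KCl-responsive cells are analyzed
print(pruritogen_ok and include_in_analysis)  # True for this trace
```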
Results

We first examined which of the four mouse strains (BALB/c, C57BL/6, CD-1, NMRI) showed the most pronounced scratching response to an H4R agonist (4-MH). 4-MH at a concentration of 500 nmol/L did not elicit a robust scratching behavior in BALB/c or NMRI mice. A significantly increased number of scratching bouts compared to vehicle injection was observed in C57BL/6 and CD-1 mice (Figure 1). In line with Inagaki et al. (2001) [16] and Bell et al. (2004) [1], who identified CD-1 mice as reacting with the highest scratching response after intradermal injection of histamine, further experiments in this study were conducted in CD-1 mice. Second, we determined the dose response for the histamine receptor agonists in CD-1 mice. All agonists used in this study were tested in CD-1 mice at dosages from 5 to 100 nmol/L (histamine, 4-MH, ST-1006) or 50 to 500 nmol/L (HTMT) (Figure 2). Doses that elicited a robust scratching behavior in CD-1 mice were used for subsequent experiments.

[Figure caption: Results are presented as median ± SD. Two-way ANOVA: * p < 0.05 (factor treatment) = significantly different from vehicle (saline) injection; # p < 0.05 (factor time) = scratching response to pruritogen significantly different from the 10 min timepoint.]

Effect of TRP Channels on the Histamine-Induced Intracellular Ca2+ Increase

To determine whether the TRPV1 or TRPA1 channel is involved in H4R-induced neuronal excitation, cells were pre-incubated with the TRPV1 inhibitor SB366791 or the TRPA1 inhibitor HC-030031. Both inhibitors dose-dependently reduced the intracellular Ca2+ increase after stimulation with histamine, 4-MH and ST-1006. Furthermore, SB366791 also reduced the HTMT-induced intracellular Ca2+ increase. In addition, the TRP channel blocker ruthenium red concentration-dependently inhibited the intracellular Ca2+ increase after stimulation with histamine, HTMT, 4-MH and ST-1006 (Figure 6).

Discussion

Among the broad variety of pruritogens, histamine is one of the most comprehensively studied itch mediators. Histamine acts via four G protein-coupled receptors (H1R-H4R). In addition to the H1R, the H4R seems to be the most relevant histamine receptor in the transmission of histamine-induced itch [1,2,19]. Antagonists targeting the H4R are effective in reducing histamine- and allergen-induced itch in rodents and humans, and are thus discussed as new therapeutic options for the treatment of pruritic skin diseases [20-24]. However, the exact signal transduction mechanisms of histamine-induced itch, especially via the H4R, are still not fully understood. Previous studies stated that only the TRPV1 channel is involved in the signaling mechanisms of histamine-induced itch [4,8,9,25]. In this study, we demonstrated for the first time that, in addition to TRPV1, the TRPA1 channel also seems to be associated with histamine-induced itch transduction via the H4R. This is in contrast to the published consensus that the TRPA1 channel is not involved in histamine-induced itch [4,25]. Various studies demonstrated a reduced scratching behavior in TRPV1−/− mice in response to histaminergic and non-histaminergic pruritogens, whereas a reduced scratching response was seen in TRPA1−/− mice in response to non-histaminergic pruritogens compared to wild type mice [9,25,26]. Supporting the in vivo data of our study, which showed an involvement of the TRPA1 channel in H4R signaling, a lower number of DRG neurons obtained from CD-1 mice responded to histamine or the H4R agonists (4-MH and ST-1006) after pre-incubation with HC-030031. These findings are again in contrast to a study by Jian et al.
(2016) [6], in which HC-030031 could not block the Ca2+ influx in DRG neurons obtained from 4-week-old C57BL/6 mice. In that study, the Ca2+ influx was induced by the H4R agonist immepip [6]. However, immepip has a higher affinity for the H3R (pKi = 9.3) than for the H4R (pKi = 7.7), corresponding to roughly a 40-fold difference in binding affinity (10^(9.3-7.7) ≈ 40) [27]. Thus, an involvement of the H3R cannot be precluded in these data [2]. The utilization of different H4R agonists in this and other published studies, with their specific, possibly unknown off-target effects, together with the varying mouse strains, makes interstudy comparison of results difficult [28]. Indeed, no study could ever completely inhibit histamine-induced itch with TRPV1 inhibitors alone or in TRPV1−/− mice [6,9]. This implicates a possible involvement of other TRP channels in histamine signaling, especially the TRPA1 channel. TRPA1 is in fact co-expressed within a subpopulation of TRPV1-expressing sensory neurons (30%) [29]. Further functional in vitro studies showed subpopulations of histamine-sensitive trigeminal ganglion neurons and DRG neurons that were sensitive to the TRPA1 agonist AITC and/or the TRPV1 agonist capsaicin [30,31]. In trigeminal ganglion neurons, 70% of histamine-responding cells reacted to capsaicin and 39% to AITC [30]. Although not mentioned, an overlap between these TRPA1- or TRPV1-responsive subpopulations seems inevitable. In a study by Zhang (2015) [31], 41% of histamine-positive DRG neurons of neonatal C57BL/6 mice were sensitive to both AITC and capsaicin. Similar results were found in the present study: depending on the mouse strain, 22-77% of the neurons reacting to 4-MH also reacted to AITC and capsaicin (Figure 8). In line with this, the TRPA1/TRPV1 inhibitor ruthenium red significantly reduced the intracellular Ca2+ increase after application of histamine, 4-MH and ST-1006. In a pilot experiment in TRPV1−/−/TRPA1−/− mice (n = 3 mice), the H4R-induced scratching response was reduced nearly to baseline level (data not shown). A possible link between histamine and TRPA1 signaling might be thymic stromal lymphopoietin (TSLP), a known driver of allergic diseases. TSLP is released by keratinocytes after stimulation with histamine [32]. The interaction of keratinocytes and neurons in the onset and progression of itch has already been addressed [33]. TSLP directly activates sensory neurons and promotes itch via the TRPA1 channel on these cells [32]. TSLP release by keratinocytes is thought to be mediated via the H4R in both human and murine keratinocytes [34]. Briefly, these data implicate a possible link between TRPV1, TRPA1 and histamine receptors in histamine signal transduction. Interestingly, Ru et al. (2017) [35] presented data from a skin-nerve preparation of TRPV1−/−/TRPA1−/− mice that argue against an involvement of TRP channels in the onset of histamine- and chloroquine-induced itch. In that study, the response of itch-specific peripheral C-fibers of these knockout mice did not differ from that of wild type mice after pruritogen application. According to the authors, these data are not necessarily contradictory to an involvement of TRP channels in itch transmission [35]; their involvement might rather be associated with the inhibition of the inflammatory response and the production of pruritogens than with a direct regulation of action potential generation at nerve terminals [35]. In an IL-13-induced mouse model of atopic dermatitis, blocking TRPA1 led to a reduction of the scratching response [36].
As the H4R activates signaling pathways that induce cytokine and chemokine production, for example of IL-13 and RANTES (Regulated upon Activation, Normal T cell Expressed and Secreted) in mast cells, the involvement of TRPA1 channels in histamine-induced, and especially H4R-induced, itch cannot be excluded [37]. As a study limitation, it has to be considered that the results obtained in TRPA1−/− and TRPV1−/− mice were not completely congruent with the results obtained with pharmacological blockade of the TRPA1 or TRPV1 channels in CD-1 mice, and vice versa. These heterogeneous and partially contradictory results obtained in knockout mice compared to chemical inhibition of the TRP channels clearly need to be considered a limitation of this study. They emphasize the need for further investigation of the extent to which the genotype affects the sensation of pruritogens and their signal transmission. Although using two different TRPV1 inhibitors for the in vivo and the in vitro parts of this study can be considered a limitation, the results obtained in both setups are reasonably consistent. Off-target effects of all chemical compounds used, as well as molecular or cellular compensation mechanisms that may occur in knockout mice, are possible pitfalls. The theory of compensation mechanisms in connection with TRP channels, for example, has been reported by Petrus et al. (2007) [38]. TRPA1 is known to be involved in noxious mechano- and cold thermosensation. Nevertheless, in the study by Petrus et al. (2007) [38], TRPA1−/− mice showed normal hyperalgesia, whereas a specific TRPA1 antagonist could reduce cinnamaldehyde-induced nociception in vivo. A specificity study on H4R agonists showed that effects caused by 4-methylhistamine (reduced IL-12p70 secretion from monocytes) could not be completely abolished by the selective H4R antagonist JNJ7777120, while ST-1006-induced effects could be blocked completely [39]. Yet, considering that 4-MH is an H2R/H4R agonist, it should be noted that the involvement of the H2R in itch has, except for one study with n = 3 mice, not been investigated [1]. Thus, further investigation is needed to determine the biochemical reasons for the different results seen for 4-MH and ST-1006 in vivo and in vitro. Furthermore, some H4R ligands exhibit functional selectivity at the H4R by stabilizing multiple ligand-specific receptor conformations [40,41]. For example, although being an H4R antagonist, JNJ7777120 exhibited context-dependent stimulatory effects on the H4R [42]. As already mentioned, mouse strain differences complicate the interpretation of the results presented here. As not every animal species or strain is suitable for every experimental setup, picking the appropriate mouse (animal) model for an investigation is of great importance to generate significant and ideally translational data [43-45]. As Inagaki et al. (2001) [16] pointed out, major strain-specific differences exist in the scratching response to various pruritogens such as histamine and serotonin. They identified the inbred mouse strain C57BL/6 and the outbred strain CD-1 (ICR) as the strains most susceptible to local histamine application. CD-1 mice reacted twice as intensively to histamine injection (50 nmol/L, i.d.) as C57BL/6 mice. Additionally, Bell et al. (2004) [1] reported CD-1 mice reacting 30 times more sensitively to histamine than BALB/c mice. In our study, a comparable effect could be shown for H4R agonist (4-MH)-induced itch for the first time.
Both CD-1 and C57BL/6 mice showed a higher scratching response than NMRI and BALB/c mice. Until now, the underlying mechanism for this difference in the response to H4R stimulation has not been examined. We hypothesized that receptors and ion channels involved in itch induction and transduction might be expressed in different quantities or combinations in the various strains. Our finding that 3-4% of DRG neurons in the examined strains were 4-MH-sensitive is in line with already published data (3-10% sensitivity to H4R agonists; [2,6]). In both CD-1 and C57BL/6 DRG neurons, the majority of 4-MH-sensitive neurons were both AITC- and capsaicin-responsive, whereas the majority of 4-MH-sensitive BALB/c DRG neurons were only capsaicin-responsive. Interestingly, fewer neurons were histamine-sensitive in C57BL/6 mice (12%) compared to the other strains investigated (16-19%). Still, these results are consistent with values found in the literature (11-16%; [5,7]). Within this population, more cells in C57BL/6 mice reacted to AITC, or to both AITC and capsaicin stimulation, than in the other strains examined. Possibly this difference compensates for the smaller proportion of histamine-sensitive neurons. As these results only represent the functional properties of these cells, further studies should address the receptor repertoire of these cells, for example at the mRNA level. Based on the present data, there is no final explanation for the underlying mechanisms of the strain-specific differences in sensitivity to histamine or H4R agonists. Attention also needs to be given to other cells of the organism that may be involved in the onset of pruritus, e.g., keratinocytes or mast cells [37,46]. Itch transduction is a complex interaction of receptors, second messengers, and other effector molecules in a variety of cells (for review see Cevikbas and Lerner, 2020 [47]). A further aspect to be considered is the use of female mice in the chemical TRP inhibition experiments, whereas both sexes were used for the experiments with the TRP knockout mice. Depending on the mouse genotype, female mice have been shown to be more sensitive to itch stimuli than male mice [44,48]. Evaluation of sex-specific differences was not part of this study; however, due to the low number of animals used (n = 3 per sex) and the absence of statistically significant differences between male and female mice, we decided to pool both sexes. Generally speaking, it would be best practice to use both sexes in each experimental setup, which in turn might increase the number of animals used in these studies and challenge the aspiration to reduce the number of laboratory animals used in research according to the 3R principle of Russell and Burch [49]. Remarkably, in the in vitro part of this study, only 15-24% (Figure 8) of the total examined neurons reacted to the TRPV1 agonist capsaicin, compared to 34-78% in the literature [2,5,6,9,26]. A physical and functional interaction between the TRPV1 and TRPA1 channels is well characterized [50-53]. As already discussed, TRPA1- and TRPV1-specific evoked responses undergo functional cross-desensitization in vivo and in vitro [50,54]. Consequently, cells activated by the application of the TRPA1 agonist AITC will respond in a less pronounced way to a subsequent treatment with the TRPV1 agonist capsaicin, and vice versa.

Conclusions

In conclusion, this study presents in vivo and in vitro evidence that, in addition to TRPV1, the TRPA1 channel is also responsible for histamine-induced itch transmission in mice.
Furthermore, the downstream signaling pathways of the H1R and the H4R seem to be different. Further experiments need to be conducted to determine the crosstalk between TRP channels and histamine receptors, and the subsequent signaling cascade.
An architecture of an interactive multimodal urban mobility system

Throughout the world, and particularly in urban areas, population growth can be listed as a direct cause of the rising use of personal vehicles in cities. Such behavior may lead to dramatic consequences, not only economically but also socially and environmentally. To meet these challenges, and to promote the use of multiple means of public transport by citizens, public authorities and transport operators seek, within the framework of connected and intelligent city projects, to optimize the extraction as well as the exploitation of multimodal information by developing Interactive Systems of Assistance to Multimodal Movement (IAMM). However, finding the optimal multimodal path for a given person is far from a simple matter. Indeed, each potential user may have different or unique preferences regarding the cost and/or duration of his/her journey, the number of mode changes, and the desired comfort or safety levels. In the present study, we propose a multi-agent system which, based on the parameters entered by each user, proposes the optimal paths in the Pareto sense, combining different public transport modes, private cars and parking availability.

Introduction

In these modern times, road congestion can be considered one of cities' major problems. This "scourge" strikes almost every major metropolis in the world, affecting both the so-called developed countries and those considered still developing. In addition to its negative impact on citizens' accessibility in cities, traffic congestion has a negative impact on urban air pollution and on economic development. In order to cope with these kinds of problems, which are increasingly causing complications in various cities around the world, considerable research and technical and logistical means are being deployed to investigate potential solutions to reduce their harmful effects. The current trend is to encourage people to use the different modes of public transport in cities [1], taking advantage of advances in information and communication technologies [2]. This will reduce the toxic emissions of private vehicles and the duration of journeys in cities by minimizing delays due to congestion or to the search for car parks [3]. In this sense, several solutions focus on determining optimal multimodal paths to go from a starting point to a desired destination inside a city by using combinations of the various available means of transport. These algorithms make it possible to consider at least two possible transport modes and to determine the optimal combination for each trip between two points [4]. For example, the iterative algorithm proposed by Angelica and Giovanni [5] considers a public transport network, composed of metro and other modes, that takes into account the possibility of changing modes of transport as well as the associated constraints for solving the shortest multimodal path problem. However, during a user's journey, several parameters can change and have an impact on both the duration and cost of each travel arc, including the traffic situation on each roadway, the availability of each car park in question, and any unforeseen events (accidents, breakdowns, etc.) in the considered multimodal network [6]. To take account of these variations, the algorithm must have a general real-time view of the state of each considered mode of transport.
This will allow it to update multiple parameters, such as the estimated cost and duration of each arc, and subsequently to generate the optimal path for each user. An efficient multimodal traffic management system should therefore allow its users to define paths that best correspond to their requests while taking into account the real-time situation of the selected modes of transport. This system can also offer its users a set of preference criteria [7]. Once a user's destination and multimodal network preferences have been set, he receives a list of paths that is regularly updated along the journey in order to take into account the real-time changes in the parameters of the multimodal network.

Related works

To date, a variety of academic and commercial solutions have been proposed to improve urban mobility, firstly by exploiting more than one mode of transport (i.e. taxi, bus, tramway, metro) and secondly by optimizing the search for vacant spots in parking areas [8]. However, few solutions have been proposed to route users from their location to a desired destination point while also providing parking availability. The Mobility Recommender System proposed by Sergio and Silvia [9] can be considered one of the most effective solutions addressing the multimodal urban transport problem while taking into account constraints related to the availability of car parks. This system accompanies travelers in their decision-making process by generating ranked lists of possible multimodal door-to-door routes including parking spaces. The path parameters (cost, duration, etc.) are determined by exploiting heterogeneous data sources. Data concerning the different modes of public transport are considered static, while those describing parking availability are considered dynamic [9]. The proposed solution consequently cannot take into account the various disturbances that may occur either on the road network or on a public transport line. Concerning the algorithmic aspect of the problem, special attention has been given in the literature to two-criteria problems using exact methods such as branch and bound [10-12] and dynamic programming [13,14]. Other metaheuristic approaches have shown great robustness in finding very good multimodal paths in a static or dynamic environment [15]. Even if these resolution approaches are effective for small problem instances, there is no efficient exact procedure, given the simultaneous difficulties of NP-hard complexity and the multicriteria aspect of the problem. In contrast, several studies have been devoted to solving the problem separately by studying one of the aspects of multimodal urban transport management. In our work, we cite three main research topics on this theme:
- The query of the shortest path between two nodes in a road network. In the study of networks, one of the most frequently encountered problems in this research field is the search for the shortest path, especially in telecommunications and logistics [16]. Thus, while the Dijkstra algorithm is considered one of the oldest and most widespread algorithms for solving this problem, a comparative study between several shortest-path algorithms shows the efficiency of the A* algorithm [17]. However, in the case of multimodal transport, the studied networks are more complex since they contain different kinds of arcs, each referring to an urban transport mode [18].
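To make this multimodal structure concrete, the sketch below runs a Dijkstra-style search over a small graph whose arcs are labeled with a transport mode, adding a fixed penalty whenever the traveler changes modes. This is a simplified illustration under assumed arc weights and an assumed uniform transfer penalty, not the algorithm of any of the cited works; it also hints at why heuristic approaches are attractive on realistic networks, where arc weights change in real time.

```python
import heapq

def multimodal_shortest_path(graph, source, target, transfer_penalty=5.0):
    """Dijkstra over states (node, mode); changing mode costs extra.
    graph: {node: [(neighbor, mode, weight), ...]} with weights in minutes."""
    pq = [(0.0, source, None, [source])]   # (cost, node, current_mode, path)
    best = {}
    while pq:
        cost, node, mode, path = heapq.heappop(pq)
        if node == target:
            return cost, path
        if best.get((node, mode), float("inf")) <= cost:
            continue
        best[(node, mode)] = cost
        for nxt, nxt_mode, w in graph.get(node, []):
            extra = transfer_penalty if mode not in (None, nxt_mode) else 0.0
            heapq.heappush(pq, (cost + w + extra, nxt, nxt_mode, path + [nxt]))
    return float("inf"), []

# Hypothetical network: drive to a park-and-ride, then tram or walk.
graph = {
    "home":   [("P+R", "car", 12.0)],
    "P+R":    [("center", "tram", 8.0), ("center", "walk", 25.0)],
    "center": [],
}
print(multimodal_shortest_path(graph, "home", "center"))
# -> (25.0, ['home', 'P+R', 'center']): car (12) + transfer (5) + tram (8)
```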
To handle this complexity, a genetic algorithm is developed that allows the user to plan an itinerary and to update it during travel (depending on real-time data). This genetic algorithm is defined in Figure 1. To take into account every change in the arc parameters, the weight of each arc of the road network is updated in real time. To do so, an approach based on both the history of GPS sensors installed in cars and k-nearest-neighbor regression (a kernel method) makes it possible to predict the state of roads and traffic [19] and to deduce the desired paths. Another way to generate the shortest path in the case of urban mobility is to consider a set of cars equipped with GPS, so that each driver can compare the time it took to go from one point to another with the time taken by other drivers who used other paths, and deduce the optimal routes for subsequent trips [20]. Improve the probability of finding an available parking spot According to a study conducted at the University of California, Los Angeles in 2006, 30% of traffic at peak hours is due to the search for a parking space [5]. Parking Guidance and Information (PGI) systems are designed to help drivers improve the probability of finding a parking space and to find the cheapest possible option. In the case of a centralized system that manages a set of on-street and/or off-street parking facilities, a multi-agent system based on negotiations between drivers and car parks [21], and managing the competition between users who have chosen the same car park [22], can be developed to achieve the best possible allocation. The optimal allocation of a group of drivers to a set of car parks can be considered a deterministic problem, and the allocation of parking spaces can be solved by a linear program following a FIFO (first request served) logic [23] or by a heuristic (genetic algorithm) [24]. In this case, the following conditions should be verified: -The reservation of a place can be spontaneous. -The real-time availability of parking spots is exactly determined thanks to Internet-of-Things tools (sensors, image processing) [25]. However, even if the number of available places at a given time can be known using sensors and/or cameras, this data alone is not sufficient. To be closer to reality, the probabilistic parameters of the model should be considered: once a destination has been determined, the driver does not know which parking lot has the highest probability of having a free spot upon arrival. To determine this probability, it is necessary to gather in advance a few data items, such as the user's current position, the destination, and the current availability of car parks. Then, based on these elements and previous statistics, the system should be able to estimate the user's arrival time at the desired parking facility [26,27]; a sketch of such an estimate is given in code below. Traffic congestion management To manage congestion in real time and to allow users to follow the optimal path to their destinations using IoT tools, vehicles can communicate both with each other and with infrastructure (Road Side Units, Base Stations, etc.). Vehicles can thus form a large network in which each vehicle is considered a node. This kind of network is called a Vehicular Ad-hoc Network (VANet).
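Before turning to message dissemination in VANets, the parking-arrival estimate mentioned above can be made concrete. The sketch below combines an ETA with historical occupancy statistics to estimate the probability of finding a free spot at the predicted arrival time; the capacity, slot size and occupancy history are invented for illustration.

```python
import numpy as np

# Hypothetical historical occupancy: fraction of occupied spots per 15-minute
# slot (96 slots per day), one row per observed day. All numbers are made up.
history = np.clip(np.random.default_rng(0).normal(0.7, 0.15, size=(60, 96)), 0, 1)
CAPACITY = 120

def p_free_on_arrival(eta_minutes, now_slot, history, capacity):
    """Estimate P(at least one free spot) at the predicted arrival slot.

    Uses the empirical distribution of past occupancy in that slot: a past day
    counts as 'free' when fewer than `capacity` spots were occupied.
    """
    arrival_slot = (now_slot + eta_minutes // 15) % history.shape[1]
    occupied = np.rint(history[:, arrival_slot] * capacity)
    return float(np.mean(occupied < capacity))

# ETA of 20 minutes, current time slot 32 (08:00 if slot 0 is midnight).
print(p_free_on_arrival(20, 32, history, CAPACITY))
```

A production system would condition the estimate on live sensor counts and city events as well, but the slot-based empirical probability already captures the idea of ranking car parks by expected availability on arrival.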
To optimize the dissemination of messages concerning the traffic situation on each arc of the road network, several solutions have been proposed [28] to exploit information about the state of the road infrastructure. We summarize them in Table 1. 3 Proposed model Definition In our study, we first consider a multimodal network consisting of the following modes of transport: private transport (i.e. user cars), tramways and trains. Then, in order to switch from the private transport mode to another mode, users have to park their cars in the parking lot closest to their next transport station, which leads us to include car parking lots in our study. As for the parameters, they are chosen according to each user's priorities: minimizing the cost or duration of travel, preferring one or several transport modes, or the degree of flexibility in the number of mode changes the user authorizes. The process by which a user determines and chooses a multimodal path is described by the Business Process Model and Notation (BPMN) model presented in Figure 2. 1. A user sets a destination, ranks the parameters according to his or her priorities, and sends them to the multimodal transport management system. 2. The system determines the list of all possible paths leading to the user's destination and ranks them according to the proposed parameters. 3. The user validates the most appropriate path and sends it to the system. 4. The system registers the user's choice. 5. The system re-calculates the routes to determine the optimal path, taking into account the situation of each arc in real time. 6. Recalculation is triggered by one of two events: -the expiration of a system-determined refresh interval; -a new user request. 7. The newly generated paths are compared with the last path validated by the user (according to the priority parameters). If one of the generated paths is better, it is proposed to the user for validation. 8. Steps 3, 4 and 5 are repeated until the user reaches the destination or requests a process shutdown. System parameters In our model, we consider an urban multimodal transport network composed of a set of car parks in addition to the following means of transport: the private vehicle, the tramway, the train, and the possibility of walking. Each arc of the network is characterized by the time a user needs to traverse it and the associated cost (Tab. 2). Some of these data are gathered statically, such as public transport timetables, and others dynamically, such as the driving time on an arc. The objective functions of the model depend on the priority parameters of each user. For example, a person who wants to reach an urgent meeting in the city center will prefer the optimal path in terms of reliability and trip duration even if it costs more. A person who does not like driving in areas with high traffic-jam rates will opt for routes whose arcs are public transport.
Table 1. Solutions for disseminating traffic information in VANets.
Solution | Principle | Disadvantages
Broadcasting-based [29] | Broadcasting of messages. | Risk of congestion from the circulating messages.
Route discovery-based [30] | Learn about the state of traffic on an arc before accessing it. | Relatively large response time.
Position-based [31] | Based on the position of each vehicle and the intersections to be traversed to reach its final destination; these intersections are determined either from street maps or via V2X communication protocols. | Every pair of vehicles must maintain communication, which generates a large number of exchanged messages and can slow down the operation of the protocol.
Cluster-based [32] | Vehicles on the same arc with identical properties form a cluster; each cluster communicates with neighboring clusters and thus has a global view of the surrounding traffic. | Security issues related to identification, privacy, integrity, etc.
Infrastructure-based | Vehicles communicate with each other but also with infrastructure (e.g., RSUs). | The cost of infrastructure is generally high, and no party is easily convinced to invest in it.
The objective of this system is therefore to optimize a multi-objective function while respecting the constraints of both the multimodal network and the users. The system takes as inputs the real-time traffic status on each arc and the availability of each parking lot, and allows each user who has entered his or her preferences to obtain the list of paths ranked according to those choices. Model formulation Let G = (V, E, M, P) be the network graph under consideration, where V is the set of vertices of the multimodal network, E is the set of edges corresponding to all possible transportation arcs, M is the collection of transportation modes, and P is the set of parking facilities considered in the graph. Let P_a be a path from o to d (o, d ∈ V). Given a vertex j, the forward star and the backward star of j are denoted, respectively, by V⁺_j = {i ∈ V : (j, i) ∈ E} and V⁻_j = {i ∈ V : (i, j) ∈ E}. Symbol description: x_ijm is a binary variable indicating whether the arc (i, j) belonging to mode m is used. The first objective function finds the optimal paths in terms of duration, while the second objective function proposes the optimal paths in terms of monetary cost. The first constraint ensures that the obtained path goes from the origin o to the destination d. Constraint (2) ensures that the predecessor of a private-mode arc is a private-mode arc too (a toy instance of this bi-objective model is sketched in code below). Proposed approach Choosing a multimodal route can prove to be a much more complex task than choosing a simple car route. In fact, travelers can suffer from an overload of choices, since too many options must be evaluated. The availability of real-time information can help streamline this process by providing the necessary parameters for each multimodal arc. Moreover, in the literature on decision support systems for traffic and transportation optimization, we encountered solutions in which the proposed mobility suggestion contains information about the expected availability of parking and the disruptions that may occur on the transport network. The solution we propose in this work is composed of three subsystems, detailed in Figure 3 and communicating with each other: the User, the Data Generator and the Calculator. The user This entity represents the interface between the set of users and the two other components of the system: the "Data Generator" and the "Calculator". It is responsible for identifying users and collecting their current locations, destinations and preferences. These data are used by the Calculator first to determine the possible multimodal paths and second to rank the generated solutions according to the user's preferences. A USER entity is also a source of road traffic data for the "DATA GENERATOR".
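To make the bi-objective formulation above concrete, the toy sketch below enumerates the simple origin-destination paths of a small mode-tagged graph and keeps the Pareto-optimal set with respect to total duration and cost, as the Ranking Agent introduced later will do; every arc value is an illustrative assumption.

```python
# Toy arc set for the graph model G = (V, E, M, P): (i, j, mode, duration, cost).
ARCS = [
    ("o", "a", "car", 10, 3.0), ("a", "d", "car", 12, 4.0),
    ("o", "b", "tram", 14, 1.5), ("b", "d", "tram", 15, 1.5),
    ("a", "b", "walk", 6, 0.0),
]

def enumerate_paths(arcs, node, d, visited=None, path=None):
    """Depth-first enumeration of simple paths from `node` to destination d."""
    visited = visited or {node}
    path = path or []
    if node == d:
        yield path
        return
    for arc in (a for a in arcs if a[0] == node):      # forward star V+_node
        if arc[1] not in visited:
            yield from enumerate_paths(arcs, arc[1], d,
                                       visited | {arc[1]}, path + [arc])

def pareto_front(paths):
    """Keep the paths not dominated in both total duration and total cost."""
    scored = [(sum(a[3] for a in p), sum(a[4] for a in p), p) for p in paths]
    return [(t, c, [a[2] for a in p]) for t, c, p in scored
            if not any(t2 <= t and c2 <= c and (t2, c2) != (t, c)
                       for t2, c2, _ in scored)]

print(pareto_front(list(enumerate_paths(ARCS, "o", "d"))))
# -> [(22, 7.0, ['car', 'car']), (29, 3.0, ['tram', 'tram'])]
```

The fast all-car path and the cheap all-tram path survive the dominance filter, while the mixed car-walk-tram path is dominated and dropped; a weighted sum of the two criteria, by contrast, would return only a single compromise.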
While traveling in a private vehicle, a user indeed remains in communication with the DATA GENERATOR, which allows the Road Traffic Agent to update the parameters of each road arc. The communication process is detailed below (see The Road Traffic Agent). The data generator In this work, we propose a system that relies on heterogeneous information sources to gather the arc parameters needed to produce potential suggestions for a user request. This requires a software module designed to model, collect, classify and generate the information needed by the system, gathered mainly from the Train Agent, the Parking Agent, the Tramway Agent and the Road Traffic Agent and communicated to the CALCULATOR via the Interface Agent. The Interface Agent: based on the data generated by the other agents of the Data Generator subsystem, the Interface Agent manages these heterogeneous data to make them exploitable by the CALCULATOR. The Interface Agent is provided with a memory that, according to the possible paths generated by the Calculator, first allocates a dynamic sub-memory containing the concerned arcs, structured as an adjacency list, and second retrieves the numerical values of the arc parameters from the other DATA GENERATOR agents in real time. The design of this entity allows, for each request, the integration of data retrieved from heterogeneous systems through a layer named the Knowledge Layer, created following each user request [33]. The Parking Agent: the main roles of this entity are to: -keep a precise real-time view of the availability of the city's car parks; -predict the arrival time of each vehicle at each parking facility based on the road state and past history; -take into account the main events in the city [34]. The Train Agent and the Tramway Agent are designed to provide not only the timetable of the concerned train or tramway arc but also the expected location of the train or tramway when the user is due to use it. These predictions should be based on models that consider not only the normal transit frequency of each mode (train or tramway) but also any disturbance on a public transport line, such as accidents, breakdowns, or the occurrence of a special event in the city such as a football match [34]. The Road Traffic Agent is responsible for determining all the road traffic parameters required when a user requests the private mode. Using the static data of the road map and the usage history of each road arc, this agent can forecast the state of traffic and then list the parameters of each arc the user could travel. To take into consideration the troubles that may occur on the road network, in particular traffic jams, road accidents, road works, etc., the Road Traffic Agent asks users on a concerned road arc for real-time confirmations. For instance, based on the difference between the regular and the actual speed of a vehicle, this agent sends the user a list of possible causes of the deviation, which the user can confirm or deny. This kind of crowd-sourced road traffic reporting has recently been adopted by several technological solutions (Waze, for example) [35]. The calculator The CALCULATOR entity solves the multicriteria, multi-objective problem of the optimal urban multimodal path through a sequential three-step process. Calculations are executed by three agents according to the following algorithm.
The Generation Agent generates possible multimodal paths according to the destination given by the user. The resulting paths must meet logical and technical constraints; for example, once a user has left the private vehicle in a parking lot, the proposed solution should no longer contain a return to the private mode. The Evaluation Agent determines the different parameters of each path given by the Generation Agent based, on the one hand, on the data generated by the Data Generator and, on the other hand, on the parameters set by the user. The Ranking Agent ranks the obtained solutions based on both the user preferences and the parameters of the generated paths. The ranking process is based on the Pareto-ranking approach, since the parameters to optimize in our study (time, cost, comfort level, etc.) conflict with each other. The population (set) of generated solutions is then ranked according to a dominance rule. The three previous agents operate according to Algorithm 1. Simulation parameters In our study, we consider three multimodal transport networks composed of 500, 1,000 and 15,000 nodes (the example of Casablanca in Morocco), respectively. We consider two objective functions to optimize: l1, the cost, and l2, the duration. We consider 3 levels of preference/weight (1, 2 or 3) for each of the two criteria l1 and l2; according to the user's preferences, the objective functions are weighted and ranked. We consider two resolution approaches for responding to user queries. The first approach determines the optimal paths in three steps: 1. Determination of the optimal paths according to each of the two criteria, based on the dynamic approach proposed in [36]. 2. Calculation of the overall objective function, defined as a weighted sum of the two objective functions, F = w1·l1 + w2·l2, where the weights w1 and w2 reflect the user's preference levels. 3. Ranking of the objective-function values obtained. The second approach, on which our work is based, can be defined in three steps: 1. Determination of the possible multimodal paths between origin and destination by direct generation. 2. Evaluation of these paths according to the user's preferences. 3. Ranking of the evaluated paths and proposal of the best-suited one. Results and discussion The simulation results in Figure 4 show that, for networks of more than 1,000 nodes, the second approach becomes much more efficient. In these simulations, we considered two parameters for which Pareto convergence is ensured by the second approach. Similarly, for the sake of simplicity, we considered a simple weighting of the two objective functions (cost and duration). The proposed approach also makes it possible to save all the possible multimodal paths between two nodes so that they can be reused directly without repeating the same calculations, which will allow a fairly large database to be built up. This represents a great advantage, since our perspective is to integrate artificial intelligence to solve the multimodal urban network problem. Conclusions The problem of multimodal travel assistance is a topical issue of increasing concern for transport companies, as it has a direct impact on the quality of service offered to users. However, existing transport information systems are generally mono-modal: they usually do not consider the private vehicle as a mode of transport, do not offer a lookup of available parking lots, and do not accompany the user during travel to account for the disturbances that can occur on the transport network.
The proposed system generates for users the information needed for their journeys so that they can validate the most appropriate path to their destinations. The system accompanies users along their journeys to warn them of any disturbance that may happen on a public transport line (breakdown, power failure, etc.) or on the road traffic network (accident, traffic jam, etc.). These data make it possible to update the parameters of the paths and subsequently to propose new paths better suited to the user's request. In our approach, in order to retrieve data from existing systems that may be heterogeneous, we propose a distributed system that defines the relationships that can take place between these systems. Similarly, to handle the interactions and interoperability of current systems, we use a multi-agent system. We therefore explain the particularities of the system, the agents involved, and the interactions, communications and coordination between these agents. The proposed approach not only considers the different modes of public transport of a city, but also integrates the possibility of using the private vehicle partially or totally and then parking it in one of the city's car parks, something that has not been integrated in most studies and proposed systems. The simulation results show the value of adopting the second approach, which consists of generating the possible paths between origin and destination points, then evaluating them according to the preferences of the user before proposing the one best suited to the user's request. This approach is beneficial not only in terms of execution time, but will also make it easier to include artificial intelligence in the work that will follow. As a perspective, we propose a comparative study of artificial-intelligence-based algorithms for solving the multi-objective, multi-criteria problem of the optimal multimodal path according to user preferences. Similarly, we propose a state of the art on data management techniques, in order to be able to deduce optimal multimodal paths from previous experience without having to redo the calculations each time.
AliasNet: Alias Artefact Suppression Network for Accelerated Phase-Encode MRI Sparse reconstruction is an important aspect of MRI, helping to reduce acquisition time and improve spatial-temporal resolution. Popular methods are based mostly on compressed sensing (CS), which relies on the random sampling of k-space to produce incoherent (noise-like) artefacts. Due to hardware constraints, 1D Cartesian phase-encode under-sampling schemes are popular for 2D CS-MRI. However, 1D under-sampling limits 2D incoherence between measurements, yielding structured aliasing artefacts (ghosts) that may be difficult to remove assuming a 2D sparsity model. Reconstruction algorithms typically deploy direction-insensitive 2D regularisation for these direction-associated artefacts. Recognising that phase-encode artefacts can be separated into contiguous 1D signals, we develop two decoupling techniques that enable explicit 1D regularisation and leverage the excellent 1D incoherence characteristics. We also derive a combined 1D + 2D reconstruction technique that takes advantage of spatial relationships within the image. Experiments conducted on retrospectively under-sampled brain and knee data demonstrate that combination of the proposed 1D AliasNet modules with existing 2D deep learned (DL) recovery techniques leads to an improvement in image quality. We also find AliasNet enables a superior scaling of performance compared to increasing the size of the original 2D network layers. AliasNet therefore improves the regularisation of aliasing artefacts arising from phase-encode under-sampling, by tailoring the network architecture to account for their expected appearance. The proposed 1D + 2D approach is compatible with any existing 2D DL recovery technique deployed for this application. INTRODUCTION The theory of Compressed Sensing (CS) [1,2] is integral to sparse image reconstruction and has seen application in many areas of signal processing [3]. It is an especially important technology for magnetic resonance imaging (MRI), which is an intrinsically lengthy process subject to physical and physiological constraints. Such constraints necessitate the sequential acquisition of k-space (discrete Fourier space) through continuous trajectories [4,5]. Scan times are subsequently influenced by available sampling and reconstruction methods [6,7], where implementation of CS can improve spatial-temporal resolution and increase scanner availability. From a signal processing perspective, CS performs optimally when three conditions are met: incoherent under-sampling, transform sparsity and nonlinear optimisation. Over the past decade, there has been active development of both suitably incoherent sampling schemes for MRI [8] and the algorithms deployed in reconstruction [9][10][11].
Incoherence of a sampling matrix is often synonymous with orthogonal measurement and adhering to the well-known restricted isometry property (RIP) [12,13]. The objective is to produce noise-like image artefacts that can be easily distinguished from image features. Ideal measurement is therefore non-deterministic or an approximation thereof, as random sampling has been shown to produce incoherence with high probability. Unfortunately for MR applications, pure Cartesian random sampling is impractical to implement as a two-dimensional (2D) acquisition sequence [9]. Instead, practical 2D Cartesian CS fully samples in the frequency-encode direction and under-samples in the phase-encode direction [4,5]. Incoherence between these measurements is thereby one-dimensional (1D), which yields structured aliasing artefacts (ghosts) along the under-sampled axis. Strategies have been developed to mitigate this impact on image quality, such as varying the density of measurement [5,14]. This is known as multi-level sampling, where acquisition is asymptotically coherent with the regularising function, ensuring that high energy regions of k-space are well captured [15,16]. For reconstruction algorithms, another approach suggests first recovering an image from the under-sampled 1D columns of k-space [17], fully exploiting randomness in the available direction. Alternatively, k-space interpolation networks [47,48] or direct k-space to image transformation [49,50] have also been proposed. As CS-MRI artefacts degrade an image in non-local and potentially unrecoverable ways, recent contributions have instead proposed a dual-domain reconstruction approach. These networks execute operations in k-space alongside image denoising layers, enabling recovery of image features that may be otherwise lost from single-domain techniques. Some explore k-space recovery in the denoising sense [51][52][53][54][55], and others perform k-space interpolation via k-space redundancies [56,57]. Although dual-domain networks excel at recovering images from random 2D Cartesian and projection-based under-sampling patterns, they typically deploy convolutional neural network (CNN) operations directly in k-space. Reconstruction is thereby limited to interpolation from known values, with large distances between sampled points necessitating elaborate network architectures and high computation costs to overcome. Additionally, regularisation is still often limited to an idealised 2D transformation, which may not be suitable for certain image artefacts. For example, Iterative Shrinkage-Thresholding Algorithm Network (ISTA-Net) [39] and Deep cascade of Convolutional Neural Networks (DcCNN) [37] assume that the image can be represented as sparse in the 2D transform (ISTA-Net) or 2D denoising (DcCNN) sense.
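The contrast between the two sampling regimes is easy to reproduce numerically. The NumPy sketch below builds a 2D random Cartesian mask and a Gaussian-density 1D phase-encode mask and compares their point spread functions via the inverse DFT; the mask parameters (size, reduction factor, density width) are illustrative assumptions, not those used in the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
N, R = 256, 4  # image size and reduction (acceleration) factor

# 2D random Cartesian mask: every k-space point sampled independently.
mask2d = rng.random((N, N)) < 1.0 / R

# 1D phase-encode mask: whole rows of k-space kept, drawn with a Gaussian
# density so the centre (low frequencies) is sampled most heavily.
density = np.exp(-0.5 * ((np.arange(N) - N / 2) / (0.15 * N)) ** 2)
rows = rng.random(N) < density * (N / R) / density.sum()
mask1d = np.zeros((N, N), dtype=bool)
mask1d[rows, :] = True

def psf(mask):
    """Point spread function: magnitude of the inverse DFT of the mask."""
    return np.abs(np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(mask))))

# The 2D mask's PSF is a delta plus diffuse, noise-like sidelobes; the 1D
# mask's PSF concentrates its energy in one column, so under-sampling
# produces coherent, structured aliasing ghosts along that axis.
for m, name in [(mask2d, "2D"), (mask1d, "1D")]:
    p = psf(m)
    print(name, "peak-to-sidelobe:", p.max() / np.partition(p.ravel(), -2)[-2])
```

Convolving either PSF with a test image reproduces the incoherent speckle versus structured ghost behaviour described above.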
In this paper, we develop a CS methodology tailored to the recovery of 1D Cartesian random under-sampled MRI. Our method employs a series of 1D CS operations that efficiently populate missing k-space to improve the image estimate (by alleviating aliasing artefacts), before 2D regularisation. This regularisation is executed by 1D CNN modules, which are then combined with existing 2D DL reconstruction techniques. Other notable 1D DL formulations have been proposed, such as domain-transform manifold learning in the phase-encoding direction [50] and the One-dimensional Deep Low-rank and Sparse Network [57]. To the best of our knowledge, however, this is the first 1D-only network designed to supplement existing 2D reconstruction techniques. Our contributions are summarised as follows:
• A general 1D CS framework for MRI that decouples under-sampled k-space into a series of 1D signals in two separate domains. Asymptotic incoherence between sampling and regularisation is improved by explicitly recovering missing data in the direction of under-sampling.
• Development of an efficient proximal mapping for DL non-linear regularisers in our proposed algorithm. This constitutes the proposed 1D CNN regularisation modules.
• The combination of our 1D modules with existing 2D DL recovery methods efficiently achieves image quality superior to simply increasing the size of the original 2D network.
• Comparisons with state-of-the-art dual-domain reconstruction techniques highlight that our CS-based recovery of missing k-space is better suited to phase-encode under-sampling MRI.
The proposed method is an Alias artefact suppression Network (AliasNet) that enhances the regularisation of 1D under-sampled MRI. An overview of the approach is visualised in Figure 1. Relevant theory and the proposed method are described in Section 2. Experiments with a purely 1D reconstruction, as well as integrating the model into the well-known networks DcCNN [37] and ISTA-Net [39], are investigated in Section 3. Finally, Section 4 contains a discussion of these experiments.
COMPRESSED SENSING One can model CS for MRI by considering an object MR image x₀ and the subset of associated k-space measurements y, such that y = MFx₀ + ε. Here F represents the 2D discrete Fourier transform (DFT), M is the under-sampling matrix and ε is complex Gaussian noise. As the inversion x₀ = (MF)⁻¹[y − ε] is ill-posed, an image estimate x̂ can be recovered according to x̂ = arg min_x ‖MFx − y‖₂² + λh(x), where h(·) is a regularisation function. Commonly, h(·) is chosen such that h(x) = ‖Ψ(x)‖₁, which enforces sparsity in some transform Ψ(·); λ is a weighting factor. As asymptotic incoherence between sampling and sparsifying functions is important for the successful implementation of CS [15,16], it can be beneficial to consider the limitations of the available sampling schemes when selecting the regularisation employed. In this work, we explore the addition of explicit DL 1D regularisation for phase-encode under-sampled MRI. Figure 2 illustrates the artefacts associated with 2D random Cartesian (a) and 1D random phase-encode (b) under-sampling schemes; the central images correspond to each sampling scheme's point spread function (PSF), and convolution between the PSF and the ground-truth image yields the under-sampled image. Visualisation of a sampling mask's PSF thereby illustrates how pixels interfere during under-sampling. While the 2D strategy produces unstructured and noise-like image artefacts, it is not a practical sampling pattern for 2D acquisition schemes. On the other hand, we see 1D artefacts arise as aliasing "ghosts" of surrounding image features. This stems from their respective PSFs, in which the 2D strategy resembles the 2D Dirac delta function. Conversely, all non-zero coefficients of the 1D under-sampling are large in amplitude and located in the central column. Importantly, the 1D PSF resembles the 1D Dirac delta function. As adjoining image columns are similar in appearance, the resulting 1D ghosts are also similarly structured. It is therefore difficult for reconstruction algorithms to identify and remove these structures using non-directional 2D operations. As such, we suggest image regularisation be performed in the direction of under-sampling via a 1D CNN approach, leveraging incoherence in the available direction. DECOUPLING 2D K-SPACE INTO 1D SIGNALS To leverage the excellent 1D incoherence characteristics of phase-encode under-sampling, we must decouple the image x and its k-space into contiguous 1D signals. We identify two possible formulations of the problem by considering the equality F = Φ_F Φ_P = Φ_P Φ_F, where Φ_F and Φ_P are the 1D DFTs in the frequency- and phase-encode direction, respectively. We may then relate the columns of the image and of k-space with the two equations (Φ_P x)_c = Φ x_c (3) and y_c = Φ(Φ_F x)_c (4), where (·)_c denotes the extracted c-th column and Φ is the 1D DFT along a column. Figures 1c, 1d demonstrate this relationship. Phase-Encode Alias Artefact Suppression (Image Domain) Recognising that artefacts arising from phase-encode under-sampling can be characterised as 1D perturbations of Eq. (3), Yang et al. [17] provide an "upgraded" image estimate before 2D CS. Their proposed optimisation problem can be expressed as x̂_c = arg min_{x_c} ‖M₁Φx_c − (Φ_P x)_c‖₂² + β‖x_c‖_TV (5), where M₁ is the 1D under-sampling pattern and β is a weighting factor. As visualised in Figure 1d, the 1D columns x_c are related to the intermediate Fourier measurements (Φ_P x)_c via the 1D DFT. Therefore, recovery of missing k-space is performed by enforcing a total-variation (TV) constraint on the columns x_c, whilst ensuring consistency with the related intermediate Fourier measurements (Φ_P x)_c. Phase-Encode Alias Artefact Suppression (Intermediate Fourier Domain) As image-domain artefacts are often non-local and structured, dual-domain techniques perform convolutional operations directly in k-space [51][52][53][54][55]. However, large distances between sampled points may limit the utility of this approach. Therefore, alongside 1D image-domain artefacts, we also consider artefacts arising in the intermediate Fourier domain described by Eq. (4).
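As a quick check of this decoupling, the following NumPy sketch verifies that the 2D DFT factorises into two 1D DFTs and that each column of the intermediate domain is the 1D DFT of the corresponding image column; the assignment of the frequency-encode direction to array axis 0 is an assumption made for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal((128, 128)) + 1j * rng.standard_normal((128, 128))

# 2D DFT as a composition of two 1D DFTs (one per array axis).
F2 = np.fft.fft2(x)
phi_f = np.fft.fft(x, axis=0)        # intermediate domain: 1D DFT down columns
F_sep = np.fft.fft(phi_f, axis=1)    # then 1D DFT along rows
assert np.allclose(F2, F_sep)

# Decoupling: column c of the intermediate domain is the 1D DFT of image
# column c, so each column can be treated as an independent 1D CS problem.
c = 17
assert np.allclose(phi_f[:, c], np.fft.fft(x[:, c]))
print("2D k-space decouples into independent 1D column transforms")
```

This is the property that lets the 1D modules operate column-by-column without interfering with one another in their own domain.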
Our proposed optimisation problem is therefore (Φ_F x̂)_c = arg min_z ‖M₁Φz − y_c‖₂² + λ_F h₁(z|Θ_F) (6). In this instance, k-space values will be populated by enforcing the learned constraint h₁(·|Θ_F) on the aliased 1D signals of the intermediate domain (Φ_F x)_c (Figure 1c). For simplicity, we define operators (denoted here S_I and S_F) that allow Eqs. (5), (6) to be written concisely for each method as S_I(x_c, (Φ_P x)_c | Θ_I) and S_F((Φ_F x)_c, y_c | Θ_F), respectively, in Eq. (7); Θ_I and Θ_F are the sets of CNN parameters for the image and intermediate Fourier domains. IMPLEMENTATION To efficiently suppress 1D artefacts, we propose to solve Eq. (7) via the proximal-gradient (PG) descent technique described by Algorithm 1. In this approach, the fully sampled 1D signal s₀ is recovered from its measurements by enforcing learned constraints h₁(·|Θ) that are suitably incoherent with respect to the sampling in the image or intermediate Fourier domain. If ι_C(·) is an indicator function of the set C, where C is the set of noiseless MRI columns for which a family of denoisers D(·) exists, then Eq. (10) is a special case of the proximal mapping with D = prox_{h₁}, and Eq. (11) therefore follows. This assumes that the zero-filled estimate s = Φ⁻¹y_c can be modelled as s = s₀ + η, where η is residual under-sampling noise. Because the 1D under-sampling is incoherent with respect to s₀ (see Figure 2b), it is expected that η can be modelled as signal-independent and noise-like [17,57]. We therefore propose to apply h₁(·|Θ) via D(·|Θ), which allows for a data-driven mapping between the noisy measurements and the fully sampled 1D signal s₀. Eq. (10) can then be written as Eq. (12), where Θ_k are the CNN parameters at iteration k. Figure 3a illustrates our chosen architecture for D(·|Θ), which features a skip connection for residual learning. As discussed in [39,48], residuals of natural images are highly compressible, consisting mainly of high-frequency components. Combination with 2D Techniques Given that structured and non-local artefacts are typical of phase-encode under-sampling (see Figure 2b), we propose a combined regularisation that directly penalises 1D perturbations via AliasNet; 2D transforms then leverage spatial relationships to further enhance image quality. Additionally, for each N × N image there will be N 1D training samples available per domain, ensuring the additional parameters incurred by the 1D modules are well trained. We demonstrate superior scaling with the total number of parameters compared to simply increasing the size of the original 2D network. We also compare against the Dual-Domain Recurrent Network (DuDoRNet) [53] in various tests. Despite being more computationally efficient, our combined 1D + 2D networks consistently outperform the alternatives. In the general case, we define our combined 1D and 2D optimisation problem as in Eq. (16), where S_F and S_I are defined in Eqs. (14), (15) and penalise 1D perturbations, and Ψ_2D(x) is the 2D regularisation. The combined optimisation is solved via an alternating minimisation (AM) procedure, which is split into three steps. Step 1: populate the missing k-space by denoising in the intermediate domain (Φ_F x)_c. Step 2: using this upgraded estimate of k-space, denoise the image columns x_c. Steps 1 and 2 can be solved with the PG technique described earlier in this section and illustrated in Figures 3a, 3b. These steps are defined as solving S_F and S_I, and their iteration counts are K_F and K_I, respectively. Weights can be shared between successive S_F or S_I iterations, but not between S_F and S_I themselves; this ensures AliasNet learns the appropriate intermediate Fourier or image-domain denoisers. Figure 3d visualises the reconstruction performed by S_F and S_I, and how they are cascaded into one another. Step 3: the 2D regularisation can be any existing DL model, obtained by setting Ψ_2D(·) to the appropriate regulariser: for instance, regularisation in the denoising sense as per DcCNN [37] (and MoDL [40]), or regularisation by transform sparsity as per ISTA-Net [39]. In fact, the 2D regularisation can even be that defined by dual-domain algorithms such as DuDoRNet [53].
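The PG loop of Algorithm 1 can be made concrete with a plug-and-play stand-in for the learned denoiser. The sketch below recovers a single under-sampled column by alternating a soft-threshold denoising step with hard data consistency on the acquired samples; the thresholding rule, iteration count and test signal are illustrative assumptions rather than the paper's trained CNN.

```python
import numpy as np

def soft_threshold(z, t):
    """Stand-in prox/denoiser; the paper learns this mapping with a 1D CNN."""
    mag = np.abs(z)
    return np.where(mag > t, z * (1.0 - t / np.maximum(mag, 1e-12)), 0.0)

def recover_column(y_c, mask, n_iter=50, thresh=0.05):
    """Alternate a denoising (prox) step with hard data consistency on the
    acquired phase encodes, in the spirit of the PG scheme of Algorithm 1."""
    x = np.fft.ifft(y_c)                     # zero-filled initial estimate
    for _ in range(n_iter):
        x = soft_threshold(x, thresh)        # denoising / proximal step
        k = np.fft.fft(x)
        k[mask] = y_c[mask]                  # keep the measured k-space samples
        x = np.fft.ifft(k)
    return x

# Tiny demo on a piecewise-constant column (values illustrative only).
rng = np.random.default_rng(2)
x0 = np.zeros(128); x0[40:80] = 1.0
mask = rng.random(128) < 0.4                 # 1D random under-sampling pattern
y = np.where(mask, np.fft.fft(x0), 0)
x_hat = recover_column(y, mask)
print("zero-filled error:", round(np.linalg.norm(np.fft.ifft(y) - x0), 3))
print("recovered error:  ", round(np.linalg.norm(x_hat - x0), 3))
```

Swapping the soft threshold for a trained network turns this loop into the data-driven mapping D(·|Θ) described above, with one data-consistency FFT pair per iteration.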
Figure 3c illustrates our 1D regularisation combined with a general 2D regularisation for reference. Here, T is the total number of 2D iterations used to solve the AM algorithm described in this section. Loss Function While the residuals between the image domain, the intermediate k-space domains and k-space are linearly related, we found that gradient propagation to the 1D S_F and S_I layers is improved by directly supervising the intermediate domains they operate on. As these operations occur on decoupled representations of the image, it suggests the following: • S_I: the forward pass of a single image column through S_I affects only itself and its correlated intermediate Fourier column (Φ_P x)_c, whilst affecting all columns of the intermediate Fourier domain Φ_F x and of k-space y. • S_F: the forward pass of a single intermediate Fourier column (Φ_F x)_c through S_F affects all columns of the image and of the intermediate domain Φ_P x, whilst only itself and the correlated columns of k-space are changed. We therefore propose a loss function that resembles the combination of losses required to train S_F and S_I independently of the entire image: L = ‖x₀ − x̂‖₁ + λ_k‖y₀ − ŷ‖₁ + λ_F‖Φ_F x₀ − Φ_F x̂‖₁ + λ_P‖Φ_P x₀ − Φ_P x̂‖₁ (18). Here, x₀ and y₀ are the target image and k-space measurements, x̂ and ŷ are the estimated image and k-space, and λ_k, λ_F, λ_P are the k-space, intermediate Φ_F and Φ_P domain loss ratios. We found that the approach was relatively insensitive to hyperparameter choice, with the image-domain loss generally providing sufficient supervision. Therefore, we set λ_k, λ_F and λ_P to 0.1, 0.3 and 0.3, respectively, to focus the training on image features and to equally weight the recovery of intermediate k-space in the aliased phase-encoding direction (Φ_P) and the zero-filled frequency-encoding direction (Φ_F). Compared to mean absolute error (MAE) alone, the convergence characteristics at later epochs are slightly improved, with an accompanying improvement to image quality for our AliasNet models (approximately 0.1 dB on average). These findings indicate that Eq. (18) helps to train the decoupled system by directly supervising the domains S_F and S_I operate on, whilst enabling them to learn a regularisation that does not negatively impact "unseen" image regions. Furthermore, the use of a multi-domain loss function is consistent with DuDoRNet, which, owing to its dual-domain architecture, deploys a sophisticated multilayer k-space and image domain loss. While a simple MSE or MAE is sufficient to train our model, the proposed loss function [Eq. (18)] was useful in our comparisons and helped AliasNet to surpass or match DuDoRNet in terms of peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). EXPERIMENTAL CONFIGURATION Two complex-valued public MR datasets have been used to train and test our proposed method, each featuring raw 2D k-space of three-dimensional (3D) volumes. Image-domain magnitudes of all volumes were normalised to [0, 1]. There was no overlap between the training, validation and test sets. Single-coil images are obtained with the emulated single-coil methodology as per each dataset's implementation [58,59]. As the intent of AliasNet is to improve the regularisation of phase-encode artefacts, single-coil images are utilised in this study to observe the improved regularisation of image artefacts without additional spatial correlations. Further to this, the tested 2D and dual-domain networks were originally developed for single-coil image reconstruction [37,39,53,54]. Calgary-Campinas We use the Calgary-Campinas brain dataset [58] to investigate the behaviour of our proposed 1D regularisation technique in several ablation studies. The dataset consists of 45 fully sampled T1w volumes acquired from a 12-channel clinical MR scanner (Discovery MR750; General Electric (GE) Healthcare, Waukesha, WI). In total, there are 7654 slices. Training consists of 25 subjects and 4254 slices. Validation consists of 10 subjects and 1700 slices. To avoid evaluating metrics over background images, the test set consists of 10 subjects and the central 1270 slices. Matrix size is 256 × 256. FastMRI We also train and test on a subset of the NYU fastMRI single-coil knee dataset [59,60]. In this study, we utilise the 3 Tesla coronal proton density (PD)-weighted images without fat suppression. This dataset consists of 12,366 slices from 332 subjects, captured on one of three clinical 3T MR scanners (Siemens Magnetom Skyra, Prisma and Biograph mMR). The data was acquired with a 15-channel knee coil array. Training consists of 8293 slices from 223 volumes. The validation set contains 2081 slices from 56 volumes. The test set is 1567 central slices from 53 volumes. Matrix size is 320 × 320. Training Methodology Discrete Fourier space was sampled using fixed 1D Gaussian random masks for each under-sampling ratio and anatomy. We use PSNR and SSIM to evaluate closeness to the original image. All experiments were conducted on an NVIDIA SXM-2T V100 graphics processing unit (GPU) with 32 GB of vRAM. All networks were implemented in PyTorch and trained using the Adam optimiser. All DcCNN and ISTA-Net implementations were trained with a learning rate of 10⁻⁴, a batch size of 10, and the custom loss function of Eq. (18).
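A hedged PyTorch sketch of a multi-domain training loss in the spirit of Eq. (18) is given below; the weight names, FFT axis conventions and values are assumptions made for illustration and are not the authors' released code.

```python
import torch
import torch.nn.functional as F

def l1c(a, b):
    """L1 distance between complex tensors via their real/imag parts."""
    return F.l1_loss(torch.view_as_real(a), torch.view_as_real(b))

def multi_domain_l1(x_hat, x0, lam_k=0.1, lam_f=0.3, lam_p=0.3):
    """Image-domain L1 plus weighted L1 terms in k-space and the two
    intermediate 1D-DFT domains, mirroring the structure of Eq. (18)."""
    loss = l1c(x_hat, x0)                                             # image
    loss = loss + lam_k * l1c(torch.fft.fft2(x_hat), torch.fft.fft2(x0))
    loss = loss + lam_f * l1c(torch.fft.fft(x_hat, dim=-2),           # Φ_F
                              torch.fft.fft(x0, dim=-2))
    loss = loss + lam_p * l1c(torch.fft.fft(x_hat, dim=-1),           # Φ_P
                              torch.fft.fft(x0, dim=-1))
    return loss

x0 = torch.randn(1, 1, 64, 64, dtype=torch.complex64)
x_hat = x0 + 0.05 * torch.randn_like(x0)
print(multi_domain_l1(x_hat, x0).item())
```

Because all four terms are linear transforms of the same residual, the extra terms act as a per-domain re-weighting of the gradient rather than adding new information, which matches the observation above that they mainly improve gradient propagation to the 1D layers.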
ISTA-Net was configured as ISTA-Net+ and also used the matrix-inverse loss as per [39]. DuDoRNet and MD-Recon-Net were trained following the original implementations [53,54], with batch sizes of 5 and 10, respectively. However, when training on the knee dataset we reduced the number of dilated residual dense blocks (DRD) of DuDoRNet from 3 to 2 within each of its dilated residual dense network (DRD-Net) layers. This is because the knee dataset is substantially larger than the brain dataset, where 3 DRD would require several weeks to complete just 200 epochs. In contrast, it takes 1 day for our proposed DcCNN + AliasNet network to converge. Additionally, our network features approximately 3× fewer parameters than DuDoRNet, meaning the reduced model is a closer comparison. All networks were trained until convergence, and the network with the lowest validation loss was chosen for testing. Complex numbers are treated as separate input channels by the proposed reconstruction models. ABLATION STUDIES To establish the relationship between the iterations of S_F and S_I and reconstruction quality, we explore the combination of our proposed 1D AliasNet modules with the 2D regularisation found in DcCNN [37] and ISTA-Net [39]. As per the AM algorithm described in Section 2.3.1 and Figure 3c, the 1D recovery is interleaved between 2D steps. We also experiment with shared and un-shared configurations. Here, shared 1D modules indicate that the weights of the iterations internal to S_F and S_I are common between 2D update steps; S_F and S_I do not share weights with each other. Alternatively, un-shared means that every 1D iteration is unique throughout the reconstruction. Impact of AliasNet Iterations Table 1 compares the total number of unique 1D parameters necessary for shared and un-shared configurations given K_F iterations of S_F, K_I iterations of S_I and T 2D steps. Figure 4 compares the relative scaling between iterations of S_F and S_I in both shared (top) and un-shared (bottom) configurations, combined with a reference DcCNN [37] network. DcCNN is configured with 5 data-consistency layers, 5 convolutional layers and 32 filters per convolution (D5C5). Experiments are conducted at a fixed reduction factor. We see that S_I improves the D5C5 estimate, with PSNR converging at K_I = 5. By comparison, S_F does not improve PSNR as significantly. This suggests that convolutions applied in image space are more effective than the intermediate-domain denoising of S_F. We also note that S_I in the shared configuration provides PSNR scores similar to un-shared. On the other hand, shared S_F layers do not benefit from K_F > 1. For simplicity, all subsequent experiments cascade S_F and S_I layers by fixing K_F = 1 and K_I = 5, as per the configuration in Table 1. Comparison to Large Versions Using the K_F = 1, K_I = 5 configuration, we compare the performance of DcCNN [37] and ISTA-Net [39] boosted with AliasNet layers against larger versions of the original networks. The objective is twofold: 1. Demonstrate that AliasNet enables better scaling with the total number of parameters compared to simply increasing the size of the original network. 2. Demonstrate that a combined 1D + 2D CNN improves the regularisation of phase-encode artefacts compared to a 2D-only approach. Average reconstruction performance is summarised in Table 2, where the first rows of DcCNN and ISTA-Net are the baseline models (not boosted by AliasNet). Subsequent rows then increase network size by increasing the number of filters, increasing the number of cascaded convolutions and, in the instance of DcCNN, increasing the number of convolutional layers in each 2D denoising step.
AliasNet models are equivalent to the 2D baseline networks, with 1D regularisation interleaved between 2D denoising steps. While the shared configuration uses the same K_F + K_I = 6 1D modules between each 2D denoising step, the un-shared networks deploy a total of (K_F + K_I) × T = 30. For both DcCNN and ISTA-Net, and at all reduction factors, the inclusion of AliasNet as an additional 1D regularisation achieves the best PSNR and SSIM scores. This feat is particularly notable for the shared configuration, where only 3.6% and 2.8% additional parameters are necessary for DcCNN and ISTA-Net, respectively. Figures 5a, 5b illustrate the benefits afforded by AliasNet for two sample brain images. It is noted in the error maps that noise-like and faint image features are easily lost through under-sampling, most evidently at R4. As highlighted at the locations of the red and blue arrows, the AliasNet models better preserve these regions in the reconstruction. VS STATE-OF-THE-ART ALGORITHMS To further explore AliasNet, we compare against the state-of-the-art dual-domain reconstruction techniques DuDoRNet [53] and MD-Recon-Net [54] using the Calgary-Campinas brain and fastMRI knee datasets. As with the ablation study, baseline DcCNN and ISTA-Net networks are also included. Figures 6a, 6b compare the reconstruction methods for sample images at R4 and R6 of the Calgary-Campinas brain dataset. AliasNet-boosted DcCNN and ISTA-Net yield the highest PSNR and SSIM scores. Compared to the state-of-the-art method DuDoRNet, AliasNet improves the reconstruction of the R4 test image from 35.6 dB to 36.7 dB. The error map reflects this PSNR advantage. FastMRI Knee For the knee images, the frequency- and phase-encode directions are transposed to columns and rows, respectively. Sample reconstructions at R6 and R8 are demonstrated in Figures 7c, 7d, with average performance over multiple reduction factors presented in Table 3. We see again that the AliasNet-boosted networks outperform the baseline DcCNN and ISTA-Net implementations. In fact, our PSNR and SSIM scores have become competitive with DuDoRNet. This is an important outcome, as it is believed that the large receptive field afforded by DuDoRNet's DRD-Net layers is beneficial to the reconstruction of the higher-resolution knee images (320 × 320) compared to that of the brains (256 × 256); DcCNN and ISTA-Net deploy simple 3 × 3 convolutions that, by comparison, struggle to capture long-distance interactions. However, DuDoRNet required approximately 41 days to complete 130 epochs on the 8293 knee slices. As both DcCNN and ISTA-Net employ relatively simple network architectures, and the proposed AliasNet modules are designed to supplement the 2D reconstruction, the AliasNet-boosted networks require just 23 hours for convergence. This suggests that our approach of regularising phase-encode under-sampling artefacts in the direction of under-sampling provides a more efficient model from which to recover an MR image. 1D Only Reconstruction Recently, Wang et al. explored a fully 1D CNN to recover multi-coil phase-encode under-sampled MRI [57]. The network achieves state-of-the-art performance when the number of training samples is limited, with performance that remains competitive with 2D techniques as the number of training samples increases. The problem posed is similar to that solved by our S_I [see Eq. (15)], with an additional low-rank constraint to interpolate missing k-space from captured values.
Our approach differs from this in three key respects. Firstly, their low-rank constraint is applied directly to Φ_P x, which we only utilise for data consistency in S_I(x_c, (Φ_P x)_c | Θ_I). Secondly, they do not consider the 1D aliasing artefacts present in Φ_F x that we denoise in S_F((Φ_F x)_c, y_c | Θ_F). Lastly, AliasNet was developed to supplement existing DL 2D solvers and therefore features a minimal number of parameters. We therefore investigate the suitability of AliasNet modules for 1D-only reconstruction. In this configuration, the 1D layers illustrated in Figures 3a, 3b have been scaled into a 687.2K-parameter, 1D-only network. Here, the number of 1D convolution layers per iteration has increased from 3 to 6, the number of filters per layer from 8 to 32, and K_F and K_I have been set to 5 and 13. The number of parameters is similar to [57]. For simplicity, we continue to experiment with single-coil MRI; AliasNet is easily extended to multi-coil by combining the per-coil reconstructed images with the square root of the sum of squares. Figure 8 compares the AliasNet 1D reconstruction, the 2D reconstruction from DcCNN configured as a larger D5C7, and a combined 1D + 2D reconstruction (the 2D network is D5C5). We find competitive reconstruction performance between the 1D and 2D techniques, with PSNR scores only 0.4 dB apart. The combined reconstruction, however, is 1.2 dB higher. As per the difference image and the regions indicated by the red and blue arrows, the combined reconstruction better captures image features that are otherwise lost or corrupted in the alternative reconstructions. These findings are supported by Table 4. We suggest that the extension of AliasNet to multi-coil MRI should include the low-rank constraint, such that the reconstruction benefits from standard 2D regularisation, 1D regularisation of the columns of the image (S_I), our proposed 1D regularisation of the columns of the intermediate domain Φ_F x (S_F), and the low-rank constraint on the columns of the intermediate domain Φ_P x as suggested by Wang et al. [57]. DISCUSSION To explore the impact of AliasNet modules, we evaluated their performance by pairing them with two popular CS-MRI networks. For the sake of fairness, we also examined the effect of increasing the number of 2D operations by increasing the number of filters, increasing the number of convolutions, or increasing the number of denoising steps. Our findings suggest that the inclusion of 1D AliasNet layers is more beneficial to the reconstruction than additional 2D operations. We then compared the AliasNet-boosted networks against dual-domain reconstruction methods and found that AliasNet performance was better whilst also requiring fewer parameters and less training time. Many existing DL CS-MRI algorithms are dependent on image-domain operations, only integrating k-space information into data consistency layers and loss function design. Two major limitations have been identified with this approach. Firstly, under-sampling artefacts are structured and non-local, obscuring image features in potentially unrecoverable ways [53]. Secondly, the reconstruction process typically utilises 2D operations, therefore limiting regularisation to an idealised 2D transformation. Recent state-of-the-art contributions have explored dual-domain architectures, executing convolutional operations directly in k-space. However, large distances between sampled points may limit the utility of k-space convolutions. We found that at higher reduction factors for the brain dataset (256 × 256), and at all reduction factors for the larger knee images (320 × 320), MD-Recon-Net [54] is unable to improve the reconstruction compared to DcCNN [37] or ISTA-Net [39]. By comparison, DuDoRNet solves this non-local problem by invoking a sophisticated DRD-Net with a large receptive field. While this approach is effective compared to MD-Recon-Net (as demonstrated in our comparisons), the design is computationally expensive. Further, it does not directly address the structured nature of phase-encode CS-MRI artefacts. Our proposed AliasNet decouples CS-MRI artefacts into contiguous 1D signals, and its convolution operations are performed on either the aliased image columns (S_I) or the aliased intermediate k-space (S_F). In either case, convolution in under-sampled k-space is avoided and the expected structure of CS artefacts is directly penalised. This enables our simple AliasNet architecture to improve the reconstruction of DcCNN and ISTA-Net to above DuDoRNet, even at high reduction factors.
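For reference, the root-sum-of-squares combination mentioned above is a one-liner; this sketch assumes per-coil reconstructions are already available and uses no coil sensitivity maps.

```python
import numpy as np

def root_sum_of_squares(coil_images, coil_axis=0):
    """Combine per-coil reconstructions into one magnitude image via RSS,
    the simple multi-coil combination referenced above (no sensitivities)."""
    return np.sqrt(np.sum(np.abs(coil_images) ** 2, axis=coil_axis))

# Illustrative 8-coil stack of complex reconstructed images.
rng = np.random.default_rng(3)
coils = rng.standard_normal((8, 320, 320)) + 1j * rng.standard_normal((8, 320, 320))
print(root_sum_of_squares(coils).shape)  # -> (320, 320)
```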
CONCLUSION In this work, we have introduced a novel regularisation layer for phase-encode under-sampled MRI that enhances the reconstruction ability of existing 2D reconstruction techniques. The proposed AliasNet leverages the excellent 1D incoherence properties of phase-encode under-sampled MRI. The combined 1D and 2D regularisation better models image artefacts, producing superior performance scaling compared to simply increasing the size and number of 2D layers in the original networks. In fact, increasing the model size by just 3-4% improves the reconstruction of DcCNN [37] and ISTA-Net [39] to above state-of-the-art dual-domain networks such as MD-Recon-Net [54] and DuDoRNet [53]. We have also found competitive performance from a completely 1D reconstruction network based on AliasNet, potentially enabling high-quality reconstruction for data-constrained applications. In our experiments, image quality benefited more from the proposed image-domain regularisation of S_I [Eq. (15)] than from the intermediate Fourier regularisation of S_F [Eq. (14)]. However, their combination serves to boost image quality further. While the objective of this work is to improve existing reconstruction techniques, we suggest that a joint 1D and 2D reconstruction strategy be developed that limits the necessary data consistency (DC) layers within each denoising iteration. In the implementation investigated in this paper, we require a 1D DC operation for each 1D AliasNet module. This is a consequence of the AM approach to joint 1D and 2D reconstruction. While this solution yields simple construction and compatibility with existing 2D networks, it is inefficient to perform many fast Fourier transform (FFT) operations. Further, the evaluation of the 1D and 2D layers must be performed sequentially rather than in parallel. As such, it may be useful to select and un-roll an optimisation method that limits the number of DC operations and enables parallel computation of the regularisation functions. Another avenue to be explored is the 1D reconstruction of under-sampled projection data, wherein missing sinogram projections are populated via an approach similar to our proposed intermediate Fourier recovery.
Figure 2: Visual comparison between artefacts of 2D random (a) and 1D random (b) k-space under-sampling schemes. From left to right are the sampling mask, PSF, zero-filled image and error map.
Figure 3: Proposed 1D PG technique for accelerated phase-encode MRI. (a) Implementation of the 1D denoiser D(·|Θ); (b) proposed update procedure for S(·,·|Θ); (c) proposed 1D and 2D reconstruction procedure, where S_F and S_I can be configured to share weights within S_F or S_I iterations (w/ WS) or be independent between iterations (w/o WS); T is the total number of 2D iterations. (d) Illustrated cascade of S_F and S_I, which are defined by Eqs. (14), (15).
Figure 5: Reconstruction performance demonstrated at R3 (a) and R4 (b) of sagittal cross-sectional brain MR images, with the zoomed-in area (red rectangle) highlighting regions where reconstruction performance is notably improved by AliasNet layers. DcCNN and ISTA-Net are the baselines. PS and P indicate the inclusion of AliasNet in shared and un-shared configurations. Values ∈ [0, 1].
Figure 6: Reconstruction performance demonstrated for the brains at R4 (a) and R6 (b). Featured in this comparison are DcCNN configured as D5C5, ISTA-Net, the dual-domain networks MD-Recon-Net and DuDoRNet, and the AliasNet-boosted DcCNN and ISTA-Net implementations without weight sharing. The zoomed area (red rectangle) highlights regions where the reconstruction is notably improved by AliasNet compared to the baseline DcCNN and ISTA-Net implementations. Values ∈ [0, 1].
Table 1: Number of unique 1D AliasNet layers with respect to 2D update steps T. Iterations K_F and K_I relate to the 1D minimisation problems S_F [Eq. (14)] and S_I [Eq. (15)], respectively. This configuration is used in all AliasNet experiments.
Table 2: Average reconstruction performance for the Calgary-Campinas brain dataset (mean ± std). Included in this table are the numbers of parameters for the tested networks. Bold and underlined indicate best and second-best outcomes. AliasNet models combine K_F = 1 (S_F), K_I = 5 (S_I) and T = 5 (2D denoising steps). AliasNet models with 6 and 30 1D layers are in the shared and un-shared configurations, respectively. The increase in parameters is with respect to the baseline model without modification (top row of DcCNN and ISTA-Net).
Table 3: Average reconstruction performance for the Calgary-Campinas brain and fastMRI knee datasets (mean ± std). Included are the numbers of parameters for the tested networks and associated training information. Bold and underlined indicate best and second-best outcomes, respectively. AliasNet models combine K_F = 1 (S_F), K_I = 5 (S_I) and T = 5. 1D layers are in the shared (top) configuration.
Table 4: Comparison between a large 1D-only AliasNet, a 2D-only reconstruction via DcCNN configured as D5C7, and the combined 1D + 2D reconstruction configured as P1F5 + D5C5. The dataset used is Calgary-Campinas (mean ± std).
Pre-impact Albedo Map and Photometric Properties of the (65803) Didymos Asteroid Binary System from DART and Ground-based Data This study provides a pre-impact map of the albedo of the Double Asteroid Redirection Test (DART) target Dimorphos corrected for all the effects of viewing geometry, as well as an estimate of photometric roughness for the hemisphere imaged by DART. Other photometric properties are derived for the (65803) Didymos binary system based on DART and ground-based measurements obtained at JPL's Table Mountain Observatory. The roughness, geometric albedo, phase curve and phase integral, and single particle phase function are typical of the S-family of asteroids. The major remaining uncertainty lies in the behavior of the phase curve below 7°. These results provide a baseline for comparison with Hera measurements, leading to an understanding of the quantitative effects of the kinetic impactor mitigation strategy. Introduction Alongside NASA's campaign to search for and track near-Earth objects (NEOs) is the development of mitigation strategies for deflecting or disrupting an NEO on a collision trajectory to Earth. The Double Asteroid Redirection Test (DART) mission was NASA's first demonstration of the kinetic impactor technique, with a goal of quantifying the orbital change in the companion of the asteroid (65803) Didymos, Dimorphos (Cheng et al. 2018; Chabot et al. 2023; Cheng et al. 2023; Daly et al. 2023; Li et al. 2023). Launched from Vandenberg Space Force Base on 2021 November 24, the DART spacecraft impacted Dimorphos on 2022 September 26 and decreased its orbital period by an unexpectedly large 33 minutes (Thomas et al. 2023). An image of the binary system taken from Palomar Observatory 3.5 days after impact is shown in Figure 1: a large dust tail is still prominent. Although DART was primarily a technology demonstration, valuable scientific data were returned during its short mission. Perhaps most striking are a few images of Dimorphos obtained just seconds before impact, enabling a modestly robust analysis of the physical state of its surface. This pre-impact characterization is important to quantify the effects of the collision and optimize mitigation efforts, as well as to understand the nature of its surface. The European Space Agency's Hera spacecraft plans on performing a detailed investigation of the post-impact Dimorphos beginning in 2026, so it is important to establish a baseline characterization of the asteroid. Data returned from the spacecraft were limited owing to the mission's modest cost, focused goal, and emphasis on technology rather than science. For example, the solar phase angle excursion is limited to ∼59°, and only one hemisphere was imaged at high spatial resolution (∼20 cm pixel⁻¹). We thus included as part of our investigation a program to obtain a solar phase curve at Table Mountain Observatory (TMO); as an NEO, Dimorphos travels through a large range of solar phase angles. Of course, our observations include the binary system, so our telescopic photometric measurements apply to both bodies, except for roughness, which could be derived from the image of Dimorphos obtained prior to impact.
The prime goal of this investigation is to provide a pre-impact photometric baseline of Dimorphos by quantifying the albedo of the asteroid, its surface roughness, and several fundamental photometric parameters, including the single particle phase function, which is related to the size of surface particles, and the single scattering albedo. Characteristics of the pre-impact surface such as friability, cohesiveness, and compaction, among others, can be traced directly to the post-impact observations to be gathered by Hera, enabling better mitigation strategies. One key missing characteristic is the surface compaction state, which can be modeled with observations very close to opposition (<6°). The smallest possible solar phase angle from the TMO campaign was 6.7°. The system goes through solar phase angles <1° prior to the Hera encounter in 2026, but of course these measurements will be dominated by Didymos, with only a small contribution (∼4%) from the post-impact Dimorphos. Didymos could also have been altered by the impact, as particles from the long-lived ejecta caused by the impact may have accreted onto its surface.
The second goal of this investigation is to understand the placement of Didymos-Dimorphos in the family of asteroids, especially the S-family. For example, its albedo can be related to the effects of space weathering, as this process lowers the surface albedo of S-type asteroids (Pieters et al. 2000; Hapke 2001). The S-family of asteroids spans a large range of albedos (Tedesco et al. 1989), and that range can be related not only to composition but also to the amount of space weathering and thus surface age (Binzel et al. 1996). Regolith scattering properties can be related to the interactions between Dimorphos and Didymos, such as tidal effects and the transfer of regolith particles from one body to the other. Finally, comparison of photometric parameters such as roughness, compaction state, and the size of regolith particles with those of other asteroids and planetary bodies enables a comparison of evolutionary processes among asteroids, other small bodies, and planetary surfaces in general.
Observations
This investigation relies on two prime data sets: images acquired by the Didymos Reconnaissance and Asteroid Camera for Optical navigation (DRACO) prior to impact that were obtained at a solar phase angle of 59°, and a solar phase curve of the Didymos-Dimorphos system obtained at TMO between 2022 October 21 and 2023 February 10 UTC. The DRACO data are well suited to constructing an albedo map (ideally a map of normal reflectance, which has all the effects of viewing geometry eliminated) and deriving photometric roughness, while the phase curve from TMO is ideal for determining the geometric albedo, the phase integral, the single scattering albedo, and the single particle phase function. Unfortunately, no observations at opposition to model the compaction state of the surface were obtained, or even possible, during the 2022-2023 apparition. The asteroid will go through an opposition with a phase angle <1° in 2024 and 2026 before the Hera encounter. We also obtained data with the 200-inch Hale telescope at Palomar Observatory on 2022 September 30, but because of the extensive, still extant dust tail of the asteroid (Figure 1), these observations were not suited to accomplishing photometric measurements.
We planned our observing run to capture the maximum excursion in solar phase angle, with 5° increments in data, except at small phase angles (less than ∼18°), where we planned measurements every degree. Gaps exist where the Moon approached within an angular distance of ∼30°. This range is shown in Figure 2 with both planned and successful observations (the slight offsets between the two data sets are due to ephemeris updates). We obtained 22 nights from 2022 October 21 to 2023 February 10 UTC with TMO's 1.0 m Boller and Chivens telescope and the 2K Spectral Instruments CCD camera. Our field of view was 6.2′ × 6.2′, and we obtained a total of 1128 images using a Sloan r′ filter with an effective wavelength of 0.62 μm. We covered ∼150 minutes of data each night outside of the mutual events. Our exposure time was 60 s in 2022 October and extended all the way to 240 s in 2023 February as the asteroid brightness dimmed from 15.7 to 18.7 mag. Both maximum and minimum solar phase angles were obtained, at 76° in October and 6.7° in January, along with dense coverage in between. Most of our nights were clear, with an average seeing around 2.5″ (range 1.5″-4.8″), but to cover the full phase curve, we occasionally observed during light cirrus or periods of poor seeing. Data were averaged over one rotation period of 135 minutes. Figure 3 is a typical observation from TMO, and Table 1 summarizes the observations.
Our dual data sets accentuate the importance of using a combination of spacecraft and ground-based observations to extend the capabilities of both collections of data. Spacecraft measurements generally provide spatial resolution, while ground-based observations cover a wider temporal excursion, particularly for flyby or even briefer encounters such as DART's, and a larger range in solar phase angles. For photometric modeling, a large excursion in solar phase angle is required to perform robust and unique fits to physical parameters. The single particle phase function and surface roughness are tightly correlated and thus difficult to uniquely determine. The most effective way to disentangle these two parameters is to determine the roughness from resolved spacecraft images and then separately determine the particle phase function from a well-determined solar phase curve (Helfenstein et al. 1988; Buratti 1991; Buratti et al. 2004).
Phase Curve
Our data reduction and on-chip photometry followed the procedures outlined in Mommert (2017). To generate bias-corrected and flattened science images, we used the "imred" package. Within the reductions we used the Gaia DR3 star catalog (Gaia Collaboration et al. 2023) for both astrometric plate solutions and photometry. Field stars used for calibrations were limited to those with solar colors, which typically allowed for about a dozen calibration stars. Photometry for field stars used an aperture based on the curve of growth, while we manually selected the photometric aperture for Didymos, usually in the range 8″-24″. After obtaining calibrated Didymos photometry for individual frames, we deselected those measurements known to take place during mutual events and obtained a nightly mean over one 135-minute rotation period. We assigned qualitative data weights for each individual night based on weather and seeing values.
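The nightly means are then placed on a common scale by removing the varying Sun and Earth distances before the phase curve is fit. The sketch below shows this standard reduction and a linear phase-coefficient fit; the numerical inputs are placeholders constructed to mimic the fit quoted in the next paragraph, not the actual TMO measurements.

```python
import numpy as np

def reduced_magnitude(m_obs, r_au, delta_au):
    """Remove the distance dependence: m(1,1,alpha) = m - 5 log10(r * Delta)."""
    return m_obs - 5.0 * np.log10(r_au * delta_au)

# Hypothetical nightly means (apparent mag, heliocentric r, geocentric Delta,
# solar phase angle); chosen so the fit reproduces ~0.033 mag/deg and ~18.17.
m_obs     = np.array([15.30, 17.70, 18.99])
r_au      = np.array([1.05,  1.25,  1.55])
delta_au  = np.array([0.08,  0.35,  0.85])
alpha_deg = np.array([76.0,  40.0,  6.7])

m_red = reduced_magnitude(m_obs, r_au, delta_au)
beta, m0 = np.polyfit(alpha_deg, m_red, 1)   # slope, intercept
print(f"phase coefficient ~ {beta:.3f} mag/deg, 0-deg intercept ~ {m0:.2f}")
```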
Figure 4 shows the reduced phase curve of the system. The curve is remarkably linear, with a phase coefficient of 0.033 ± 0.001 mag deg−1. The extrapolation of the curve to 0° yields a reduced opposition magnitude in the R filter of 18.17 ± 0.04, or 18.36 ± 0.04 in the V filter, assuming a V − R of 0.19 (Moskovitz et al. 2024). The figure shows that the phase curve of the system is typical of an S-type asteroid between 6° and ∼40° and then drops off more steeply at larger solar phase angles, but not as steeply as a typical C-type asteroid (Helfenstein & Veverka 1989; Li et al. 2015). The graph also illustrates another point: a simple extrapolation to 0° is almost certainly not valid for this asteroid, and additional observations at opposition in 2024 and 2026 are required for a complete solar phase curve. These results are in good agreement with those of Hasselmann et al. (2024).
The geometric albedo is a fundamental photometric parameter. It is a measure of the integral brightness of a celestial body at a solar phase angle of 0°, compared to a perfectly diffusing disk of the same size (Horak 1950). We computed the geometric albedo (p) in both the R and V filters with the following formula (Horak 1950):
p = (r′ Δ / (R ρ))^2 × 10^(0.4 (m_Sun − m_target)),
where m_target is the mean opposition magnitude of the asteroid system, m_Sun is the magnitude of the Sun at the same wavelength, R is the semimajor axis of Earth's orbit (1 au), ρ is the radius of the combined cross section of both objects (379 m, assuming dimensions of 819 × 801 × 607 m for Didymos and 179 × 169 × 115 m for Dimorphos), r′ is the semimajor axis of the system (1.644 au), and Δ is the distance between the system and Earth at opposition (0.644 au). We obtain a visible geometric albedo of 0.16 ± 0.02, which is identical to that obtained earlier (Daly et al. 2023) and slightly lower than the average of ∼0.20 for S-type asteroids (e.g., Tedesco et al. 1989). The geometric albedo in the Sloan r′ filter is 0.19 ± 0.02. However, these values are probably incorrect, as they do not include the effects of any opposition surge. If an opposition surge of 0.41 mag between 6° and 0° from a conglomerate S-type phase curve is assumed (Helfenstein & Veverka 1989), the visible geometric albedo is 0.23 ± 0.02, which is slightly high for an S-type asteroid. This uncertainty underscores the importance of obtaining opposition measurements.
The other limitation on this work is that our measurements of the phase curve are of the aggregate system: the value for the geometric albedo assumes that the two objects are the same. Barnouin et al. (2024) state that Dimorphos is "slightly brighter" than Didymos, a finding that is consistent with its younger age and less darkening due to space weathering. In addition, the post-impact measurements of both bodies may be affected by contamination from the dust tail created by the impact.
The phase integral, q, which expresses the directional scattering properties of a planetary body, is given by
q = 2 ∫ Φ(α) sin α dα (integrated from 0 to π),
where Φ(α) is the disk-integrated phase curve normalized to unity at 0° (we assumed the linear y-intercept depicted on the y-axis in Figure 4 as our normalization point). Using a four-point Gaussian quadrature formula (Chandrasekhar 1960) and making the reasonable assumption that the values at 110° and 149° (the two phase angles in the quadrature that we did not observe) are 0.05 and 0.01, respectively, based on values of objects of similar albedo at these phase angles (e.g., Buratti et al. 2017, Figure 1), we obtain a value of 0.48 ± 0.04. The Bond albedo, defined as A_B = p·q, is 0.09 ± 0.01.
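A minimal numerical sketch of the two quantities just derived is given below: the geometric albedo from the reduced opposition magnitude and the phase integral via Gauss-Legendre quadrature. The solar magnitude m_Sun,V ≈ −26.76 is an assumed literature value, and the quadrature wrapper and linear phase-curve demo are illustrative, not the authors' code.

```python
import numpy as np

AU_M = 1.495978707e11  # metres per au

def geometric_albedo(m_target, m_sun, rho_m, r_au, delta_au):
    """p = (r' * Delta / (R * rho))^2 * 10^(0.4 (m_sun - m_target)), R = 1 au."""
    rho_au = rho_m / AU_M
    return (r_au * delta_au / rho_au) ** 2 * 10 ** (0.4 * (m_sun - m_target))

# V-band check with the values quoted above
p_v = geometric_albedo(18.36, -26.76, 379.0, 1.644, 0.644)
print(f"p_V ~ {p_v:.2f}")  # ~0.16

def phase_integral(phase_fn, n=4):
    """q = 2 * integral_0^pi Phi(alpha) sin(alpha) d(alpha),
    with n-point Gauss-Legendre quadrature mapped onto [0, pi]."""
    nodes, weights = np.polynomial.legendre.leggauss(n)
    alpha = 0.5 * np.pi * (nodes + 1.0)
    return np.pi * np.sum(weights * phase_fn(alpha) * np.sin(alpha))

# A purely linear 0.033 mag/deg phase curve gives q ~ 0.5,
# consistent with the value found above.
q = phase_integral(lambda a: 10 ** (-0.4 * 0.033 * np.degrees(a)))
print(f"q ~ {q:.2f}")
```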
Because our phase curve was obtained in the R filter, the Bond albedo above applies to that wavelength. The bolometric Bond albedo is this quantity integrated over all wavelengths of reflected light. It is a measure of the energy balance on a planetary surface (total energy out/total energy in) and represents a fundamental parameter for understanding energy balance on a celestial body. Unfortunately, we lack measurements of the solar phase curve of the system at additional wavelengths.
Albedo Map
The DRACO images returned prior to the impact provide a data set to derive a map of normal reflectance and photometric roughness of one hemisphere of Dimorphos. Standard procedures have been developed over the years to create maps of normal reflectance, which have all the effects of viewing geometry removed (e.g., Buratti et al. 1990, 2017; Buratti & Mosher 1991, 1995; Hofgartner et al. 2023). Publicly available software such as Integrated Software for Imagers and Spectrometers (ISIS) and Video Image Communication And Retrieval (VICAR) have embedded subroutines that correct for these effects.
We made use of VICAR and the shape model (ver. 003) and backplanes provided by the DART team (and deposited into the PDS small bodies node) to correct the geometry of the image. Figure 5 shows the backplanes for the incident and emission angles (the solar phase angle is 59°), along with images of the reflected specific intensity (I/F) with and without a geographical grid. An image of normal reflectance corrected for viewing geometry was constructed using the following equation (Squyres & Veverka 1981; Buratti & Veverka 1983; Buratti 1984):
I/F = f(α) [A μ0/(μ0 + μ) + (1 − A) μ0],
where I/F is the specific intensity, μ0 is the cosine of the incidence angle (i), μ is the cosine of the emission angle (e), and f(α) is the surface solar phase function, which includes changes in intensity due to the physical character of the surface (roughness, the single scattering albedo, the single particle phase function, the compaction state of the optically active portion of the regolith, and coherent backscatter). Our going-in assumption was that A = 1 (a Lommel-Seeliger or lunar photometric function), which applies to low-albedo surfaces, but we found that a small "Lambert" component of about 5% provides the best fit. This small amount of multiple scattering supports the possibility of the system having the higher albedo predicted by an S-type opposition surge, although the laboratory measurements of Veverka et al. (1978) suggest that multiple scattering effects can be neglected for normal reflectances less than 0.3. (More complicated models, such as those of Hapke (1981, 1984, 1986, 1990), are not suited for this part of the investigation because the number of parameters to be fit, combined with the paucity of data, underconstrains the determination of those parameters.)
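To make the correction concrete, the sketch below inverts the equation above pixel by pixel, assuming the backplanes supply incidence and emission angles and that f(α) is normalized to unity at α = 0. It is a schematic version of the processing, not the actual VICAR pipeline.

```python
import numpy as np

def normal_reflectance(i_over_f, inc_deg, emi_deg, f_alpha, A=0.95):
    """Correct I/F to i = e = alpha = 0 with the mostly Lommel-Seeliger
    disk function quoted above (A = 0.95 is the 5% Lambert mix).
    Inputs are arrays from the image and its geometric backplanes."""
    mu0 = np.cos(np.radians(inc_deg))
    mu  = np.cos(np.radians(emi_deg))
    disk  = A * mu0 / (mu0 + mu) + (1.0 - A) * mu0
    disk0 = A / 2.0 + (1.0 - A)          # same expression at i = e = 0
    return i_over_f * disk0 / (f_alpha * disk)
```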
After corrections were made for the incidence and emission angles, the effects of the physical phase function (f(α)) were removed. For a purely Lommel-Seeliger object, the mean normal reflectance of the target is equal to the geometric albedo. With a 5% Lambert contribution to Equation (3), this approximation is still quite good. Following Equation (6) of Buratti & Veverka (1983), the geometric albedo of an object with A = 0.95 would be 0.17, which is very close to our value of 0.16 derived from the solar phase curve and size of the targets. Given the much greater uncertainties in the opposition surge, we normalized the average value of the albedo map to be 0.16, as shown in Figure 6. One of the more distinctive features of this map is the striations depicted on the surface. These structures are not artifacts, nor are they caused by an error in the shape model or geometric backplanes. As discussed in Section 4, other asteroids, including Didymos, and some of the small ring moons of Saturn share similar features: the common explanation may be that they are caused by the placement of smaller, brighter surface particles.
Roughness
Albedo and roughness are the only photometric parameters that can be reliably determined for Dimorphos from DRACO observations because of solar phase angle constraints and the limitations of a rapid encounter. Macroscopic roughness encompasses facets ranging in size from aggregates of particles to boulders, trenches, and craters. These features alter the specific intensity of a planetary surface in two ways: the local incidence and emission angles are changed by alteration of the surface profile from that of a smooth sphere, and they remove radiation from the scene by casting shadows. In addition, surfaces are removed from the line of sight of the observer. Two formalisms have been developed to quantitatively model this effect: Hapke's mean slope model (Hapke 1984) and the crater roughness model, which characterizes rough facets by a crater-like shape with a given depth-to-radius ratio (q) and fractional coverage (Buratti & Veverka 1985). Both models use idealized shapes: one way of visualizing the difference is to think of the craters as concave features on the surface while the mean slope angles are convex features. We make use of the crater roughness model, which has been applied to 19P/Borrelly (Buratti et al. 2004), different terrains on Titan (Buratti et al. 2006), Phoebe (Buratti et al. 2008), and high- and low-albedo terrains on Iapetus (Lee et al. 2010). The disk-resolved form of the model is particularly useful because it relates surface roughness to limb darkening, which occurs as the emission angle changes, rather than to solar phase angle. A major problem with using a disk-integrated solar phase curve to derive roughness is that roughness is convolved with other effects (such as the single particle phase function), and thus it cannot be derived uniquely (Helfenstein et al. 1988).
Disk-resolved measurements obtained by spacecraft are far more diagnostic of surface roughness than integral data sets. Furthermore, determining roughness from disk-resolved spacecraft data and fixing the roughness parameters in further modeling fits leads to a more robust determination of the other photometric parameters. Because we are in the geometric optics limit, the roughness model is wavelength independent for dark surfaces with a minimum of multiple scattering (which would partly illuminate primary shadows). Given that Dimorphos nearly follows a lunar (Lommel-Seeliger) photometric function, multiple scattering is minimal.
The roughness parameter can be determined uniquely from a spacecraft image (or sets of images) by the functional form of the model at any solar phase angle. Furthermore, we can "peer" below the resolution limit of the camera. Our model, like all current models, is scale invariant, with features as large as mountains and craters and as small as boulders or clumps of particles having the same effects. Helfenstein & Shepard (1999) show that small features, just clumps of a few particles, dominate, at least for the Moon, which is the only object for which we have ground truth. Thus, our model indicates the roughness of Dimorphos below the spatial resolution limit of the image we analyzed.
A scan of I/F extracted from the radiometric image in Figure 5 is shown in Figure 7(a), along with an approximate hand-adjusted best fit and a roughness fit based on a Python routine that employs Bayesian statistics (Mishra et al. 2021) in Figures 7(b)-(d). First, this image and resulting scan are ideal for fitting photometric roughness, as there is a wide range in emission angle that includes the characteristic inflection point at ∼75° (for a solar phase angle of 59°), which uniquely defines the functional form of the roughness model for each depth-to-radius value. Second, the hand-adjusted fit (Figure 7(b)) is better than the formal fit (Figure 7(c)), and the formal fit appears to mesh with only part of the curve. This result suggests that the roughness varies over the surface of the asteroid. Figure 7(d) is a fit that shows quantitatively that different sections of Dimorphos possess very different roughness characteristics. The physical reason for this result awaits further investigation, perhaps by the Hera mission. One possible interpretation is that rough facets in the section with the striations, which is less rough according to the model, are infilled with fine dust, which in turn makes them brighter. Or perhaps there are more boulders in the rough area. An independent analysis centering on boulder counts and geologic analysis finds that the surface roughness varies significantly: generally, lower elevations are smoother (Barnouin et al. 2024). Our scan traverses mainly smooth terrain (due to the need to capture a full range of emission angles), but even within this terrain there are substantial differences. Because our analysis peers below the resolution limit of the image, the two methods are thus not directly comparable, but both show substantial variations in roughness over Dimorphos.
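A useful reference when reading scans such as Figure 7 is the zero-roughness baseline: the limb-darkening profile of a smooth surface under the nearly Lommel-Seeliger photometry found above. The sketch below fits that smooth model to an I/F scan; departures from it (particularly near the ∼75° inflection) are what the crater-roughness model captures. The scan values, the fixed incidence angle, and all names are illustrative assumptions, not the Bayesian routine used in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def smooth_model(e_deg, c, A, inc_deg=59.0):
    """Zero-roughness baseline: scaled Lommel-Seeliger + Lambert profile
    versus emission angle at a (simplified) fixed incidence angle."""
    mu0 = np.cos(np.radians(inc_deg))
    mu = np.cos(np.radians(e_deg))
    return c * (A * mu0 / (mu0 + mu) + (1 - A) * mu0)

# Hypothetical scan samples standing in for values read off the image
np.random.seed(0)
e_deg = np.linspace(5, 85, 17)
i_over_f = smooth_model(e_deg, 0.03, 0.95) * (1 + 0.05 * np.random.randn(17))

(c_fit, A_fit), _ = curve_fit(smooth_model, e_deg, i_over_f, p0=[0.03, 0.9])
print(f"scale ~ {c_fit:.3f}, Lommel-Seeliger weight A ~ {A_fit:.2f}")
```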
Photometric Modeling
With a phase curve created from both ground-based and pre-impact DART data, and roughness already determined with the work described above, it is a straightforward exercise to fit global photometric parameters to the system. A full photometric model is summarized by the following expression for the reflectance r (Horak 1950; Chandrasekhar 1960; Goguen 1981; Hapke 1981, 1984, 1986, 1990, 1993, 2002, 2008):
r = (w/4π) [μ0/(μ0 + μ)] {[1 + B(α)] P(α) + H(μ0, w) H(μ, w) − 1} S(i, e, α),
where w is the single scattering albedo (the probability that a photon reflected from the surface will be scattered into 2π sr of space), B is the function representing the opposition surge (h and B0 describe the shape and amplitude of the surge, respectively, which are related to the compaction state), P(α) is the single particle phase function, H(μ0, w) and H(μ, w) are Chandrasekhar's multiple scattering H-functions (Chandrasekhar 1960), and S(i, e, α) is the function describing macroscopic roughness. The single particle phase function is usually described by a one- or two-term Henyey-Greenstein phase function defined by g, where g = 1 is purely forward scattering, g = −1 is purely backscattering, and g = 0 is isotropic. For the case of objects with low geometric albedos (less than ∼0.3) such as the Didymos-Dimorphos system, multiple scattering is not significant, and the H-functions are close to unity, so the equation can be approximated by Equation (3). We thus see that f(α) contains much important information about the physical properties of the surface. Because we lack observations of the opposition surge of the system, parameters that depend on these observations will not be modeled. Instead, model parameters based on aggregate data for S-type asteroids (Helfenstein & Veverka 1989; partial data are shown in Figures 4 and 8) have been adopted (Table 2).
Note that this sequence of fitting photometric parameters (fitting roughness to disk-resolved data, then fitting the single particle phase function and single scattering albedo with a well-defined phase curve) is far more robust than fitting all parameters to a disk-integrated solar phase curve, as it is not possible to determine unique unconstrained parameters with disk-integrated data alone (Helfenstein et al. 1988). Many derived photometric parameters are more a function of the range of phase angles observed and modeled than an expression of anything physical on the surface. Photometric modeling has also come under scrutiny because the results may have little to do with physical reality (see Shepard & Helfenstein 2007; response by Hapke 2008). These issues can be ameliorated if one realizes that it is comparisons among the results of the models that are most useful (every model is a physical idealization). The models do yield fundamental information on how rough surfaces are and whether the surface is forward- or backscattering, for example. Forward-scattering particles tend to be smaller, as photons are more likely to exit the particle in the forward direction before they are scattered again within the particle.
We closely followed the techniques of our previous work in fitting the model outlined by Equation (4) (e.g., Hillier et al. 1999, 2021; Buratti et al. 2022).
We adopted the approximate, hand-adjusted best-fit roughness of q = 0.16 fit to the disk-resolved image, which is equivalent to a mean slope of 18°, an h of 0.020, and an S(0) of 0.97 to define the opposition surge. These values are the averages for S-types from Helfenstein & Veverka (1989) (S(0) is an earlier terminology for the amplitude of the opposition surge). We find a single scattering albedo of 0.126 ± 0.008 and a g of −0.36 ± 0.01. Note that these values are wavelength dependent and apply to the Sloan r′ filter, while the roughness is wavelength invariant (in principle at least).
Figure 8 is a plot of the data shown with the model, and Table 2 provides a comparison of the results with other objects. These comparisons are important given the idealized nature of photometric models: it is the differences among kindred objects, and between different classes of objects, that offer clues to geophysical processes. For example, asteroids are in general more backscattering than icy moons and other higher-albedo bodies. One explanation for this trend is the presence of multiple scattering, which tends to scatter more isotropically, in bright surfaces. The table also illustrates the wildly different results that photometric modeling can produce (see, e.g., Rhea), a point that harks back to our warning that photometric fits are often just a function of the range in phase angles of the data set being fit.
Hapke's photometric model also yields results for the geometric albedo (p), the phase integral (q), and the Bond albedo (A_B). We find that these values (in the Sloan r′ filter) are 0.19, 0.38, and 0.073, respectively, which are in reasonable agreement with our results based on direct measurements from the solar phase curve, except for the substantially lower value of the phase integral. This discrepancy is due to the inclusion of the average S-type asteroid's opposition surge in the model, which "depresses" the phase curve and thus lowers the value of the area underneath it. The value from the TMO data with a linear extrapolation may have to be updated to a lower value if the system has a substantial opposition surge. However, the larger geometric albedo due to an opposition surge may "cancel out" this decrease.
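For readers who wish to reproduce curves like Figure 8, the sketch below evaluates a Hapke-type reflectance of the form of Equation (4) with a one-term Henyey-Greenstein phase function, a common analytic approximation to Chandrasekhar's H-function, and the fitted values quoted above (w ≈ 0.126, g ≈ −0.36, h = 0.020, B0 = S(0) = 0.97). The roughness factor S is passed in externally, since it was fit separately; the exact surge and shadowing forms used by the authors may differ from these textbook choices.

```python
import numpy as np

def H(x, w):
    """Analytic approximation to Chandrasekhar's H-function."""
    return (1 + 2 * x) / (1 + 2 * x * np.sqrt(1 - w))

def hapke_r(inc_deg, emi_deg, alpha_deg, w, g, B0=0.97, h=0.020, S=1.0):
    """Hapke bidirectional reflectance with a one-term Henyey-Greenstein
    P(alpha); S is the (separately fit) macroscopic roughness factor."""
    mu0 = np.cos(np.radians(inc_deg))
    mu = np.cos(np.radians(emi_deg))
    a = np.radians(alpha_deg)
    B = B0 / (1 + np.tan(a / 2) / h)                       # opposition surge
    P = (1 - g**2) / (1 + 2 * g * np.cos(a) + g**2) ** 1.5  # HG, g<0 backscatter
    return (w / (4 * np.pi)) * mu0 / (mu0 + mu) * (
        (1 + B) * P + H(mu0, w) * H(mu, w) - 1) * S

# Fitted values quoted above for the Sloan r' filter
print(hapke_r(30.0, 30.0, 59.0, w=0.126, g=-0.36))
```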
Discussion and Summary
Investigating the effects of a kinetic impactor event is key to formulating a strategy to mitigate potential future impacts by NEOs. Specific changes to the physical character of the surface of an NEO can be defined by modeling the target before and after the impact. The main goal of this paper is to define the albedo of Dimorphos, corrected for the effects of viewing geometry, and its photometric properties just prior to the impact of the DART spacecraft. These results will provide a baseline for the investigation of the Hera spacecraft, which is due to arrive at Dimorphos in 2026. The secondary goal is to understand where this binary asteroid lies in the family of S-type asteroids. Each asteroid flyby or rendezvous mission seems to prove that each asteroid is different, and defining that diversity not only is scientifically important but also provides foundational information for mitigation strategies.
The image obtained by DRACO at 59° contained nearly half of Dimorphos's surface, and it is thus ideal for deriving both an albedo map and photometric roughness of the imaged terrain. Although the solar phase curve measured at TMO extended over a larger range than is typical for asteroids (those in the main belt are restricted to solar phase angles less than about 35°), we lack the critical measurements at opposition that are key to defining the geometric albedo. Extrapolating to 0° using the linear phase coefficient of 0.033 mag deg−1 (which holds down to at least 6.7°) yields a visible geometric albedo of 0.16 ± 0.02, which is low for an S-type asteroid, but which is probably wrong, as almost all asteroids exhibit an opposition surge. There are exceptions, such as the C-type Jupiter Trojan 1173 Anchises (French 1987). Assuming a typical S-type opposition surge, the geometric albedo is 0.23 ± 0.02, which is more typical of S-types. The albedo map may need to be rescaled if a substantial opposition surge is observed in 2024-2026 when the solar phase angle is less than 1°. Perhaps the most intriguing feature of the map is the placement of albedo striations, which appear regardless of the photometric correction applied (Sunshine et al. 2023a, 2023b). It is not an artifact of the shape model either, as Figure 5 shows. Asteroid 25143 Itokawa shows similar but less extensive stripes with an adjacent pond of fine material, although the pond does not appear to be closely connected with the striations (Cheng et al. 2007; see Cartier 2019). The change in roughness on Dimorphos may be due to infilling of rough facets with dust from the impact that formed the striation. The asteroid's roughness is typical (Table 2), but the surface appears to be substantially smoother toward the ends of the radiating striations. The presence of fine-grained dust could also explain the higher albedo of the striations, as photons are less likely to be permanently absorbed within them. Other possibilities include flow features of surface dust, as on the inner small Saturnian moons (Buratti et al. 2019b), or tidal stress marks, such as those seen on Phobos. Although there are no other images of Dimorphos at additional geometries, an image of Didymos obtained by DRACO prior to the impact shows bright features on its surface and ponding of what appear to be fine particles.
The single particle phase function shows that the particles are slightly more backscattering than those of other S-family asteroids, while the single scattering albedo is lower than that of the other S-asteroids investigated. This result suggests that the regolith particles are more opaque than those of the typical S-asteroid, perhaps because the small surface gravity of Dimorphos means that more of the small particles (which are more forward scattering) formed in impact events escape. The small particles that may explain the higher albedo striations could be localized to those regions. C-type asteroids tend to be more backscattering and of course lower albedo, so that could explain why the phase curve beyond ∼60° becomes somewhat "C-like" (see Figure 4). Hasselmann et al. (2023) also noted this deviation in the phase curve of the system.
One limitation of our results is that light represented in the DRACO image is barely included in the telescopic disk-integrated measurements: the DART-imaged hemisphere of Dimorphos is only about 4% of the integral brightness of the system. Crater counts by Barnouin et al. (2024) suggest that the age of Dimorphos is 1/30 that of Didymos.
Because of the time-dependent accumulative effects of space weathering, gardening by meteoritic impacts, and other factors, there is no reason to expect that their surface properties would be comparable. This problem is compounded by the changes on both bodies due to the DART impact. Of course, our own current telescopic observations were obtained after the impact, and the degree of dust accumulation is unknown. Ground-based studies of the system between the two spacecraft visits will be obtained after significant accretion of dust and boulders on both bodies. Thus, the new ground-based data, including the key observations of the opposition surge, will not be strictly comparable to the pre-impact data.
Another tack is to search archived data from surveys to see whether there are prior pre-impact observations of the system near opposition. This task is beyond the scope and resources of our current project, but if it could be done as part of a future investigation, the results would help to quantify the effects of the impact, including the accumulation of dust on the surfaces of both bodies. Of course, this future data set would still possess the problem of being a composite of the entire Didymos-Dimorphos system.
On the more positive side, the albedo, roughness, and surface properties of at least one hemisphere of Dimorphos can be directly compared to Hera data. And this hemisphere is the area on which the impact occurred. Thus, the main goal of the DART mission, to understand key physical changes due to an impact to optimize mitigation strategies, will be realized.
Figure 1. Image obtained at Palomar Observatory with the 200-inch telescope on 2022 September 30 (3.5 days after the impact) in the Sloan r′ filter.
Figure 2. Planned and actual phase angle coverage at TMO during the 2022 October-2023 February campaign. The planned and actual phase angles and times do not exactly line up because of updates to the ephemeris.
Figure 3. A typical image of the Didymos-Dimorphos system obtained at TMO on October 21 11:05:37 UT, 25 days after impact. Didymos-Dimorphos is the object with the tail.
Figure 4. The reduced data from TMO (blue squares). For comparison, aggregate curves for C-type and S-type asteroids are shown. The latter phase curves are from Helfenstein & Veverka (1989) and Li et al. (2015). The line is a linear least-squares fit with a slope of 0.033 mag deg−1.
Figure 5. The four panels represent (a) the radiometrically corrected image of the DART target, (b) the same image with a geographical grid of latitude and longitude, and (c, d) the backplanes of geometric information: the emission and incidence angles (panels (c) and (d), respectively) computed with the DART Project's shape model version 003 (Daly et al. 2023; Barnouin et al. 2024). The image was obtained 8.559 s before closest approach with a spatial resolution of 26 cm pixel−1.
Figure 6. The normal reflectance of Dimorphos computed from the calibrated DRACO image with a corresponding color scale bar. The blue halo surrounding Didymos is a small residual signal in the background of the PDS image.
Figure 7.
(a) The position of the scan we used for modeling macroscopic roughness based on the crater model of Buratti & Veverka (1983). (b, c) The hand-adjusted fit that looks best and a formal fit to the depth-to-radius ratio, respectively. Panel (d) shows that much better fits can be obtained if the roughness changes with position. The upper 25%-30% of Dimorphos toward the limb is substantially rougher, as shown by the green model. The q-values of 0.25, 0.16, 0.19, and 0.79 correspond to Hapke mean slope angles (Hapke 1984) of about 18°, 12°, 14°, and 53°, respectively.
Figure 8. The composite data set from TMO and Helfenstein & Veverka (1989) for data at opposition (less than 6.7°) shown with the best-fit photometric model.
Table 1. Summary of Observations Obtained at TMO.
Table 2. Photometric Modeling and Fundamental Photometric Parameters of Selected Objects in Comparison to Didymos/Dimorphos. Notes: (a) No opposition surge. (b) Sloan r′ filter. (c) Assumed.
7,991.2
2024-03-01T00:00:00.000
[ "Physics" ]
Self-Powered Photoelectrochemical Assay for Hg2+ Detection Based on g-C3N4-CdS-CuO Composites and Redox Cycle Signal Amplification Strategy
Abstract: A highly sensitive self-powered photoelectrochemical (spPEC) sensing platform was constructed for Hg2+ determination based on the g-C3N4-CdS-CuO co-sensitized photoelectrode and a visible light-induced redox cycle for signal amplification. Through successively coating single-layer g-C3N4, CdS, and CuO onto the surface of an electrode, the modified electrode exhibited significantly enhanced PEC activity. The microstructure of the material was characterized by scanning electron microscopy (SEM), X-ray diffraction (XRD) and Fourier transform infrared spectroscopy (FT-IR). However, the boost in photocurrent could be noticeably suppressed due to the consumption of the hole-scavenging agent (reduced glutathione) by the added Hg2+. Under optimal conditions, we discovered that the photocurrent was linearly related to the Hg2+ concentration in the range of 5 pM-100 nM. The detection limit for Hg2+ was 0.84 pM. Moreover, the spPEC sensor demonstrated good performance for the detection of mercury ions in human urine and artificial saliva.
Introduction
It is well known that heavy metals are very harmful to the human body. Among such metals, mercury is the most widely used heavy metal, and it is used in the manufacture of compact fluorescent-lamp light bulbs, jewelry, thermometers, blood pressure monitors, and silver dental fillings [1]. Mercury ions (Hg2+) are the general form of mercury, and these can cause various diseases, such as hypertension, enteritis, bronchitis, and pulmonary edema [2]. At present, several methods have been utilized for the detection of mercury ions, including fluorescent and colorimetric sensors [3-5], surface-enhanced Raman scattering [6], and electrochemistry [7,8]. Although these methods have the advantages of high sensitivity and accuracy [9], they are limited by complex sample pretreatment and high operation cost [10]. Therefore, developing a mercury-ion detection method with a facile pretreatment process and low cost is imperative.
Photoelectrochemical (PEC) sensors demonstrate significant potential for analysis owing to advantages such as weak background noise, simple operation, high sensitivity, and quick response [11]. The self-powered photoelectrochemical (spPEC) sensor is an emerging photoelectrochemical detection method owing to its portability, lack of external power supply, and sustainability [12]. The spPEC sensor is primarily based on a three-electrode system, including the type of anode or cathode. The three-electrode scheme adopts a single working electrode as the signal source and a specific recognition platform [13]. However, the interaction between the optical electrode and the reducing agent or target molecule inevitably leads to deterioration of the performance of the PEC sensor [14]. Cao et al. [15] constructed an ingenious visible light-induced membraneless self-powered PEC biosensing platform by integrating a signal amplification strategy for bioanalysis. Owing to the simple enzyme-induced chemical redox cycle process, the signal could be effectively and repeatedly regenerated through coupled reduction and oxidation reactions, which were used for ultrasensitive PEC analysis. Therein, the selection of photoactive materials is the key to improving the detection performance of photoelectrochemical sensors.
Currently, g-C3N4 is considered the photoactive material with the highest potential owing to its advantages such as a suitable band gap, excellent stability, and low toxicity [16]. Nevertheless, the photocatalytic efficiency of single-phase g-C3N4 is low owing to the lack of surface redox active sites and the high recombination rate of electron-hole (e−-h+) pairs [17]. To effectively improve the photocatalytic activity of g-C3N4, it is generally combined with other photoactive materials with lower band gaps. As n-type narrow-band gap semiconductors, CdS quantum dots (QDs) are often used for the study of visible light-active materials [18]. However, CdS QDs are not only photocorrosive [19] but also exhibit a high electron-hole recombination rate, which can be resolved by combining them with other p-type photoactive materials. CuO nanoparticles (NPs) are p-type semiconductors [21] that have been widely employed as photocatalysts for different reactions owing to their high conductivity and low band gap [22]. Therefore, combining n-type CdS and p-type CuO to form composites [23] may serve as an effective strategy for enhancing stability and inhibiting electron-hole recombination.
Herein, we propose a spPEC assay for Hg2+ detection based on g-C3N4-CdS-CuO composites and a redox cycle signal amplification strategy. As presented in Scheme 1A, the g-C3N4-CdS-CuO material is prepared via layer-by-layer self-assembly as a self-powered system. A fluorine-doped tin oxide (FTO) electrode is modified using the composite material as a photoanode. As illustrated in Scheme 1B, under irradiation, glutathione (GSH) is oxidized by CuO to form oxidized glutathione (GSSG), generating a strong photocurrent owing to the GSH-oxidase and peroxidase-like activities of the CuO NPs [24,25]. In the presence of GSH reductase (GR), GSSG is reduced to GSH using reductive coenzyme II (NADPH) as a substrate, resulting in a redox cycle system [26]; signal amplification is achieved through the redox cycle. However, in the presence of Hg2+, GSH can bind Hg2+ based on the hard-soft acid-base (HSAB) theory [27], which disrupts the above redox cycle system and significantly reduces the photocurrent. Based on these results, we can conclude that the g-C3N4-CdS-CuO composite was successfully used to develop a spPEC sensor for the detection of Hg2+ in human urine and artificial saliva.
Scheme 1. Schematic diagram of the electrode preparation process (A) and of the redox cycle on the photoanode of the self-powered device and the test in actual samples (B).
Photoelectrochemical Measurement
Prior to the photoelectrochemical measurement, g-C3N4 and CdS were prepared. The detailed preparation processes of these composites are described in the Supporting Information. Firstly, the bare FTO was cleaned ultrasonically in water, ethanol and acetone in sequence and dried at 60 °C. Then, 20 µL of g-C3N4 suspension (1.0 mg mL−1) was dropped onto the FTO electrode with an active area of 0.25 cm2 and dried at room temperature. After that, 20 µL of CdS (5 mg mL−1) and 20 µL of CuO (1.0 mg mL−1) were dropped onto the FTO in the same manner, successively, to form layer-upon-layer electrode materials. A three-electrode system was used to measure the photocurrent on a CHI660E electrochemical workstation. The modified FTO was used as the working electrode, a Pt wire as the counter electrode, and the saturated Ag/AgCl electrode as the reference electrode.
GSH (15 mM, 1 mL), GR (170 U mg−1, 2 µL) and NADPH (10 mg mL−1, 250 µL) were added into 0.1 M phosphate buffer solution (PBS) (5 mL, pH 7.4) as a redox cycle to realize signal amplification. The excitation light was turned on every 10 s in the photoelectric measurement process, and the voltage was held constant at 0.0 V. The change in photocurrent was observed by adding different concentrations of mercury ions into the electrolyte.
Real Sample Processing
Urine samples [28]: 1 mL of 50% hydrochloric acid and 0.8 mL of 0.1 mol L−1 KBrO3/0.084 mol L−1 KBr were added to 10 mL urine samples; after reaction for 15 min, an appropriate amount of hydroxylamine hydrochloride (120 g L−1)/sodium chloride (120 g L−1) solution was added to the above mixture until the yellow color disappeared; samples were diluted to 50 mL with deionized water and diluted 10 times with PBS buffer before use.
Artificial saliva [29]: 0.6 mg mL−1 disodium hydrogen phosphate, 0.6 mg mL−1 anhydrous calcium chloride, 0.4 mg mL−1 potassium chloride, 0.4 mg mL−1 sodium chloride, 4 mg mL−1 mucin and 4 mg mL−1 urea were dissolved in 1 mL of deionized water. The pH was adjusted to 7.2, and the solution was stored in the refrigerator and diluted 10 times with PBS buffer before use.
Characterizations of Composite Material
Scanning electron microscopy (SEM), X-ray diffraction (XRD), and Fourier transform infrared spectroscopy (FT-IR) were used to characterize the prepared composites and to analyze the electrode preparation process. SEM images reveal that the bare FTO electrode has a uniform spongy layer (Figure 1A) [30]. After g-C3N4 is loaded onto FTO, the surface of the g-C3N4/FTO electrode presents a typical plate-like structure (Figure 1B), which results from aggregation of the g-C3N4 sheets [31]. A large number of CdS QD nanoparticles are distributed on the surface of g-C3N4 (Figure 1C), which is consistent with the literature [32]. The large surface area of g-C3N4 serves as an effective anchor for loading the CdS NPs. When the CdS QD-g-C3N4/FTO electrode is covered with CuO nanoparticles (Figure 1D), the CuO nanoparticles appear to be nearly spherical [33]. SEM (Figure 1E,F) and elemental mapping by energy dispersive X-ray spectroscopy (Figure 1I) reveal that C, N, Cd, S, Cu, and O exist in the g-C3N4-CdS-CuO electrode, indicating that the composite material is successfully loaded onto the FTO electrode.
The XRD analysis results for the crystal phase of the synthesized material are presented in Figure 1G. The XRD spectrum of g-C3N4 exhibits two evident diffraction peaks at 24.3° and 27.0°. These peaks represent the (100) and (002) planes of the graphite material, respectively, corresponding to the in-plane structure of the triazine ring and the layered stacking of the conjugated aromatic groups [34]. The diffraction peaks of CdS at 26.5°, 37.6°, and 43.7° correspond to the (002), (102), and (103) crystal planes, respectively, which correlate with the wurtzite phase standard card (JCPDS 80-0006) [35]. The diffraction peaks of the CuO nanoparticles at 35.6°, 38.7°, and 48.8°, in accordance with their monoclinic and crystalline nature, correspond to the (110), (111), and (200) crystal planes, respectively [33].
In the FT-IR spectra (Figure 1H), measured by direct detection (resolution: 4; number of scans: 8; detector: DTGS), g-C3N4 (curve a) exhibits strong absorption peaks in the range of 1200-1600 cm−1, which correspond to C-N heterocycles.
The absorption band at 806 cm−1 corresponds to the typical breathing mode of the tri-s-triazine ring, and the absorption peak near 2900-3400 cm−1 corresponds to the N-H stretching vibration [36]. The Cd-S band of CdS (curve b) demonstrates stretching and bending vibration absorption peaks at 631 cm−1, whereas the peak at 1630 cm−1 corresponds to the O-H stretching vibrations [37]. For the CuO NPs (curve c), the peak appearing at 701 cm−1 belongs to CuO and confirms the formation of the NPs [38]. Moreover, peaks corresponding to OH− and C-O stretching vibrations can be observed from 2800 to 3600 cm−1 [39]. These results confirm that the composite and modified electrodes were successfully synthesized.
Electrochemical Activity of the g-C3N4-CdS-CuO Electrode
The electrochemical properties of the bare and material-modified electrodes were studied, including the diffusion coefficient and heterogeneous electron-transfer rate constant. First, the electrochemical performances of the bare and modified electrodes were determined using cyclic voltammetry, in a 5 mM [Fe(CN)6]3−/4− solution containing 0.1 M KCl, by employing platinum as the counter electrode and saturated Ag/AgCl as the reference electrode (see the Supporting Information for details). As illustrated in Figure 2A,D, the current response increases gradually with an increase in the scanning rate and demonstrates a good linear relationship with the square root of the potential scanning rate, indicating that the redox reaction on the surface of the bare and modified electrodes is a diffusion-controlled process. The electroactive area of the bare and g-C3N4-CdS-CuO electrodes was calculated according to the Randles-Sevcik equation [40]:
Ip = 2.69 × 10^5 N^(3/2) A D^(1/2) C υ^(1/2),
where Ip denotes the maximum current (A), D denotes the diffusion coefficient (cm2/s), C is the [Fe(CN)6]3−/4− concentration (mol/cm3), N represents the number of transferred electrons, υ denotes the scanning rate (V/s), and A indicates the effective working area (cm2) of the modified electrode. The effective working area (A) can be calculated by a linear fit of Ip to υ1/2.
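The area estimate amounts to one linear fit, as sketched below. The 2.69 × 10^5 prefactor is the standard 25 °C constant of the Randles-Sevcik equation, and D ≈ 7.6 × 10−6 cm2 s−1 for ferricyanide is a typical literature value; the peak currents and scan rates in the example are placeholders, not the data behind Figure 2.

```python
import numpy as np

def effective_area_randles_sevcik(ip_amps, scan_rates_v_s, n, D_cm2_s, C_mol_cm3):
    """A (cm^2) from the slope of Ip vs sqrt(v), using
    Ip = 2.69e5 * n^(3/2) * A * D^(1/2) * C * v^(1/2) (25 C form)."""
    slope = np.polyfit(np.sqrt(scan_rates_v_s), ip_amps, 1)[0]
    return slope / (2.69e5 * n**1.5 * np.sqrt(D_cm2_s) * C_mol_cm3)

# Placeholder CV peak currents at three scan rates; 5 mM = 5e-6 mol/cm^3
A_eff = effective_area_randles_sevcik(
    np.array([1.1e-4, 1.5e-4, 1.9e-4]),   # peak currents (A)
    np.array([0.02, 0.05, 0.10]),         # scan rates (V/s)
    n=1, D_cm2_s=7.6e-6, C_mol_cm3=5.0e-6)
print(f"effective area ~ {A_eff:.3f} cm^2")
```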
The active area of the bare electrode was 53.72 cm2 (Figure 2B) and that of the g-C3N4-CdS-CuO electrode was 72.40 cm2 (Figure 2E). The larger specific surface area of the g-C3N4-CdS-CuO electrode provides more numerous active sites, which is conducive to the amplification of electrical signals. In addition, the electron transfer rate constants of the bare and g-C3N4-CdS-CuO electrodes can be calculated according to Nicholson's equation [41]: ψ = k0 [πDnFυ/(RT)]^(−1/2), where ψ is the kinetic parameter determined from the peak-to-peak separation (ΔEp), k0 indicates the heterogeneous electron transfer rate constant, D represents the diffusion coefficient of the electroactive material, F denotes the Faraday constant, R represents the molar gas constant, and T denotes the temperature. The slope of the plot of ψ against [πDnF/(RT)]^(−1/2) υ^(−1/2) corresponds to the heterogeneous electron transfer rate constant k0, which is 3.35 × 10−4 cm s−1 (Figure 2C) for the bare electrode and 2.72 × 10−4 cm s−1 (Figure 2F) for the g-C3N4-CdS-CuO electrode. This indicates that the electron-transfer kinetics of the g-C3N4-CdS-CuO-modified electrode were not significantly changed.
Feasibility of the Designed Strategy
To evaluate the spPEC properties of the fabricated CuO-CdS-g-C3N4/FTO electrode, the photocurrent responses of bare FTO, g-C3N4/FTO, CdS-g-C3N4/FTO, and CuO-CdS-g-C3N4/FTO were monitored (Figure 3A). The photocurrent of CuO-CdS-g-C3N4/FTO (column c) is better than that of g-C3N4/FTO (column a) and CdS-g-C3N4/FTO (column b), which is further verified by electrochemical impedance spectroscopy (EIS), recorded for the differently prepared electrodes in a ferricyanide/ferrocyanide mixed solution ([Fe(CN)6]3−/4−) (see the Supporting Information for details). As illustrated in Figure 3B, the resistance of g-C3N4/FTO (curve b) is larger than that of FTO (curve a), which may be attributed to the poor conductivity of g-C3N4. The resistance of CdS-g-C3N4/FTO (curve c) is lower than that of FTO, indicating that CdS promotes electron transfer in g-C3N4. Moreover, the resistance of CuO-CdS-g-C3N4/FTO (curve d) is less than that of CdS-g-C3N4/FTO, which indicates that CuO increases charge transfer at the electrode and further increases the photocurrent. The feasibility analysis further verifies this principle.
To verify the amplification caused by the redox cycle strategy, the developed spPEC sensor was studied using the photocurrent output. For comparison, we also carried out a comparison experiment of redox single signal amplification and double signal amplification; that is, under the same conditions, different concentrations of Hg2+ (0.5 nM, 10 nM) were added to a GSH solution (Figure 3C) and to a GSH, NADPH, and GR solution (Figure 3D), respectively.
As presented in Figure 3C, GSH is oxidized by CuO to GSSG to generate a strong photocurrent (column b vs. a), and single signal amplification was achieved. When Hg2+ was added from 0.5 nM to 10 nM, the photocurrent change ΔI increased from 0.97 µA to 2.27 µA (columns c and d). These results demonstrate that GSH could specifically recognize Hg2+, and the photocurrent was inversely proportional to the concentration of Hg2+. Evidently, when NADPH and GR are added (Figure 3D), the photocurrent reaches a maximum (column b vs. a), and double signal amplification is achieved. In the presence of GR, GSSG is reduced to GSH using NADPH as a substrate, thereby resulting in a self-powered redox cycle system. When Hg2+ was added from 0.5 nM to 10 nM, the photocurrent change ΔI increased from 2.49 µA to 3.59 µA (columns c and d), which followed the same trend as in Figure 3C, indicating that both methods were beneficial to PEC detection and that the detection effect of double signal amplification was better. Therefore, the GSH-NADPH-GR redox cycle can be used as a signal amplification strategy to improve the detection sensitivity for Hg2+.
Ultraviolet-visible diffuse reflectance spectroscopy (DRS) was used to study the band alignment of the composite materials. The band gaps of the three semiconductors were calculated according to the equation (αhν)^2 = C(hν − Eg), where α, hν, C, and Eg are the absorption coefficient, photon energy, a constant, and the band gap energy, respectively [42], by extrapolating the linear part of the (αhν)^2 vs. hν plot to its intercept on the x-axis, as shown in Figure S1A. The band gaps (Eg) of g-C3N4, CdS, and CuO are 2.84 eV, 2.10 eV, and 1.38 eV, respectively.
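The Tauc extrapolation used here is easy to automate, as the sketch below shows: fit the linear region of (αhν)² against hν and take the x-axis intercept as Eg. The synthetic spectrum and the fit window are illustrative choices, not the measured DRS data.

```python
import numpy as np

def tauc_band_gap(hv_ev, alpha, fit_lo, fit_hi):
    """Direct-gap Tauc analysis: fit the linear region of (alpha*hv)^2
    vs hv between fit_lo and fit_hi (eV); Eg is the x-axis intercept."""
    y = (alpha * hv_ev) ** 2
    sel = (hv_ev >= fit_lo) & (hv_ev <= fit_hi)
    m, b = np.polyfit(hv_ev[sel], y[sel], 1)
    return -b / m

# Synthetic check: a spectrum constructed with Eg = 2.84 eV is recovered
hv = np.linspace(2.0, 4.0, 200)
alpha = np.sqrt(np.clip(hv - 2.84, 0, None)) / hv
print(f"Eg ~ {tauc_band_gap(hv, alpha, 3.0, 3.8):.2f} eV")
```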
The conduction band (ECB) and valence band (EVB) positions were estimated using the following formulas: ECB = X − Ec − Eg/2 (Ec = 4.5 eV) and EVB = ECB + Eg, where X and Eg denote the geometric average of the absolute electronegativities of the atoms in the semiconductor and the band gap width of the semiconductor, respectively. The ECB values of g-C3N4, CdS, and CuO were 0.92 eV, −0.36 eV, and 0.62 eV, respectively, and the EVB values of g-C3N4, CdS, and CuO were 3.76 eV, 1.74 eV, and 2.0 eV, respectively. The band gaps of g-C3N4, CdS, and CuO reported in the literature are presented in Scheme 2A [43]. When g-C3N4, CdS, and CuO are in contact, the energy level differences between CdS and g-C3N4 and between CdS and CuO cause electrons to flow from CdS (high energy level) to g-C3N4 and CuO (low energy level). If the materials are described in terms of a Fermi-Dirac distribution, this electron transfer is referred to as Fermi level alignment. The redistribution of electrons between CdS and g-C3N4 and between CdS and CuO is assumed to trigger the downward and upward movement of the CdS, g-C3N4 and CuO band edges, respectively. Therefore, we can infer that the band alignment of the g-C3N4-CdS-CuO electrode demonstrates a stepped structure, as illustrated in Scheme 2B. The edges of the conduction and valence bands of these three materials increase in the following order: g-C3N4 < CdS < CuO; that is, the CdS layer was inserted between g-C3N4 and CuO to raise the conduction band edge of CuO, providing a higher driving force for the injection of excited-state electrons from outside the CuO layer. When g-C3N4-CdS-CuO was photoexcited under irradiation, the stepped structure at the band edges proved beneficial not only for electron injection in g-C3N4-CdS-CuO but also for hole recovery. Under irradiation, the g-C3N4-CdS-CuO material generated a photocurrent, and the GSH-NADPH-GR redox cycle acted as a self-powered system to further increase the photocurrent and achieve signal amplification (Figure S1B).
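The band-edge estimates above reduce to a two-line calculation, sketched below. The absolute electronegativity X ≈ 5.19 eV for CdS is an assumed literature value, used only to check that the quoted edges are reproduced.

```python
def band_edges(X_ev, Eg_ev, Ec_ev=4.5):
    """E_CB = X - Ec - Eg/2 and E_VB = E_CB + Eg, as used above.
    X is the geometric mean of the atoms' absolute electronegativities."""
    e_cb = X_ev - Ec_ev - Eg_ev / 2.0
    return e_cb, e_cb + Eg_ev

# CdS check: X ~ 5.19 eV and Eg = 2.10 eV give E_CB ~ -0.36 eV, E_VB ~ 1.74 eV
print(band_edges(5.19, 2.10))
```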
As illustrated in Figure S2B, when different concentrations of GR were added to the bath solution, the maximum photocurrent signal was obtained at 2.08 ng/µL (2 µL). Next, we determined the optimal GSH concentration (Figure S2C). The maximum signal was obtained at 15 mM, beyond which the signal slowly diminished; thus, 15 mM was selected as the optimal GSH concentration. Finally, we optimized the mass ratio of g-C3N4 to CuO. As presented in Figure S2D, the photocurrent signal of the material reaches a maximum at a g-C3N4:CuO ratio of 1:1. As shown in Figure S2E, the photocurrent response decreases in Na2CO3-NaHCO3 (0.1 M, pH 7.4) and Tris-HNO3 (0.1 M, pH 7.4) buffers, whereas no such interference is observed in PBS (0.1 M, pH 7.4). PBS (0.1 M, pH 7.4) was therefore used to prepare the bath solution.
Analytical Performance, Selectivity, and Stability
To verify the analytical performance of the proposed spPEC sensing system, different concentrations of Hg2+ were added and measured under the optimal conditions. As illustrated in Figure 4A, the photocurrent signal gradually weakens with increasing Hg2+ concentration. The response is linear with the logarithm of the Hg2+ concentration in the range 5 pM to 100 nM. The regression equation was Y = 0.90 logC(Hg2+) − 2.40 (R² = 0.993), and the detection limit, estimated at S/N = 3 [44], was 0.84 pM (Figure 4B). Compared with other Hg2+ detection methods, this sensor achieves picomolar sensitivity for Hg2+ without requiring aptamer immobilization or pretreatment steps (Table S1). We can therefore conclude that the detection efficiency of the sensor is significantly improved.
The PEC response did not change significantly within 200 s (Figure 4C), and the relative standard deviation (RSD) of the response change was less than 0.77%, confirming the stability of the sensor for the detection of 0.5 nM Hg2+. To assess the selectivity of the constructed spPEC sensor, we investigated its specificity for Hg2+ (0.5 nM). Among several potential interferents, including Mg2+, Mn2+, Cd2+, Fe3+, Zn2+, K+, and Na+ (50 nM), a much lower current value was recorded with Hg2+ than with the interfering species (Figure 4D), demonstrating the excellent selectivity of the spPEC method for Hg2+. To evaluate the repeatability of the proposed sensor, six replicate measurements of 0.5 nM Hg2+ were performed (Figure S3); the RSD of 0.53% indicates reasonable repeatability of the spPEC sensor for Hg2+ detection.
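A minimal sketch of how a log-linear calibration curve and an S/N = 3 detection limit of the kind reported above could be computed. The signal values, blank noise, and resulting numbers are hypothetical placeholders, not the measurements of this work, and the convention of inverting the calibration at Y = 3σ is one common choice among several.

```python
import numpy as np

# Hypothetical calibration: photocurrent change dI (uA) vs. log10 [Hg2+] (nM).
log_c = np.log10(np.array([0.005, 0.05, 0.5, 5.0, 50.0, 100.0]))
delta_I = np.array([0.33, 1.25, 2.28, 3.16, 4.12, 4.40])   # placeholder values

# Least-squares fit of the form Y = a*log10(C) + b (cf. Y = 0.90 logC - 2.40).
a, b = np.polyfit(log_c, delta_I, 1)
r2 = np.corrcoef(log_c, delta_I)[0, 1] ** 2
print(f"Y = {a:.2f} log10(C) + {b:.2f},  R^2 = {r2:.3f}")

# S/N = 3 limit of detection: the concentration whose expected signal equals
# three times the standard deviation of the blank response.
sd_blank = 0.05                       # hypothetical blank noise, uA
lod = 10 ** ((3 * sd_blank - b) / a)  # invert the calibration at Y = 3*sd
print(f"LOD ~ {lod * 1000:.2f} pM")   # nM -> pM
```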
Analysis of Real Samples
To investigate whether the proposed method can detect Hg2+ in complex samples, experiments were conducted on two matrices: urine from healthy volunteers and artificial saliva. The standard addition method was used to evaluate the applicability of the method to real samples, as presented in Table 3. For spiked Hg2+ concentrations in the range 0.1-10 nM, the recovery was between 93.56% and 103.02%, indicating that the spPEC sensor is applicable to Hg2+ detection in real samples. "-" indicates a value below the detection limit.
Conclusions
A new spPEC sensor for Hg2+ detection, based on g-C3N4-CdS-CuO composites and a redox-cycle signal-amplification strategy, was developed. The novel g-C3N4-CdS-CuO co-sensitized electrode, which exhibits a large surface area and an improved photoelectric response owing to its cascade band structure, was successfully used to detect the target ion in complex samples. In addition, the self-powered sensor demonstrated satisfactory sensitivity owing to double signal amplification (GSH and GSH-NADPH-GR), with a limit of detection of 0.84 pM for Hg2+. More importantly, the in vitro application of the GSH-NADPH-GR redox cycle demonstrated its potential in photoelectrochemical sensing and thus offers a new route for constructing novel sensors. The sensor also showed significant potential for detecting Hg2+ in human urine and artificial saliva samples. Furthermore, according to HSAB theory, the developed approach can be extended to other heavy-metal ions and may serve as a new analytical tool for specific, sensitive, and reliable analysis of heavy metals in clinical toxicology, food, and the environment. Table S1: Comparison of other analysis strategies for Hg2+ detection. References [10,25,34,45-50] are cited in the supplementary materials. Data Availability Statement: The data presented in this study are available on request from the corresponding author. Conflicts of Interest: The authors declare no conflict of interest.
6,875.4
2022-07-18T00:00:00.000
[ "Materials Science" ]
Digital Data Creation for Property Tax Management In India, municipal corporations face problems in property tax collection, a major reason being the lack of an accurate count of assessable properties under their jurisdiction. Moreover, property records are largely maintained through manual effort, which leads to data redundancy and shortfalls in tax collection. The study was carried out in the Hauz Khas Ward, South Delhi Municipal Corporation, Delhi. The purpose of this paper is to develop a spatial database for property tax management, including the capture of building footprints, roads, and land use (parks, paved areas, drains, etc.) and the demarcation of boundaries such as localities and slums, on the basis of a regular grid net with a cell size of 250 m by 250 m. In addition, a unique number called the BUID (Building Unique ID) is created for each building polygon to give every polygon a distinct identity. These data will support the collection, through ground survey, of property information relevant to the valuation parameters used in tax calculation. Introduction Property tax is a levy on property that the owner is required to pay to the governing authority of the area where the property exists. Property tax is one of the major sources of income with which a state or country covers its development expenditure (Pavi, 2011). Local and municipal authorities are responsible for the development of their administrative areas, and without capital this becomes a challenge for them. Beyond revenue, taxation can also be used as an urban-management tool to track land use, urban expansion, land markets, and property transactions (Kundu & Ghosh, 2011). For property taxation, location is paramount, as it determines the unit value used in tax calculation. The location of a property, as a spatial attribute, is crucial for many analyses related to property and urban management. Earlier, it was very difficult to identify the impact of location on land value because determining accurate locations was challenging (Stylianidis, 2009). Nowadays, GIS offers many techniques for analyzing the spatial attributes of any entity (Arbia, 1989; Tomlin, 1990; Huxhold, 1991; Star & Estes, 1990). GIS mapping is also of great significance in science, research, planning, and management. GIS is not a standalone method; to achieve the desired result, it requires hardware, software, and human effort (Arnof, 1995). GIS technologies are capable of handling large volumes of data from multiple sources and integrating them to produce information in a spatial context in the form of maps and models (Fedosin, 2014). Applying GIS-based municipal information tools can solve these data-management problems. Thus, GIS can be an extremely useful tool for municipal planning and decision-making involving the evaluation and assessment of assets. Not just worldwide but in India as well, many local and municipal governments use GIS as a decision-making tool to design and support development programmes (Farooqi, 2014). In Delhi, property tax is calculated on the basis of the unit-area method. Since 2004, property tax collection has moved to a self-assessment method, and since then the number of assessed properties has decreased (Simanti, 2013). Because assessment is self-reported, the MCD has information only for those properties that have paid property tax; properties that have never paid tax have not come to the notice of the MCD.
So the primary concern is to create a database of all the properties that come under the tax net of the MCD by capturing every building within the jurisdiction boundary. The unit rate for tax calculation depends on the category of the colony, which varies from A to H. The category of a colony depends on its infrastructure, such as the type of roads and drainage. For this evaluation, too, the municipal valuation committee currently depends on manually documented information; spatial digital data would be more feasible and reliable for this kind of decision-making. However, the municipality has not yet adopted GIS technology, and it is important to demonstrate how powerful a tool GIS can be for supporting planning and decision-making at the municipal level. This paper focuses on the creation of a spatial database consisting of multiple layers that can interact with each other in a GIS environment, store information from other sources, provide solutions through analyses that reflect ground conditions, and remain easy to access through any intranet or internet application. Current technologies make it possible to design an enterprise GIS for municipalities that combines customized attributes and geospatial data with spatial and non-spatial models and operations to create a service-oriented architecture built from geo web services (Samadzadegan, 2008). Literature Review A common problem is the growth of illegal land subdivision, which drives urban sprawl in the form of unauthorized or unapproved colonies. Studies have examined the reasons for this rapid growth of unauthorized colonies from the perspectives of buyers, colonizers, and politics, reflecting inefficiencies of governance in management, policy implementation, economic structure, corruption, and political influence. Recommendations have been made regarding regularization policy objectives and processes, implementation of schemes, penalty charges, property and vacant-land tax, levels of awareness, alternative scenarios for regularization, land registration, purchase and sale of land, building bye-law regulation, and identification of unauthorized colonies (Swain et al., 2016). It can be concluded that a major reason for the under-taxation of properties is the lack of a complete database holding both spatial and non-spatial information on properties, with the capability of transparent interaction with both government and citizens. Considering this issue, GIS is a modern technological tool for effective and efficient storage, retrieval, and manipulation of spatial and non-spatial data for deriving scientific, management, and policy-making information (Mathur et al., 2009). It can supply statistical and spatial records, create base maps, support the formulation of planning proposals, and act as a monitoring tool during the implementation phase of any planning scheme (Gupta et al., 2001). The integration of IT and GIS can modernize current government processes (Garg, 2015). GIS has been used successfully by several urban bodies to count the properties under their jurisdiction, resulting in improved revenue. Property records should be kept under a categorization based on usage, such as commercial, health, and education, and on exemptions, such as religious, charitable, or government properties, so that tax assessment can be studied conveniently. Another important issue is the gap between tax billing and collection.
Using CAD or ESRI-platform software, the data can be stored in one spatial database as feature datasets, accessible through structured query language for both the geographic features and their attributes. Objective The objective is the creation of a spatial database for property tax management of the ward. Data and Software Used For this study the primary concern is buildings, as they are the basis for property tax calculation. To allow good visual interpretation, WorldView-2 imagery has been used; its resolution of 0.5 m is sufficient for interpreting building edges and partitions. Visible features such as buildings, land cover, and drains can be captured from the image, but the demarcation of localities is not possible with it alone. For the interpretation of localities, blocks, and colonies, other sources such as Google and Eicher maps are very useful; for this reason, the Eicher map (2007) and Google Maps were used as references. ArcGIS (version 10.1), an ESRI-suite platform, was chosen for this task. Methodology To create the spatial database, all the required entities were captured and stored in a geodatabase, and additional attributes were populated using the secondary input data, i.e., Google and Eicher. The diagram in Figure 2 gives a high-level overview of the methodology. The major steps involved are: identification and digitization of features; creation of the BUID; demarcation of locality boundaries using secondary data sources; and updating of attributes using secondary data sources. Design of Geodatabase A geodatabase is a structured way to store multiple shapefiles of different feature types, whether point, line, or polygon (Idrizi, 2018). To cover the area, multiple layers were captured, and the shapefiles were categorized in the geodatabase on the basis of entity type. Attribute Design Before capturing the data, the required fields were created in each shapefile with appropriate names, data types, and field lengths. Identification and Digitization of Features The creation of spatial data is largely an image-interpretation job. Before any building was captured, a grid of 250 m by 250 m cells was created using the fishnet tool in ArcGIS. Each grid cell was assigned a unique number, as shown in the map in Figure 4, and the same grid number is recorded for every building that falls within the respective cell. This not only helps in tracking the digitization of the data but is also used in creating a unique ID for each building. Features were identified by the tone, texture, pattern, and association of entities on the satellite image, and every listed feature was captured into its specified shapefile. Creation of the BUID (Building Unique ID) For property taxation, the "property" is the foremost requirement, and properties are tied to buildings. On the ground, a property may correspond to one building, part of a building, or a cluster of buildings. To address this, every digitized building is assigned a BUID, a number unique to each building, so that building polygons can be mapped to real-world properties. Each building receives a reference number (RN) in the range 1 to N within its grid cell, where N is the number of buildings in the cell, with an upper limit of 99999. The RN is converted into a 5-digit building ID by padding it with leading zeros, and the 10-digit unique reference number (URN) is formed by combining the grid number and the building ID, as sketched below.
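A minimal sketch of the BUID composition described above, assuming the grid number is a 5-digit code as in the worked example that follows; the function name and inputs are illustrative, not part of the source GIS workflow.

```python
def make_buid(grid_no: int, reference_no: int) -> str:
    """Compose a 10-digit BUID from a grid number and an in-grid
    building reference number (RN), zero-padding the RN to 5 digits."""
    if not (1 <= reference_no <= 99999):
        raise ValueError("RN must lie between 1 and 99999 within a grid")
    building_id = f"{reference_no:05d}"   # e.g. 1 -> "00001", 114 -> "00114"
    return f"{grid_no:05d}{building_id}"  # grid number prefixed to building ID

# Worked example from the text: grid 30274, buildings numbered 1..114.
print(make_buid(30274, 1))    # -> "3027400001"
print(make_buid(30274, 114))  # -> "3027400114"
```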
For example, in grid no. 30274, the buildings were given reference numbers from 1 to 114, so the building IDs become "00001", "00010", "00100", and so on. The URN is a 10-digit number combining the grid number and the building ID, creating a reference number unique for each building throughout the dataset. Table 1 shows the design and creation of the BUID. Demarcation of Locality Boundaries Using Secondary Data Sources For the digitization of locality boundaries, the Eicher image was used. Eicher is essentially an image visualizing the major base-layer information. In general, colonies and localities are divided by natural or man-made boundaries such as roads, drains, buildings, or land-cover patterns; the Google image was also taken as a reference for this task. Updating Attributes Using Secondary Data Sources Information such as the names of major roads, major drains, localities, and major landmarks was added to the data using Eicher and Google. Results As a result, a digital profile of the Hauz Khas ward was created, containing visual information on buildings, roads, land use, and localities. Land-use and land-cover information is important for several planning and management activities concerned with the surface of the earth (Lillesand & Keifer, 1994). The land-use statistics of the study area are given in Table 2, and the percentage distribution of all classes is shown in Figure 7. The land-use distribution can characterize and predict human activity in terms of agricultural, housing, or institutional use. The Hauz Khas ward covers 3.14 sq. km and has 5620 buildings under its jurisdiction. The built-up area is 0.93 sq. km, i.e., 29% of the ward, which constitutes the tax net for the corporation. This approach can determine only the horizontal expansion of the built-up area; for the actual covered area it is essential to also capture the vertical built-up, which is best done through a door-to-door survey of the buildings. In addition, approximately 20% of the area is covered by roads and paved areas, and, Hauz Khas being a planned area, 23% of the area is dedicated to gardens and parks. Since the data were captured on a GIS platform, the database will act as a powerful tool for field work and for transferring multiple attributes of field data. For the evaluation and development of colonies, other infrastructure information, such as road and drain types and governing bodies, is vital; all such information is available with the department and should be converted to a digital platform and linked with the spatial database. This one-time task will be key to solving multiple municipal challenges. The database can store historic, current, and future data consecutively and can be utilized for analysis and development. Because there are many GIS platforms, it can be used and accessed through any IT-enabled GIS application. Declaration of Interest We have no conflict of interest to declare. Figure 2: High-level overview of the methodology to create the spatial database. Figure 7: Pie chart of the land-use profile of the Hauz Khas ward.
3,113.6
2020-07-12T00:00:00.000
[ "Computer Science", "Business" ]
Ranking Influential Nodes in Complex Networks with Information Entropy Method. The ranking of influential nodes in networks is of great significance. Influential nodes play an enormous role in the evolution of information dissemination, viral marketing, and public-opinion control. Multi-attribute ranking is an effective way to identify influential nodes; however, existing methods offer only limited improvements in performance because the diversity between different attributes is not properly considered. On the basis of the k-shell method, we propose an improved multi-attribute k-shell method that uses the iterative information produced during the decomposition process. Our work combines a sigmoid function with the iteration information to obtain a position index; the position attribute is obtained by combining the shell value with this index, while the local information of a node yields its neighbor attribute. Finally, the position and neighbor attributes are weighted by the method of information entropy weighting. Experimental simulations on six real networks, combined with the SIR model and other evaluation measures, fully verify the correctness and effectiveness of the proposed method. Introduction Multidimensional information flows rapidly through networks, and different nodes have different effects on information transmission [1], viral marketing [2], public-opinion guidance [3], and social recommendation [4,5] owing to their different influence. From the perspective of information transmission, different social networks exhibit different modes of dissemination because of their diverse functional focuses and user structures. From the perspective of marketing, providing rankings of influential users for different hobbies and groups can help new users quickly and effectively find relevant information sources of interest, enabling a smooth cold start. From the perspective of public-opinion guidance and control, the evolution of hot public-opinion events often involves forwarding and comments by users of different influence on different platforms; these simple operations often push public opinion in different directions. The influence of nodes can be evaluated from global structural information, such as betweenness centrality [6], closeness centrality [7], and Katz centrality [8]. These methods show good performance in node ranking; however, because of their O(n²) or higher computational complexity, they are not suitable for large-scale networks. Node influence can also be quantified by local information, such as degree centrality [9], semi-local centrality [10], hybrid degree centrality [11], average-shortest-path centrality [12], and the h-index [13]. Local measures are less accurate because they consider only local neighborhood information, and many heuristic algorithms [14,15] combine such local neighborhood information. Research based on random walks evaluates node influence through multiple iterative operations of high computational complexity, such as eigenvector centrality [16], PageRank [17], LeaderRank [18], and HITS [19]. Kitsak et al. [20] argue that the most influential nodes are those located at the core of the network: each node is assigned a fixed shell value after k-shell decomposition. However, k-shell decomposition tends to assign the same shell value to many nodes, so the influence of these nodes cannot be further distinguished.
On this basis, plenty of methods have been proposed to further improve the performance of the k-shell method. Zeng and Zhang [21] propose a mixed degree decomposition (MDD) method, which combines the residual degree and the depletion degree to update the nodes: in each step of the decomposition, nodes are removed on the basis of the mixed degree. However, its parameter λ is difficult to optimize. Liu et al. [22] propose an improved ranking method that generates a more discriminative ranking list, realized by calculating the shortest distance between the target node and the core nodes of the network, where the core nodes are the set with the highest shell value in the k-shell decomposition. The computational cost of this method is relatively high because of the shortest-distance calculations to the core nodes. Bae and Kim [23] propose the neighborhood coreness centrality, which measures the diffusion influence of a node by summing the shell values of all its neighbors. The influence of nodes with the same ks value can be further distinguished by using the iterative information of node removal to identify positional differences within the network; the degree decomposition method based on an iteration factor [24] improves the traditional method by using the iteration information and node degree during decomposition. In addition, several other node-ranking algorithms have been introduced to improve ranking performance. On the basis of the k-shell method, our work makes full use of the iterative information in the decomposition process and proposes an improved multi-attribute k-shell method. First, the iteration information is processed by a sigmoid function to obtain the position index. Then, the position attribute is computed by combining the shell value and the position index, and the local information of the node is used to obtain the neighbor attribute. Finally, the position attribute and neighbor attribute are weighted by information entropy. In the experiments, the SIR model, Kendall's coefficient, and the imprecision function are used to evaluate, respectively, the propagation capability under different propagation probabilities, the imprecision of the ranking, and the correlation coefficient under different propagation probabilities. Furthermore, we evaluate the ranking results of the proposed method by selecting seeds in the influence maximization problem and by measuring ranking uniqueness and distribution. The experimental results show that the proposed method can effectively distinguish the differences between attributes and significantly improves the identification of node influence. The remainder of the paper is organized as follows. We briefly review the definitions of the algorithms used for comparison in Section 2. In Section 3, our improved multi-attribute k-shell method is proposed and a worked example illustrates how the proposed measure works. In Section 4, we present the details of the data, the spreading model, and the evaluation measures used to assess performance. The experimental results are presented in Section 5. Finally, we draw conclusions in Section 6. Related Work Kitsak et al. proposed the k-shell method to determine the influence of nodes in a network. This method considers that the closer a node is to the center of the network, the higher its influence, and it uses node degree to rank the importance of nodes.
The following details show the decomposition principle of the k-shell method. First, remove from the network all nodes (and their edges) with degree 1. After this removal, some remaining nodes may again have degree 1; continue removing them until no node of degree 1 exists in the network. The removed nodes form one layer, assigned the shell value ks = 1. Following the same procedure, continue to remove nodes with degree 2 and repeat until no nodes remain in the network. As can be seen in Figure 1, nodes 8, 9, 10, 11, 12, 13, 14, 15, and 16 are assigned ks = 1, nodes 5, 6, and 7 are assigned ks = 2, and nodes 1, 2, 3, and 4 are assigned ks = 3. The k-shell decomposition method is suitable for large networks because of its low computational complexity. Its disadvantages are, however, obvious. First, most nodes are assigned the same ks value, so the importance of these nodes cannot be further distinguished. For example, node 8 has degree 3 and node 11 has degree 1; the influence of node 8 is clearly greater than that of node 11, yet they have the same ks value. Second, in the process of removing nodes, the edges already removed are not considered and only the residual influence is taken into account. Nodes with the same ks are thus implicitly treated as having the same number of edges toward the outer layers, which is plainly inconsistent with reality: node 2 has abundant first- and second-order neighbors in the outer layers while node 1 has none, yet the identical ks value assigned to them does not reflect this difference. Third, in a regular network the ks value of most nodes is 1, which is obviously unsuitable in that setting. The traditional k-shell decomposition updates nodes according to the residual degree of the remaining nodes only and completely ignores the depletion degree of the removed nodes. In the mixed degree decomposition (MDD) method, the decomposition is based on both the residual degree of the remaining nodes and the depletion degree of the removed nodes. For node i, the residual degree and the depletion degree are denoted k_i^r and k_i^e, respectively. In each step of the MDD method, node removal is determined by the mixed degree k_i^m = k_i^r + λ·k_i^e, where λ is an adjustable parameter between 0 and 1. When λ = 0, the MDD method reduces to the k-shell decomposition; when λ = 1, it is equivalent to degree centrality. Unlike in the traditional k-shell decomposition, in the MDD method the ks values of nodes can be fractional. The parameter λ is usually set to 0.7. The traditional k-shell decomposition can also be improved by using the shortest distance from the source node to the core nodes of the network. The propagation capability of nodes with the same ks value can then be further distinguished by a measure based on ks_max, ks_i, and the shortest distances to the core, where ks_max is the maximum ks value in the k-shell decomposition, ks_i is the ks value of node i, and S_c is the set of nodes whose ks value is maximal. Although this nonparametric method can distinguish nodes with the same ks value, its computational complexity is high because of the shortest-path computations to the core.
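A minimal sketch of the k-shell peeling procedure described at the start of this section, written against the networkx library (whose built-in core_number routine implements the same decomposition); the toy edge list is illustrative, not the Figure 1 graph.

```python
import networkx as nx

def k_shell(graph: nx.Graph) -> dict:
    """Assign each node its ks value by iteratively peeling minimum-degree shells."""
    g = graph.copy()
    shells = {}
    k = 1
    while g.number_of_nodes() > 0:
        # Keep stripping nodes of degree <= k until none remain at this level.
        stripped = True
        while stripped:
            low = [n for n, d in g.degree() if d <= k]
            stripped = bool(low)
            for n in low:
                shells[n] = k
            g.remove_nodes_from(low)
        k += 1
    return shells

# Illustrative toy graph: a triangle core with a pendant path.
g = nx.Graph([(1, 2), (2, 3), (1, 3), (3, 4), (4, 5)])
print(k_shell(g))           # nodes 4, 5 get ks = 1; the triangle gets ks = 2
print(nx.core_number(g))    # networkx's built-in decomposition agrees
```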
More seriously, if the network is not fully connected, the shortest-path distance between some node pairs cannot be obtained at all. If, instead, the iterative information is used to identify the positional differences of nodes in the network, the propagation ability of nodes with the same ks value can be further distinguished. The degree decomposition method based on an iteration factor improves the traditional method by using iteration information together with node degree during decomposition. It is worth noting that the degree is a local variable while the iteration factor is a global variable, so this method combines local and global factors to identify influential nodes more effectively. The iteration factor δ_i of node i is defined in terms of m and iter(i), where m is the total number of iterations in the k-shell decomposition and iter(i) is the iteration at which node i is removed; the influence of node i in this method is then computed from its degree and δ_i. The influence of a node will be great if it has many neighbors at the core of the network. Based on this assumption, the neighborhood coreness of a node is C_nc(i) = Σ_{j∈Γ(i)} ks_j, where Γ(i) is the neighbor set of node i. Recursively, the extended neighborhood coreness is C_nc+(i) = Σ_{j∈Γ(i)} C_nc(j). Materials and Methods Practice has shown that combining multiple attributes can further improve ranking. In recent years, researchers have combined different attributes and strategies to mine node influence, and the performance of these methods proves that considering multiple attributes is an effective strategy for evaluating node influence. There are many attribute-weighting methods, such as least-squares weighting and principal component analysis. Among the many attributes of a node, the position attribute plays a significant role in ranking, while the influence of a node also depends strongly on its neighbor attribute. Combining these two attributes with an attribute-weighting method is an effective strategy for further identifying node influence. In the k-shell decomposition, the number of iterations reveals important positional information and can further distinguish the positions of removed nodes: a node that survives more iterations before removal lies closer to the core of the network, while a node removed early lies closer to the edge. The number of iterations here refers to the global iteration count of the k-shell decomposition from start to finish. In this paper, a sigmoid function is applied to the number of iterations at which a node is deleted to define the node position index p(i). The relationship between the position index and the number of iterations is shown in Figure 2: as the number of iterations increases, the position index rises with a decreasing slope toward a critical value of 0.75. The position attribute of a node, denoted PN_p(i), is composed of the ks value of the node and the sum of the position indices of the node's neighbors, where ks_i is the ks value of the node in the k-shell decomposition. The position attribute alone cannot distinguish the influence of nodes in the same position. For example, all nodes on the edge of a network carry the same positional information, so their influence would appear identical; in fact, owing to differences in local structure, the influence of edge nodes in the same position can vary greatly.
The local attributes of nodes should therefore be used to further distinguish the influence of nodes with the same position attribute. If a node has more neighbors, it can have a greater impact on the network; furthermore, the influence of a node's neighbors is in turn affected by their own neighbors, so considering second-order neighbors improves the measurement of a node's influence. The neighbor attribute of a node, denoted PN_n(i), is computed from the node's second-order neighborhood, where k_l denotes the degree of node l. Both the position attribute and the local attribute play significant roles in identifying node influence; combining these two key attributes allows the influence of nodes to be calculated accurately and further improves ranking performance. Many traditional multi-attribute ranking methods treat all attribute weights as equal, and many weighting methods exist, such as the analytic hierarchy process, multi-objective programming, principal component analysis, and weighted least squares. The information-entropy weighting method is an excellent weighting scheme that has been validated in many applications, and our method adopts it to avoid the defects of traditional weighting. The combined score is PN(i) = w1·PN_p(i) + w2·PN_n(i), where w1 is the weight of the position attribute and w2 is the weight of the neighbor attribute. The information-entropy weighting proceeds as follows. First, the entropy of each attribute is calculated as H_i = −(1/ln n) Σ_j r_ij ln r_ij, where H_i is the entropy of the i-th attribute and r_ij is the normalized value of the i-th attribute of the j-th node (normalized so that Σ_j r_ij = 1). Because this method uses only the position and neighbor attributes, i is 1 or 2. Then the information entropy is used to compute the weights of the two attributes as w_i = (1 − H_i) / Σ_k (1 − H_k); a code sketch of this weighting step follows the worked example below. The feasibility of the proposed multi-attribute improved k-shell algorithm can be illustrated with a diagram: an undirected graph with 16 nodes and 20 edges. The PN value of each node is calculated according to the algorithm, and the values from the calculation are shown in Table 1. As the table shows, all nodes can be sorted in descending order of PN value. The PN values of nodes 11 and 12 are equal, so their importance cannot be distinguished. The PN values of nodes 1, 2, 3, and 4 form the first gradient, with node 2 having the largest PN value; its importance is apparent from the network in Figure 1. The PN values of nodes 5, 6, 7, and 8 lie in the second gradient; these are not outer edge nodes. The PN value of node 14 is larger than those of edge nodes 9, 11, 12, 13, and 16: as the figure shows, its direct connection to core node 4 enhances its influence. From the results in Table 1 it can be preliminarily concluded that the improved multi-attribute k-shell method is feasible to some extent.
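A minimal sketch of the entropy-weighting step, assuming the standard entropy-weight formulas given above (H_i = −(1/ln n) Σ_j r_ij ln r_ij and w_i ∝ 1 − H_i); the two toy attribute vectors are placeholders, not values from Table 1.

```python
import numpy as np

def entropy_weights(attrs: np.ndarray) -> np.ndarray:
    """attrs: shape (n_attributes, n_nodes). Returns one weight per attribute."""
    # Normalize each attribute so its values over all nodes sum to 1.
    r = attrs / attrs.sum(axis=1, keepdims=True)
    n = attrs.shape[1]
    # Entropy per attribute; 0 * log(0) is treated as 0.
    with np.errstate(divide="ignore", invalid="ignore"):
        h = -np.nansum(r * np.log(r), axis=1) / np.log(n)
    # Attributes with lower entropy (more dispersion) receive higher weight.
    return (1 - h) / np.sum(1 - h)

# Hypothetical position and neighbor attributes for 5 nodes.
pn_p = np.array([3.2, 3.0, 1.4, 1.1, 1.0])
pn_n = np.array([14.0, 12.0, 9.0, 4.0, 3.0])
w1, w2 = entropy_weights(np.vstack([pn_p, pn_n]))
pn = w1 * pn_p + w2 * pn_n          # combined PN score per node
print(w1, w2, pn)
```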
Data Description. We conducted experiments on six real networks drawn from disparate fields to evaluate the performance of the proposed centrality measure. CA-Hep [25] is a collaboration network from the Arxiv High Energy Physics Theory section. Netscience [25] is the co-authorship network of scientists working on network theory and experiments. Cond-Mat [25], from the e-print arXiv, covers scientific collaborations between authors of papers submitted to the Condensed Matter category. DNC Email [26] is the network of emails in the 2016 Democratic National Committee email leak; nodes correspond to persons in the dataset, and an edge is a mail exchange between two users. Ego-Twitter [27] contains Twitter user-user following information; a node represents a user, and an edge indicates that one user follows another. Route Views [28] is the network of autonomous systems of the Internet connected with each other; nodes are autonomous systems (AS), and edges denote communication. A brief overview of the networks is given in Table 2, where N and M are the numbers of nodes and edges, K and K_max denote the average and maximum degree, C and r are the clustering coefficient [29] and assortativity coefficient [30], and β_th and β are the epidemic threshold of the network and the infection probability used in our experiments. Spreading Model. To evaluate the lists produced by the centrality measures, we need the ranking produced by the real spreading process of the nodes. In a spreading process, the probability of accepting a message from another user depends on the user's influence [31], so the spreading efficiency of nodes is used to assess the ranked lists of influential nodes. There are many information-diffusion models, such as the SIR model, the Independent Cascade model, the Linear Threshold model, and some models independent of network topology [32,33]. In this paper, we employ the standard SIR model [34] to simulate the spreading process on networks and record the spreading efficiency of every node. In the SIR model, each node is in one of three states: susceptible, infected, or recovered. In detail, we set one node as infected and all the others as susceptible. At each step, every infected node infects each of its susceptible neighbors with infection probability β and then recovers with probability λ; we generally set λ = 1.0. Appropriate propagation probabilities must be chosen, since too small or too large a value makes the propagation either negligible or indiscriminate and prevents node influence from being distinguished. According to the heterogeneous mean-field method, the epidemic threshold of a network is β_th = ⟨k⟩ / (⟨k²⟩ − ⟨k⟩), where ⟨k⟩ and ⟨k²⟩ are the first and second moments of the degree distribution. The propagation probabilities are set just above the epidemic threshold. In each experiment, the dynamics of infection and recovery repeat until no infected nodes remain. The number of infected plus recovered nodes at time t, denoted F(t), indicates the influence of the initially infected node at time t. Obviously, F(t) increases with t and reaches a stable value at time t_c, the final time, so that F(t_c) represents the eventual influence of the initially infected node. To guarantee reliable results, all values are averaged over a large number of realizations.
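A minimal sketch of the discrete-time SIR simulation and the epidemic-threshold estimate described above (β_th = ⟨k⟩/(⟨k²⟩ − ⟨k⟩)); the seed node, β value, and toy graph are placeholders, and recovery uses λ = 1.0 as in the text.

```python
import random
import networkx as nx

def sir_spread(g: nx.Graph, seed, beta: float) -> int:
    """One SIR run with recovery probability 1.0; returns F(t_c),
    the final number of infected-plus-recovered nodes."""
    infected, recovered = {seed}, set()
    while infected:
        new_infected = set()
        for u in infected:
            for v in g.neighbors(u):
                if v not in recovered and v not in infected and v not in new_infected:
                    if random.random() < beta:
                        new_infected.add(v)
        recovered |= infected          # lambda = 1: every infected node recovers
        infected = new_infected
    return len(recovered)

def epidemic_threshold(g: nx.Graph) -> float:
    degs = [d for _, d in g.degree()]
    k1 = sum(degs) / len(degs)                   # <k>
    k2 = sum(d * d for d in degs) / len(degs)    # <k^2>
    return k1 / (k2 - k1)

g = nx.barabasi_albert_graph(1000, 3, seed=1)    # toy network
beta = 1.1 * epidemic_threshold(g)               # just above the threshold
runs = [sir_spread(g, seed=0, beta=beta) for _ in range(100)]
print(sum(runs) / len(runs))                     # average F(t_c) for node 0
```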
Evaluation. To evaluate the performance of the centrality measures, we use Kendall's coefficient τ [35] to measure the correlation between a topology-based ranking list and the one generated by the SIR model, the latter obtained from a large number of simulations. Let (x_i, y_i) and (x_j, y_j) be a randomly selected pair of joint ranks from two ranking lists, X and Y, respectively. If both x_i > x_j and y_i > y_j, or both x_i < x_j and y_i < y_j, the pair is said to be concordant. If x_i > x_j and y_i < y_j, or x_i < x_j and y_i > y_j, the pair is said to be discordant. If x_i = x_j or y_i = y_j, the pair is neither concordant nor discordant. Kendall's coefficient is defined as τ = (n_c − n_d) / (n(n − 1)/2), where n_c and n_d denote the numbers of concordant and discordant pairs and n is the list length. The value of τ lies between −1 and +1; the higher the value, the more accurate the ranked list a centrality measure can generate. The ideal case is τ = 1, where the list generated by the centrality measure is exactly the same as the list generated by the real spreading process. To measure the imprecision [17] of the methods in ranking influential nodes, we compare the propagation capability of the top-ranked nodes obtained by each method with that of the nodes having the largest propagation capability as measured by the SIR model. Kendall's correlation coefficient considers the correlation between the ranking of all nodes in the network and the ordering of their propagation capabilities, whereas the imprecision function evaluates the cumulative propagation capability of the top-ranked nodes at different proportions. The imprecision function is defined as ε(p) = 1 − [Σ_{i∈ϕ_m(p)} F_i(t_c)] / [Σ_{i∈ϕ_s(p)} F_i(t_c)], where p is the proportion of top-ranked nodes, ϕ_m(p) and ϕ_s(p) denote the top-p node sets selected by the method and by actual spreading capability, respectively, and F_i(t_c) is the propagation capability of node i. The value of ε lies between 0 and 1; the lower the value, the more precise the centrality measure is at ranking the propagation capability of nodes.
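A minimal sketch of these two evaluation measures, using scipy's kendalltau for τ and a direct implementation of ε(p) under the reconstruction above; the score arrays are hypothetical placeholders.

```python
import numpy as np
from scipy.stats import kendalltau

def imprecision(scores: np.ndarray, spread: np.ndarray, p: float) -> float:
    """epsilon(p) = 1 - (true spreading power of the method's top-p nodes)
    / (that of the truly best top-p nodes); lower is better."""
    k = max(1, int(round(p * len(scores))))
    top_method = np.argsort(scores)[::-1][:k]   # phi_m(p)
    top_true = np.argsort(spread)[::-1][:k]     # phi_s(p)
    return 1.0 - spread[top_method].sum() / spread[top_true].sum()

# Hypothetical centrality scores and SIR spreading capabilities F_i(t_c).
rng = np.random.default_rng(0)
spread = rng.random(200)
scores = spread + 0.1 * rng.random(200)         # noisy but correlated ranking

tau, _ = kendalltau(scores, spread)             # rank correlation
print(f"Kendall tau = {tau:.3f}")
print(f"epsilon(0.05) = {imprecision(scores, spread, 0.05):.3f}")
```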
Results and Discussion
In this chapter, the SIR model, Kendall's coefficient, and the imprecision function are used to verify the performance of the proposed method. Degree, Ks (k-shell), MDD (mixed degree decomposition), ksIF (k-shell iteration-factor method), and Cnc+ (extended neighborhood coreness centrality) were compared with the proposed multi-attribute PN method.
Evaluate the Spreading Capability of Nodes. This experiment varies the propagation probability at a fixed proportion of spreading sources and computes the propagation capability of the selected influential-node set; comparing the propagation capabilities of the algorithms at the same propagation probability contrasts the performance of the methods. The simulation set the proportion of spreading sources to 0.1 and the maximum time step of the infection process to 100, with the infection stopping once no node remained infected. The results were averaged over 500 independent runs and are shown in Figure 3, where the abscissa is the propagation probability and the ordinate is the propagation capability, expressed as a percentage. First, the propagation capability under all six methods improves as the propagation probability increases; the six methods perform similarly when the propagation probability is small and differ markedly when it is large. Specifically, in CA-Hep, DNC Email, and Cond-Mat, PN performs best at every propagation probability. In the Ego-Twitter dataset, the performance of PN and Cnc+ is significantly higher than that of the other four methods: when the propagation probability is 0.03, the propagation capability of Cnc+ is higher, and in all other cases that of PN is higher. In Netscience and Route Views, PN is generally better than the other methods, though Cnc+ outperforms PN in Netscience when the propagation probability is below 0.1; PN is not the best at several points in Route Views, but in most cases it has the highest propagation capability, as shown in Figure 3(f). Overall, the proposed PN performs better, with the remaining differences attributable to the specific network structures.
Evaluate the Imprecision of Ranking. This experiment fixes the propagation probability and varies the proportion of top-ranked nodes, computing the imprecision of ranking node influence; comparing the imprecision of the algorithms at the same proportion distinguishes their ranking performance. The simulation set the propagation probability of each network as shown in Table 2, with the proportion of top-ranked nodes varying from 0.01 to 0.2, a maximum infection time step of 100, and results averaged over 500 independent runs. The simulation results for the six networks are shown in Figure 4, where the abscissa is the proportion of top-ranked nodes considered and the ordinate is the imprecision of the ranking. In Figure 4(a), the methods PN, KsIF, and Cnc+ have low imprecision when p exceeds 0.02 but perform poorly at identifying the top 2% of nodes; the imprecision of PN is nonetheless much lower than the other two on the CA-Hep dataset. The imprecision of Degree for the top 1% of nodes is about 0.07 lower than that of PN, but Degree has the highest ε once p exceeds 0.04. In Netscience and Cond-Mat, Degree, k-shell, and MDD have higher values, and PN is lowest at most values of p, except that KsIF outperforms PN in Netscience for p between 0.02 and 0.04, and Cnc+ outperforms PN in Cond-Mat for p between 0.04 and 0.06. On DNC Email, k-shell and PN are better than the other methods, with k-shell holding a slight edge for p between 0.04 and 0.1 and between 0.13 and 0.15, although k-shell performs poorly on the top 3% of nodes. On Ego-Twitter and Route Views, it is obvious that the proposed method performs well and is more stable in all cases. From the simulation results on the six datasets, the PN method can not only precisely identify most of the top 1% to 4% of nodes but also rank the remaining nodes by influence stably.
Evaluate the Correlation Coefficients of the Methods. This experiment uses Kendall's coefficient τ to verify the correlation between the influence values and the propagation abilities of nodes under different propagation probabilities. The propagation probability ranges from 0.02 to 0.2 in the datasets CA-Hep, Cond-Mat, DNC Email, Ego-Twitter, and Route Views, and extends to 0.3 in Netscience because of its higher epidemic threshold. For DNC Email and Route Views we also include the simulation results at a propagation probability equal to the epidemic threshold. The maximum infection time step is 100 and the results are averaged over 500 independent runs. The simulation results are shown in Figure 5.
The abscissa is the propagation probability and the ordinate is the correlation coefficient; the epidemic threshold of each network is drawn as a vertical line. The correlation between the ranking results of the different methods and the real propagation abilities obtained from the SIR model varies across datasets, but the general trend is the same. When the propagation probability is small, the ranking correlation of Degree, k-shell, and MDD with the real propagation ability is higher than that of the other three methods, with crossover points roughly at the epidemic threshold β_th of each network. As seen in Figure 5, Degree has a high Kendall coefficient τ when β is less than 0.06 in CA-Hep and Cond-Mat and less than 0.07 in Netscience; the effect is less apparent in the other three networks because of their small epidemic thresholds. However, when β exceeds β_th, the methods that consider both degree and other aspects, such as the influence of neighbors on spreading ability, outperform the methods that mainly take degree into account. This is because a very small propagation probability confines the infection to a small region around the initially infected node, so degree is the decisive factor; as the propagation probability increases, infection becomes easier, and the extent of infection depends not only on the number of a node's neighbors but also on the spreading ability of those neighbors, and even of the neighbors' neighbors. Conversely, when the propagation probability grows far beyond β_th, infection becomes too easy, and the nodes in the core of the network or with high degree reach the largest infection scope; this is clearly visible in the networks with small epidemic thresholds, such as DNC Email and Route Views. When β > β_th, the PN method has the highest Kendall coefficient τ in CA-Hep, Netscience, and Cond-Mat, and it also performs best for β between 0.06 and 0.14 in DNC Email and between 0.06 and 0.18 in Route Views. In short, compared with the other five methods, the proposed PN method performs well over the appropriate range of propagation probabilities.
Evaluate the Performance of Selecting Seeds in the Influence Maximization Problem. This experiment verifies the reliability of PN when applied to the influence maximization problem. Influence maximization is widely used in real life; viral marketing, for example, is a typical application in which merchants or publicity departments promote new products or ideas. It aims to select a group of nodes in the network, called the seed set, as initial spreaders so that, under a given diffusion model, the spread through the network is as wide as possible. Many related studies exist. We select the top nodes from each ranking as seed nodes and measure the propagation range of the seed sets chosen by the different ranking methods, examining the spreading efficiency on the six real networks: CA-Hep, Netscience, Cond-Mat, DNC Email, Ego-Twitter, and Route Views. In the experiment, P, the proportion of seed nodes in the whole network, ranges from 0.01 to 0.05. We again used the SIR model to simulate the propagation process and measured the influence range by the number of users finally infected.
The propagation probability in the simulation is determined according to the epidemic threshold and is shown in Table 2; the maximum infection time step is 100 and the results are averaged over 500 independent runs. The simulation results are shown in Figure 6, where the abscissa is the proportion of seed nodes among all users and the ordinate is the propagation scope of the initial seed set, expressed as a percentage. The results show that the PN method performs best on Cond-Mat, DNC Email, Ego-Twitter, and Route Views. On CA-Hep, the seed set selected by PN infects a wider range than the others when P exceeds 0.02; for P of 0.02 and below, Degree is most effective and PN is better than KsIF and Cnc+. On Netscience, KsIF is best when P is 0.03 or 0.04, and the infection rate of the seeds selected by PN is almost the same as that of KsIF in the other cases.
Measure the Ranking Uniqueness and Distribution. This experiment verifies the monotonicity of the new method's ranking using Bae and Kim's monotonicity measure [23]. Because the k-shell method assigns the same ks value to many nodes, it cannot distinguish their differences in influence; the proposed method does better in this respect. Following Bae and Kim, the monotonicity of a ranking is expressed as M(R) = [1 − Σ_{r∈R} n_r (n_r − 1) / (n (n − 1))]², where R is the ranking list, n is its size, each element of the list is a set of nodes sharing the same ranking value, and n_r is the number of nodes in R with the same ranking position r. M(R) lies in [0, 1], and the higher the value, the stronger the uniqueness of the ranking: in the extreme cases, 1 means every node is assigned a different ranking value, whereas 0 means all nodes share the same rank. We examined the monotonicity of the different methods on the same six datasets; the results are shown in Table 3. The monotonicity of PN is clearly higher than that of Degree, ks, and MDD, and close to those of KsIF and Cnc+. To clarify the ranking distributions of the different measures, the complementary cumulative distribution function (CCDF) is plotted. By the CCDF principle, if many nodes share the same rank the CCDF plot drops rapidly; otherwise it declines slowly. Figure 7 shows the ranking distributions in the six networks. The lines for Degree, ks, and MDD drop sharply, as seen on the left side of each graph, especially when the number of nodes in the dataset is large. For the PN method, the ranking distribution improves slightly over Cnc+ and KsIF on DNC Email and Route Views. The curves of KsIF and PN on Netscience essentially overlap and fall more slowly than that of Cnc+. On CA-Hep and Cond-Mat, Cnc+ again underperforms KsIF and PN; KsIF and PN are equally good at identifying the influential nodes, while in the latter part of the ranking the KsIF curve declines more steeply than the PN curve, indicating that PN distinguishes nodes' spreading capability better than Cnc+. On Ego-Twitter, PN outperforms Cnc+. Overall, PN performs well in most networks.
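A minimal sketch of the monotonicity measure M(R) as reconstructed above (the Bae-Kim form, which squares one minus the tied-pair fraction); the example rankings are toy placeholders.

```python
from collections import Counter

def monotonicity(ranks) -> float:
    """M(R) = (1 - sum_r n_r*(n_r - 1) / (n*(n - 1)))^2, where n_r counts
    the nodes sharing rank value r; 1 = all ranks unique, 0 = all tied."""
    n = len(ranks)
    if n < 2:
        return 1.0
    tied = sum(c * (c - 1) for c in Counter(ranks).values())
    return (1.0 - tied / (n * (n - 1))) ** 2

print(monotonicity([1, 2, 3, 4, 5]))      # 1.0: fully unique ranking
print(monotonicity([1, 1, 1, 1, 1]))      # 0.0: completely tied
print(monotonicity([1, 1, 2, 3, 4]))      # value strictly between 0 and 1
```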
Conclusions
On the basis of the k-shell method, we proposed a new multi-attribute ranking method based on node position and neighborhood that makes full use of the iterative information in the decomposition process. First, the iteration information is processed by a sigmoid function to obtain the position index, and the position attribute is obtained by combining the shell value with this index. Then, the local information of the node is used to obtain the neighbor attribute. Furthermore, the position and neighbor attributes are weighted by the method of information entropy weighting. Finally, we evaluated the propagation capability under different propagation probabilities, the imprecision at different top-ranked proportions, the correlation coefficient under different propagation probabilities, and the propagation capability of the selected seed nodes in the influence maximization problem; we also verified the good performance of our method in distinguishing node influence. Compared with the k-shell decomposition and its improved variants, the method proposed in this paper performed better. Through simulation experiments, it was found that the PN method can fully exploit the iterative information of the decomposition process and the influence of neighbors to further distinguish nodes with the same ks value. Experiments with the SIR model, Kendall's coefficient, and the imprecision function fully verified the correctness and effectiveness of the proposed method. In a word, the effectiveness of the proposed method in identifying influential nodes was verified by various forms of experiments.
Data Availability
Previously reported network datasets were used to support this study and are available at http://networkrepository.com
8,565.2
2020-06-08T00:00:00.000
[ "Computer Science", "Mathematics" ]
The Musite open-source framework for phosphorylation-site prediction Background With the rapid accumulation of phosphoproteomics data, phosphorylation-site prediction is becoming an increasingly active research area. More than a dozen phosphorylation-site prediction tools have been released in the past decade. However, there is currently no open-source framework specifically designed for phosphorylation-site prediction except Musite. Results Here we present the Musite open-source framework for building applications that perform machine learning based phosphorylation-site prediction. Musite was implemented with six modules loosely coupled with each other. With its well-designed Java application programming interface (API), Musite can be easily extended to integrate various sources of biological evidence for phosphorylation-site prediction. Conclusions Released under the GNU GPL open source license, Musite provides an open and extensible framework for phosphorylation-site prediction. The software with its source code is available at http://musite.sourceforge.net. Background Protein phosphorylation is one of the most studied post-translational modifications (PTM). It is an important regulatory event, playing essential roles in many aspects of cell life [1]. Protein phosphorylation data have increased rapidly in the past decade, thanks to high-throughput studies [2-6] and web resources [7-15]. However, experimental identification of phosphorylation sites is still an expensive and time-consuming task. Computational prediction of phosphorylation sites provides a useful alternative approach for phosphorylation-site identification and has hence become an active research area. We have developed Musite, an open-source software tool for large-scale prediction of both general and kinase-specific phosphorylation sites. In [16], we introduced its methodology and validated it by applying it to several proteomes and comparing it to other tools. In this paper, we describe the underlying open-source software framework of Musite. Machine learning based framework In Musite, we formulated the problem of phosphorylation-site prediction as a machine learning problem, specifically a binary classification problem: protein residues can be classified into two categories, phosphorylation sites and non-phosphorylation sites. The Musite framework is an implementation for solving this machine learning problem and consists of three main procedures: data collection, feature extraction, and training/prediction, as shown in Figure 1. The data collection procedure is designed for collecting phosphorylation data from various sources and converting them into formats that Musite accepts. For example, phosphorylation sites can be easily retrieved from UniProt/Swiss-Prot using the utility for converting UniProt XML to Musite XML. Musite also has functionalities for merging phosphorylation annotations from different sources and building non-redundant datasets. The feature extraction procedure extracts features from the collected data to characterize patterns of phosphorylation sites. To date, three sets of features have been integrated in Musite, namely k-nearest neighbor (KNN) scores, protein disorder scores, and amino acid (AA) frequencies [16]. We are currently in the process of evaluating more features, such as solvent accessibility and secondary structure information.
A feature will be integrated after evaluation if it meets the following criteria: 1) it is relevant to the biological context, i.e., it is related to protein phosphorylation; 2) it helps to improve prediction performance; and 3) it is computationally feasible for large-scale predictions. In the training and prediction procedure, binary classifiers are trained using the features extracted from the training data. The trained classifiers can then be used to predict phosphorylation sites in users' query protein sequences. We have integrated a support vector machine (SVM) classifier, and we also implemented a bootstrap classifier and a boosting classifier, which were combined into a bootstrap aggregating procedure. Other utilities are also provided to assist phosphorylation-site prediction and analysis in Musite, including prediction model management, customized model training, specificity estimation, filtering, statistics, etc. Java API Musite is written in Java and released under the GNU GPL open-source license. Figure 2 illustrates its overall architecture and application programming interface (API). The Musite architecture contains six modules that are loosely coupled with each other. The data module defines the core data structures in Musite, representing protein information, posttranslational modifications, prediction models, prediction results, etc. This module also contains several utility classes; for example, PTMAnnotationUtil is a class for annotating phosphorylation and other PTM sites in proteins. All other modules depend on this module. Figure 1. Overall Musite framework. The data collection procedure collects phosphorylation data from various sources. The feature extraction procedure extracts multiple features for prediction model training. The training/prediction procedure trains prediction models and makes predictions for new query sequences. All procedures are extensible; for example, more data sources can be added and more types of features can be extracted. The classifier module, feature extraction module, and training and prediction module form the core machine learning modules. The classifier module contains a set of binary classifiers. We have incorporated SVMlight [32] and implemented a bootstrap aggregating procedure [33] to handle the highly unbalanced large training datasets. A developer can easily define or incorporate a new classifier, such as a random forest [34], by implementing the BinaryClassifier interface, and integrate it into the bootstrap aggregating procedure and/or the machine learning framework. Figure 2. Musite architecture. The architecture contains six modules loosely coupled with each other. The data module defines the core data structure. The classifier module contains a set of binary classifiers. The feature extraction module defines the features to be extracted from data and used in classifiers. The training and prediction module defines the machine learning procedure. The I/O module provides utilities for reading/writing different types of files and converting between them. The UI module provides users with a biologist-friendly GUI. The feature extraction module defines the features to be extracted from data and used in a classifier. Currently, we have integrated three sets of features: k-nearest neighbor (KNN) scores, protein disorder scores, and amino acid (AA) frequencies.
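To make the extension point concrete, here is a minimal sketch of what plugging a new classifier into this framework might look like. The interface shape shown (train/classify methods returning a score in [0, 1]) is an assumption for illustration only; Musite's actual API should be consulted for the real signatures. A random forest [34] would plug in exactly the same way as the toy nearest-centroid classifier used here.

// Hypothetical shape of Musite's BinaryClassifier extension point;
// the real interface ships with Musite and may differ.
interface BinaryClassifier {
    void train(double[][] features, boolean[] labels);
    double classify(double[] featureVector); // score in [0, 1]
}

// Toy member classifier: score a residue by its distance to the
// centroids of the positive (phosphorylated) and negative classes.
class CentroidClassifier implements BinaryClassifier {
    private double[] pos; // mean feature vector of positive sites
    private double[] neg; // mean feature vector of negative sites

    @Override
    public void train(double[][] x, boolean[] y) {
        int d = x[0].length;
        int nPos = 0, nNeg = 0;
        pos = new double[d];
        neg = new double[d];
        for (int i = 0; i < x.length; i++) {
            double[] target = y[i] ? pos : neg;
            for (int j = 0; j < d; j++) target[j] += x[i][j];
            if (y[i]) nPos++; else nNeg++;
        }
        for (int j = 0; j < d; j++) { pos[j] /= nPos; neg[j] /= nNeg; }
    }

    @Override
    public double classify(double[] v) {
        double dPos = distance(v, pos);
        double dNeg = distance(v, neg);
        // Closer to the positive centroid means a score nearer 1;
        // a stringency threshold would be applied downstream.
        return dNeg / (dPos + dNeg + 1e-12);
    }

    private static double distance(double[] a, double[] b) {
        double s = 0.0;
        for (int j = 0; j < a.length; j++) {
            double diff = a[j] - b[j];
            s += diff * diff;
        }
        return Math.sqrt(s);
    }
}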
A developer can incorporate new features into Musite simply by implementing the FeatureExtractor interface. The training and prediction module defines the machine learning procedure by utilizing the classifier module and the feature extraction module. MusiteTrain defines the model training procedure: it extracts features from training sequences and trains prediction models using the extracted features. MusiteClassify defines the prediction procedure: it extracts features from query sequences (new sequences submitted by users for prediction) and makes predictions using the extracted features based on a prediction model trained by MusiteTrain. The I/O module provides utilities for reading/writing different types of files and converting between them. Currently, Musite supports the FASTA, Musite XML, UniProt XML, and Phospho.ELM report file formats. To support other file formats, a developer can implement the Reader and/or Writer interfaces. The MusiteIOUtil class provides uniform methods to access the different Readers/Writers. The UI module provides users with a biologist-friendly graphical user interface (GUI) to most functionalities in Musite. With the GUI, one can easily perform phosphorylation-site prediction, result analysis, stringency adjustment, customized model training, prediction model management, file format conversion, etc. Results and discussion Open framework for phosphorylation-site prediction With its extensible API, Musite provides an open framework for phosphorylation-site prediction. With the rapidly accumulating data and better understanding of protein phosphorylation over time, more evidence needs to be integrated for better prediction performance. Musite provides a platform for the integration of increasingly diverse data and knowledge on protein phosphorylation. For example, phosphoproteomics data are scattered among different web resources; Musite can already read different file formats and can be easily extended to support more. It also provides the functionality of merging phosphorylation annotations from different sources. Moreover, Musite makes it simpler to incorporate more biological evidence as features for phosphorylation-site prediction. With the open framework of Musite, it is possible to build a community-based tool, which could integrate the expertise of various people from diverse areas and coordinate a joint effort towards better prediction and understanding of protein phosphorylation. Better utilization of the large magnitude of data One challenge in phosphorylation-site prediction, as in many other bioinformatics problems, is how to handle the sheer magnitude of the data. There are two issues: 1) how to utilize the large amount of experimentally verified phosphorylation data; and 2) how to perform proteome-scale applications. Musite's I/O module and its associated XML format provide a solution for collecting phosphorylation data from various sources. The bootstrap aggregating (bagging) procedure [33] implemented in Musite provides a solution for the utilization of large datasets in machine learning applications, and it also addresses the problem of highly unbalanced positive and negative data. This procedure samples representative small datasets from large unbalanced datasets for training prediction models and aggregates prediction results from multiple classifiers for more robust performance.
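The bagging procedure just described lends itself to a compact sketch: draw balanced bootstrap samples from the unbalanced training set, train one member classifier per sample, and average the member scores. This illustrates the general technique under the hypothetical BinaryClassifier interface from the previous sketch; it is not Musite's actual implementation, in which the members would typically be SVM classifiers.

import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Bootstrap aggregating over balanced samples of an unbalanced dataset.
class BaggingClassifier implements BinaryClassifier {
    private final List<BinaryClassifier> members = new ArrayList<>();
    private final int rounds;
    private final Random rng = new Random(42);

    BaggingClassifier(int rounds) { this.rounds = rounds; }

    @Override
    public void train(double[][] x, boolean[] y) {
        // Split indices into positive (phosphorylated) and negative sites.
        List<Integer> pos = new ArrayList<>();
        List<Integer> neg = new ArrayList<>();
        for (int i = 0; i < y.length; i++) {
            if (y[i]) pos.add(i); else neg.add(i);
        }
        for (int r = 0; r < rounds; r++) {
            // Balanced sample: all positives plus an equal number of
            // negatives drawn at random with replacement.
            int n = pos.size();
            double[][] xs = new double[2 * n][];
            boolean[] ys = new boolean[2 * n];
            for (int i = 0; i < n; i++) {
                xs[i] = x[pos.get(i)];
                ys[i] = true;
                xs[n + i] = x[neg.get(rng.nextInt(neg.size()))];
                ys[n + i] = false;
            }
            BinaryClassifier member = new CentroidClassifier(); // any BinaryClassifier
            member.train(xs, ys);
            members.add(member);
        }
    }

    @Override
    public double classify(double[] v) {
        // Aggregate by averaging member scores for more robust output.
        double sum = 0.0;
        for (BinaryClassifier c : members) sum += c.classify(v);
        return sum / members.size();
    }
}

The number of rounds trades training cost against the variance of the aggregated score; because each sample is balanced, no single member is swamped by the negative majority.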
At the application level, Musite, as standalone software, can perform phosphorylation-site prediction up to the proteome scale on personal computers in an automated fashion. Moreover, users can utilize the customized model training utility to take advantage of the latest phosphorylation data. Integration into experimental design Musite is a good candidate for integration into experimental studies because of its two unique utilities: customized model training and continuous stringency adjustment. Customized model training enables users to train their own models from any phosphorylation dataset. Continuous stringency adjustment makes it possible for users to choose any stringency to meet their requirements for confidence level. Using these two utilities, Musite can be integrated into experiments for more efficient identification of phosphorylation sites. For example, in a hypothesis-driven experiment, after an experimental biologist obtains some initial phosphorylation data, he or she could make proteome-scale predictions based on the initial dataset, and then focus on predictions above a certain confidence level (using stringency adjustment) in the next stage; after each stage, the prediction model can be refined (using customized model training) based on the new data to guide the experiments in the following stages. Using such an iterative design combining experimental and computational approaches, phosphorylation-site identification could be much more efficient and less expensive. Case study: training an AIDS-specific model using Musite A common limitation of all phosphorylation-site prediction tools is that prediction results cannot be correlated with different cell states or tissue conditions. Similarly, prediction results based on the pre-trained models released in Musite 1.0 may only indicate whether a query site can be phosphorylated or not, but have no implications for cell types or states. However, it is possible to train tissue- or disease-specific models using the customized model training utility in Musite. In this section, a sample recipe for training an AIDS-specific phosphoserine/threonine prediction model is provided as follows. • Step 1: retrieve AIDS-specific protein data from UniProt. i. Open http://www.uniprot.org and search for keyword:aids AND reviewed:yes. ii. Download the complete data in XML format. Conclusions With the rapidly accumulating phosphoproteomics data in recent years, the area of phosphorylation-site prediction has attracted increasingly more interest and attention. Musite provides an open-source framework for easy integration of new evidence and/or methodologies for better phosphorylation-site prediction. By providing an open resource for protein phosphorylation research, we hope that Musite could eventually evolve into a joint effort of the phosphorylation research community, for both bioinformaticians and biologists. We are also expanding the scope of Musite to predict other types of PTM sites, such as acetylation, ubiquitination, protein methylation, and tyrosine sulfation.
2,546.6
2010-12-21T00:00:00.000
[ "Biology", "Computer Science" ]
An Elementary Proof of Jin's Theorem with a Bound We present a short proof of Jin's theorem which is entirely elementary, in the sense that no use is made of nonstandard analysis, ergodic theory, measure theory, ultrafilters, or other advanced tools. The proof yields the explicit bound 1/c, where c = BD(A) · BD(B), on the number of shifts of A + B that are needed to cover a thick set. Introduction Many results in combinatorial number theory are about structural properties of sets of integers that depend only on their largeness as given by the density. A beautiful result in this area was proved in 2000 by Renling Jin with the tools of nonstandard analysis. • Jin's theorem: If A, B ⊆ N have positive upper Banach density then their sumset A + B is piecewise syndetic. (The upper Banach density is a refinement of the usual upper asymptotic density; a set is piecewise syndetic if it has "bounded gaps" in suitable arbitrarily long intervals. See §1 below for precise definitions.) Many researchers showed interest in Jin's result but were not comfortable with nonstandard analysis. In answer to that, a few years later Jin himself [7] directly translated his nonstandard proof into "standard" terms, but unfortunately in this way a "certain degree of intuition and motivation are lost" [cit.]. In 2006, with the use of ergodic theory, V. Bergelson, H. Furstenberg and B. Weiss [3] found a completely different proof of that result, and improved on it by showing that the sumset A + B is in fact piecewise Bohr, a stronger property than piecewise syndeticity. (This result was subsequently extended by J.T. Griesmer [5] to cases where one of the summands has null upper Banach density.) Again by means of ergodic theory, V. Bergelson, M. Beiglböck and A. Fish [1] elaborated a shorter proof and extended the validity of the theorem to the general framework of countable amenable groups. In 2010, M. Beiglböck [2] found another proof by using ultrafilters plus a bit of measure theory. Recently, this author [4] applied nonstandard methods to show several properties of difference sets, and gave yet another proof of Jin's result in which an explicit bound is found on the number of shifts of A + B that are needed to cover arbitrarily large intervals. In this paper we present a short proof of Jin's theorem in the strengthened version mentioned above, which is entirely elementary and hence easily accessible also to non-specialists. (Here, "elementary" means that no use is made of nonstandard analysis, measure theory, ergodic theory, ultrafilters, or any other advanced tool.) The underlying intuitions are close to some of the nonstandard arguments in [4], but of course the formalization is different. We have taken care to keep the exposition in this paper self-contained. Jin's theorem with a bound Let us start by recalling three important structural notions for sets of integers. • A is thick if it covers intervals of arbitrary length, i.e. if for every k ∈ N there exists an interval I = [y + 1, y + k] of length k such that I ⊆ A. • A is syndetic if it has bounded gaps, i.e. if there exists k such that A ∩ I ≠ ∅ for every interval I of length k. • A is piecewise syndetic if it covers arbitrarily large intervals of a syndetic set, i.e. if A = B ∩ C where B is thick and C is syndetic. Remark that thickness and syndeticity are dual notions, in the sense that A is thick if and only if its complement Aᶜ is not syndetic.
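For quick reference, the three notions and the duality remark can be written out formally; the following LaTeX block merely restates the bullet definitions above.

\begin{align*}
A \text{ is thick} &\iff \forall k \in \mathbb{N} \;\exists y :\; [y+1,\, y+k] \subseteq A;\\
A \text{ is syndetic} &\iff \exists k \in \mathbb{N} \;\forall I \text{ with } |I| = k :\; A \cap I \neq \emptyset;\\
A \text{ is piecewise syndetic} &\iff A = B \cap C \text{ for some thick } B \text{ and syndetic } C.
\end{align*}

The duality is immediate: an interval of length k contained in A is exactly an interval of length k disjoint from Aᶜ, so A fails to be thick precisely when every sufficiently long interval meets Aᶜ, i.e. when Aᶜ is syndetic.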
Recall the difference set and the sumset of two sets of integers A, B ⊆ Z: A − B = {a − b | a ∈ A, b ∈ B} and A + B = {a + b | a ∈ A, b ∈ B}. With obvious notation, we shall simply write A − z to indicate the shift A − {z}. It is easily shown that A is syndetic if and only if A + F = Z for a suitable finite set F; and that A is piecewise syndetic if and only if A + F is thick for a suitable finite set F. Let us now turn to concepts of largeness for sets of integers. A familiar notion in number theory is that of the upper asymptotic density d(A) of a set of natural numbers A ⊆ N, which is defined as the limit superior of the relative densities of its initial segments: d(A) = lim sup_{n→∞} |A ∩ [1, n]| / n. The upper Banach density refines the density d to sets of integers by considering arbitrary intervals instead of just initial intervals. Definition 1.2. The upper Banach density of a set A ⊆ Z is defined as: BD(A) = lim_{n→∞} max_{x∈Z} |A ∩ [x + 1, x + n]| / n. One needs to check that such a limit always exists, and in fact BD(A) = inf_{n∈N} a_n/n, where a_n = max_{x∈Z} |A ∩ [x + 1, x + n]|. In consequence, if BD(A) ≥ α then a_n ≥ α · n for every n. Trivially, BD(A) ≥ d(A) for every A ⊆ N. The following properties, which follow directly from the definitions, will be used in the sequel: • BD(A) = 1 if and only if A is thick. • The family of sets with null upper Banach density is closed under finite unions. Remark that the upper Banach density is not additive, i.e. there exist disjoint sets A, B such that BD(A ∪ B) < BD(A) + BD(B). However, for families of shifts of a given set, additivity holds: if the shifts A + x_1, . . . , A + x_k are pairwise disjoint, then BD((A + x_1) ∪ . . . ∪ (A + x_k)) = k · BD(A). A consequence of the above property is the following: if BD(A) > 1/2 then A − A = Z. To see this, notice that for every z one has that A ∩ (A + z) = ∅ would imply BD(A ∪ (A + z)) = 2 · BD(A) > 1, which is impossible; hence z ∈ A − A. An important general property of difference sets is given by the following well-known result. Proposition 1.3. If BD(A) > 0 then A − A is syndetic. Proof. If by contradiction A − A were not syndetic, then its complement (A − A)ᶜ would include a thick set T. By the property of thickness, it is not hard to construct an infinite set X = {x_1 < x_2 < . . .} such that X − X ⊆ T. Since (X − X) ∩ (A − A) = ∅, the sets in the family {A − x_i | i ∈ N} are pairwise disjoint. But this is not possible, because if k ∈ N is such that 1/k < BD(A), then one would get BD((A − x_1) ∪ . . . ∪ (A − x_k)) = k · BD(A) > 1, a contradiction. Remark that the above property does not extend to the general case A − B; e.g., it is not hard to find thick sets A, B, C such that their complements Aᶜ, Bᶜ, Cᶜ are thick as well, and A − B ⊆ C. However, A − B is necessarily thick in case the two sets are "sufficiently dense". Precisely, the following holds: Proposition 1.4. Let 0 ≤ α < 1. If sup_n n · (a_n/n − α) = +∞ and BD(B) ≥ 1 − α, then A − B is thick. Proof. For every k ∈ N, we show that an interval of length k is included in A − B. Let N be such that N · (a_N/N − α) > k, and pick an interval I of length N with a_N = |A ∩ I|. For every i = 1, . . . , k we have that: |(A − i) ∩ I| ≥ |A ∩ I| − i ≥ a_N − k > α · N. Now recall that BD(B) = inf_{n∈N} b_n/n, where b_n = max_{x∈Z} |B ∩ [x + 1, x + n]|; so by the hypothesis we can find an interval J of length N such that |B ∩ J| ≥ (1 − α) · N. Finally, pick t such that t + J = I. We claim that the interval [t + 1, t + k] ⊆ A − B. To show this, notice that for every i = 1, . . . , k we have that |(A − i) ∩ I| + |(B + t) ∩ I| > α · N + (1 − α) · N = |I|. So, (A − i) ∩ (B + t) ∩ I ≠ ∅, and we can find a ∈ A and b ∈ B such that a − i = b + t, and hence t + i ∈ A − B. Notice that BD(A) > α implies sup_n n · (a_n/n − α) = +∞, which in turn implies BD(A) ≥ α; however, neither implication can be reversed. The fact that A − B is thick whenever BD(A) + BD(B) > 1 was first proved by M. Beiglböck, V. Bergelson and A. Fish in [1]; in fact, their proof actually shows the (slightly) stronger property given in the previous proposition. What has been presented so far is just a hint of the rich combinatorial structure of sumsets and sets of differences, whose investigation seems still far from complete (see, e.g., the monographs [8, 9]).
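The additivity of BD on pairwise disjoint shifts is the engine behind every numerical bound in the paper; the underlying count, a standard argument, is worth recording once in LaTeX for clarity:

\[
(A+x_1),\dots,(A+x_k) \text{ pairwise disjoint} \;\Longrightarrow\;
k \cdot \mathrm{BD}(A) \;=\; \mathrm{BD}\Big(\bigcup_{i=1}^{k}(A+x_i)\Big) \;\le\; 1,
\qquad\text{hence } k \;\le\; \frac{1}{\mathrm{BD}(A)} .
\]

This is precisely how the bound 1/BD(A) appears in the proof of Proposition 1.3, and essentially the same count, applied to the sets E_n of relative density approaching αβ, yields the bound |F| ≤ 1/αβ in the main theorem below.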
In this area, a relevant contribution was given in 2000 by Renling Jin. By working in the setting of the hypernatural numbers of nonstandard analysis, he showed that the appropriate structural property to be considered for differences of dense sets is piecewise syndeticity. Theorem 1.5 (Jin, 2000). If A, B ⊆ N have positive upper Banach density then their sumset A + B is piecewise syndetic. As mentioned in the introduction, Jin's theorem has recently been re-proved by other means and with some improvements. Here we shall present an elementary proof of the following strengthening from [4], where an explicit bound is given on the number of shifts that are needed to cover a thick set. (Recall that a set C is piecewise syndetic if and only if C + F is thick for a suitable finite set F.) Theorem 1.6. Let A, B ⊆ Z with BD(A) = α > 0 and BD(B) = β > 0. Then there exists a finite set F with |F| ≤ 1/(α · β) such that (A − B) + F is thick; in particular, A − B is piecewise syndetic. (Since BD(−B) = BD(B), the same conclusion holds for the sumset A + B.) The elementary proof By the definition of upper Banach density, we can pick two sequences of integers ⟨x_n | n ∈ N⟩ and ⟨y_n | n ∈ N⟩ such that, if we put: A_n = A ∩ [x_n + 1, x_n + n²] and B_n = B ∩ [y_n + 1, y_n + n], then lim_{n→∞} |A_n|/n² = α and lim_{n→∞} |B_n|/n = β. As the first step in the proof, for every n we shall find a suitable shift of A_n that meets B_n on a set whose relative density approaches αβ as n goes to infinity. To this end, we need the following lemma from [4]. (In order to keep this paper self-contained, we re-prove it here.) Lemma 2.1. Let C ⊆ [1, n²] and D ⊆ [1, n]. Then there exists z ∈ [0, n² − 1] such that |(C − z) ∩ D| ≥ |C| · |D|/n² − |D|/n. Proof. Let ϑ : N → {0, 1} be the characteristic function of C. Then: Σ_{z=0}^{n²−1} |(C − z) ∩ D| = Σ_{d∈D} Σ_{z=0}^{n²−1} ϑ(d + z) = Σ_{d∈D} |C ∩ [d, d + n² − 1]|. By the pigeonhole principle, there must be at least one z such that |(C − z) ∩ D| ≥ (1/n²) · Σ_{d∈D} |C ∩ [d, d + n² − 1]|. Finally, notice that for every d ∈ D one has |C ∩ [d, d + n² − 1]| ≥ |C| − (d − 1) ≥ |C| − n, because C ⊆ [1, n²] and d ≤ n; the desired inequality follows. For every n, apply the above lemma where C = A_n − x_n ⊆ [1, n²] and D = B_n − y_n ⊆ [1, n]. (Notice that |C| = |A_n| and |D| = |B_n|.) Then we can pick a suitable sequence ⟨z_n | n ∈ N⟩ such that |(A_n − x_n − z_n) ∩ (B_n − y_n)| ≥ |A_n| · |B_n|/n² − |B_n|/n. Now put: E_n = (A_n − x_n − z_n) ∩ (B_n − y_n), so that |E_n|/n ≥ (|A_n|/n²) · (|B_n|/n) − |B_n|/n². Passing the above inequality to the limit, we obtain that lim_{n→∞} |E_n|/n ≥ αβ. In the second part of the proof we shall use the fact that any sequence of subsets of [1, n] whose relative densities have a positive limit as n approaches infinity satisfies a relevant combinatorial property about the corresponding difference sets. Precisely: Lemma 2.2. If lim_{n→∞} |E_n|/n = γ > 0 then there exists a finite set F with |F| ≤ 1/γ such that the following property is satisfied: (⋆) for every m ∈ N, the set {n ∈ N | [1, m] ⊆ (E_n − E_n) + F} has positive upper Banach density. Proof. We inductively define a finite increasing sequence σ = ⟨m_i | i = 1, . . . , k⟩. Set m_1 = 0. If property (⋆) is satisfied by F = {0}, then put σ = ⟨m_1⟩, and stop. Otherwise, let m_2 ∈ N be the least counterexample. So, Γ_1 = {n | [1, m_2 − 1] ⊆ (E_n − E_n) + m_1} has positive upper Banach density, but Λ_1 = {n ∈ Γ_1 | m_2 ∈ (E_n − E_n) + m_1} has null upper Banach density. Notice that for every n ∈ Γ_1 \ Λ_1 one has (E_n + m_1) ∩ (E_n + m_2) = ∅. If for every m ∈ N the set of all n ∈ Γ_1 such that [1, m] ⊆ ⋃_{i=1}^{2} ((E_n − E_n) + m_i) has positive upper Banach density, then put σ = ⟨m_1, m_2⟩ and stop. Otherwise, let m_3 ∈ N be the least counterexample. So, Γ_2 = {n ∈ Γ_1 | [1, m_3 − 1] ⊆ ⋃_{i=1}^{2} ((E_n − E_n) + m_i)} has positive upper Banach density, but Λ_2 = {n ∈ Γ_2 | m_3 ∈ ⋃_{i=1}^{2} ((E_n − E_n) + m_i)} has null upper Banach density. Notice that for every n ∈ Γ_2 \ Λ_2 one has (E_n + m_i) ∩ (E_n + m_3) = ∅ for i = 1, 2. Iterate this process. We claim that we must stop at a step k ≤ 1/γ. To see this, we show that whenever m_1 < . . . < m_k are defined, one necessarily has k ≤ 1/γ. This is trivial for k = 1, so let us assume k ≥ 2. Notice that Λ_1 ∪ . . . ∪ Λ_k has null upper Banach density, and so X = Γ_k \ (Λ_1 ∪ . . . ∪ Λ_k) has positive upper Banach density (and hence it is infinite). Since X ⊆ Γ_i \ Λ_i for all i, for every N ∈ X the sets in the family {E_N + m_i | i = 1, . . . , k} are pairwise disjoint.
Now, every E_N + m_i ⊆ [1, N + m_k]; since these k sets are pairwise disjoint and each has |E_N| elements, we obtain the following inequality: k · |E_N| ≤ N + m_k. By taking limits as N ∈ X approaches infinity (so that |E_N|/N → γ), one gets the desired inequality γ ≤ 1/k. Finally observe that, by the definition of σ = ⟨m_i | i = 1, . . . , k⟩, for every m ∈ N the set of those n with [1, m] ⊆ ⋃_{i=1}^{k} ((E_n − E_n) + m_i) has positive upper Banach density. This shows that property (⋆) is fulfilled by setting F = {m_1, . . . , m_k}. By the above lemma applied with γ = αβ > 0, we can pick a finite set F with |F| ≤ 1/αβ such that property (⋆) is satisfied by the sets E_n = (A_n − x_n − z_n) ∩ (B_n − y_n). So, for every m there exists n (actually "densely many" n) such that: [1, m] ⊆ (E_n − E_n) + F ⊆ (A_n − x_n − z_n) − (B_n − y_n) + F ⊆ A − B + F − t_n, and hence [t_n + 1, t_n + m] ⊆ A − B + F, where we denoted t_n = x_n − y_n + z_n. This shows that A − B + F is thick, and the proof is complete. Open problems (1) Lemma 2.2 states a much stronger property than is needed for the proof of the main theorem. Can one derive a stronger result by a full use of that lemma? (2) We saw in §1 that A − B is thick whenever BD(A) + BD(B) > 1. Can one combine this fact with arguments similar to those presented in this paper, and prove interesting structural properties about A − B, A − C, B − C under the assumption that BD(A) + BD(B) + BD(C) > 1?
3,528
2012-09-25T00:00:00.000
[ "Mathematics" ]
Factors of Ethical Decline and Religious Measures to Overcome Them This article aims to investigate the causes and factors involved in the moral decline of youth and its solution in the light of Islamic teachings. It is generally observed that students studying at various education levels, especially undergraduate students of universities, are seriously lacking in moral values, and their ethical conduct is not satisfactory. Research shows that the factors behind bad conduct include less emphasis on the moral conduct of students at the secondary and higher secondary levels, a lack of parents' interest in building their children's attitudes, and minimal effort by religious scholars and government institutions in building morals. To preserve a nation's culture and values, character building is one of the most important tasks, and it must be based on religious norms and principles. A positive character-building approach promotes behavioral, social, emotional and moral competence and raises spirituality, self-efficacy, loyalty, confidence, identity and prosocial norms. The findings of this research highlight that there is a correlation between the ethical conduct of students and their university environment, which brings changes in human personality and character development. Social illnesses and moral deterioration are rooted in the behavior of students, which they usually acquire from the home and university environment. Introduction Moral qualities and generosity were once common among the young generation. During adolescence, young people learn politeness, thoughtfulness, goodness, respectability and trustworthiness, and they learn to exercise sufficient self-control. With time, however, youth face tremendous difficulties. Things are no longer as they used to be taught: respect and love for senior citizens and established authorities are giving way to fornication, distress, insecurity and instability, among other ills. It is fundamental to understand the risks capable of reducing moral standards in the public. Ethical features are a main part of every individual's life, and ethical deterioration is a primary source of social disorder. Moral Deterioration According to the Oxford Dictionary, moral degradation means: "The state or process of being or becoming degenerate, decline or deterioration" 1 In Merriam-Webster, moral degradation is defined as: "The act of treating someone or something poorly and without respect" 2 To summarize, moral deterioration refers to the state or act of behaving in a bad manner. The youth should be adorned with self-control and good conduct in order to remove the causes of moral deterioration and ethical degradation. Moral Deterioration in the Universities In any society, moral values are of great importance, and their deterioration causes the decay of the whole society or nation. It is observed that violence, self-centredness, dishonesty, bullying and rudeness have increased immensely in societies worldwide. Research shows that youth care less and less about moral and ethical values. Family-based moral values are essential for developing ethics in society and for enhancing family integration, as Anangisye (2010) and Mngarah (2008) have concluded. Causes of Moral Deterioration: There are many causes of ethical deterioration in the universities, of which the following play key roles in the moral deterioration of society: Family Background The parents' behavior and guidance play an effective role in the development of youth.
The parents' attitude and actions affect moral values and help improve the conduct of youth in society. In today's world, parents are more concerned about their youngsters' scholastic accomplishments than about social, customary and moral qualities. Ted (2006) explains that: "Parents should become the child's idols, best friends, and motivators who can implant moral principles in children, but nowadays parents are blamed for the moral decay of our youth when they fail to play their roles well. Workaholic parents, parents who fail to discipline their youngsters, as well as parents who give less emotional attention to their children, are the causes of raising spoilt teens." 4 Mostly, working parents do not pay attention to their youth, which results in the moral deterioration of youth. Such parents bear responsibility for the moral defects and ethical problems among youth. In the present situation, parental control over youngsters is diminishing day by day, which results in the self-governance of youth. Subsequently, the youth are redirected and guided by the outside environment: nightclubs, bad friends and undesirable dialogues. Therefore, the communication gap between parents and children is a basic reason for moral decay and degradation. Communication Gap: Ethics are based on family, society, culture, social characteristics, and so on. A person who cannot learn moral values in youth will not have the capacity to learn them over an entire lifetime. Working parents neglect their children and pay less attention to training them because of their jobs and work duties. The communication gap arises from parents' tough job schedules and other work activities, while the youth depend on guidance to meet the general standards of the public. 5 Especially owing to certain social elements, many families are ethically backward, and it is generally assumed that moral values have declined over recent decades. Influence of Mass Media: Usually, parents neglect their youngsters' moral education while the youth are directly exposed to television, print media, websites, online sources, cinema and other electronic media. These sources have become an essential part of forming the identity of youth and adolescents. Addler, Baranowski, Kalin, Lallatin and Smiley (2003) explain that: "There are some factors that influence the negative and unethical behavior in youth; numerous media, pictures, motion pictures and recreations that portray brutality as well as regularly extol it." 6 Defective Education System: The absence of an appropriate instructional framework is a basic deficiency, for it is from such a framework that our future generations could properly learn distinctive qualities. The defective education system is a cause of ethical deterioration, because in this system students merely obtain a degree rather than improve their personality. Absence of ethical values in the syllabus: The present training framework has not been designed to incorporate ethical features. In the current framework, teaching is understood only as improving knowledge, which is a mistaken conception of education. The current academic curriculum is unable to keep our good qualities in view for the coming generations. Lacking extracurricular activities based on moral values: Academic and non-academic activities are an excellent way to instil moral values. As matters stand, however, the extra curriculum is limited to sports, artistic rallies, quizzes and mere entertainment, dancing or outings.
Earlier, events, contests, intellectual work, discussions of social issues and other social exercises were held, but these have been discontinued, even though such activities could give rise to our country's future heroes. Instructors and students now tend to avoid the exchange of views, yet when such things are discussed at an open level, people become more focused on understanding the real significance and ethics of the dialogue. 7 Factors Affecting the Universities The following factors affect the moral values of students in the universities: Financial Changes: The economy is moving at high speed towards materialism, and skilled persons socially promote different products and additional skills in various forms. In such a poor environment, it is not surprising that the youth fail to maintain high aims and personal standards. Low practical standards, an abnormal state of mind and immoral behavior are the reasons for students' improper actions on matters of principle. Friends and Groups: Friends and groups are considered an integral part of a life of sound quality. In schools and universities, every individual has their own groups and friends with whom they share their thoughts and arguments. The point is that friends hold significant moral values that may overlap with those of others, and the peer group is an excellent way to learn ethical values. The Desire for Self-Presentation: Adolescence is a period when youth start to examine issues, develop viewpoints, argue, question the present state of affairs, and struggle to establish an identity of their own. The individual's desire for self-display frequently leads an adolescent towards wrongdoing and indecent conduct such as smoking, drinking, using abusive words, quarrelling, sexual abuse, and so forth. Political Parties: The majority of political parties tend to capture the young generation in order to use them to serve their particular interests. All the political parties engage in diverse movements with assurances of business and various enticements and promises. Motivated by these, the young generation is adversely influenced, and society is polluted by an unstable situation. There are students' associations in schools and colleges that draw students further into unethical activities, for example, attacking instructors if they fail in examinations, or staging strikes to press pointless demands, resulting in the crumbling of the ethical and moral norms of the public. Social Factors Numerous approaches can help explain students' moral conduct. One less recognized theory was developed by Bandura (1991a, 1991b), called the "Social Cognitive Theory of Moral Thought and Action." This theory can help explain the numerous factors that influence youth actions. In setting out this theory, it is clarified that action is influenced by cognitive and environmental components. Psychological processes incorporate intellectual and moral developmental levels, reactions to circumstances, and social standards (1991a). These elements are supposed to shape our interpretation of our surroundings. In other words, we construct our own particular meaning of the world depending upon our capacity to reason and our past experiences. From this theory it seems obvious that the emotional associations between people are a critical variable in determining conduct. In this manner, one can infer that when people trust that conditions are personal and inviting, they behave more competently in social terms.
Affiliations are basic, but so are social principles and role modelling. The theory suggests that these two factors likewise strongly influence conduct, since individuals look to others for direction on proper behavior. 8 These models can be readily applied to higher educational settings. One can see that, as far as scholarly honesty is concerned, the cognitive variables of moral reasoning and insight are critical. These individual factors influence youths' impressions of their surroundings, including their sense of belonging and its variations. As indicated by this theory, social standards, and the degree to which students are immersed in them, additionally determine how they behave at school. Social standards may be set, to some extent, through the implementation of codes of conduct and the actions of faculty, administrators and student-affairs professionals. 9 Environmental Factors The environment influences the people, or users, who interact with it; this is especially true of youngsters, who are vulnerable to the influences of their surroundings. The physical condition of the preschool setting affects a student's conduct. As indicated by Isbell and Exelby, the environment is a strong indicator of how youngsters should respond or act. Room arrangement and materials determine where students concentrate. Youngsters learn through investigation and examination of their environment. A learning environment must be purposeful and stimulating, a place where a youngster can learn. Most qualities of the physical setting can affect the way its inhabitants behave and their emotional wellness. This includes the connection with the environment, which helps youngsters in their development. How youngsters interact with their environment and its inhabitants should inform the design of objects and activities in the space. In a recent study, children were placed in rooms of different ceiling heights and colours, and it was found that adjusting the ceiling fundamentally changed the children's behavior; this investigation shows that changes made to the physical setting may affect children's attitudes. It was also found that changes to the general organization of the room had a positive effect, and thus positive behavior among the youth increased. (Behavior: actions that show academic integrity or dishonesty. Environmental factors: social norms, possible sanctions, codes of conduct, the classroom and college environment, role modelling and respect for others.) Cultural Factors Some studies have tried to show the effect of social factors on the development of people's identity, while recognizing that people's identity can in turn influence the development of the same social order. At the individual level, social status affects personality through regulatory mental processes (for example, movement towards encouragement), and identity characteristics matter more than cultural assets; for example, neuroticism may be lower while extraversion is higher. At the level of culture, overall identity characteristics or moral values (what every culture or country deems worthy of respect) have been linked with certain social or founding factors, for example, overall national identity, geographical boundaries and freedom. At the general level, national investigations have in some studies been linked with such constraints.
It is suggested in some studies that such peoples have difficulty with troubling financial arrangements yet continue to pursue their goals, while other studies point to national success. Similarly, the general identity profile of social aggregates and ethical deterioration has been linked with geographical constraints (for example, latitude); moreover, broadly speaking, those who live farther from the equator are reported to be more open and less dishonest. For a long time, various investigations have shown that geographically and historically comparable societies are associated with similar identity profiles: Western society is usually more open to engaging in functions and actions, while African and Asian societies tend to be more agreeable and more honest. Another way to consider the identity of a culture is to consider national character, which is widely shared and is influenced by the environment and national affairs; at the same time, such characterizations correlate only imperfectly with aggregate identity measures. 10 Teacher's Role in the Development of Moral Values Teachers play an important role in the development of students' morals and values. Oladipo (2009) and Paul (1993) explain that: "Scholars, especially educational psychologists, acknowledge that children are born with certain innate endowments but not ethics or morals. This implies that children learn morals from the immediate environments in which they are brought up. It is the duty of teachers, parents, and other close relatives to ensure that children are morally upright." 11 For the improvement of student behavior, it is necessary to use layered schemes and models divided into five phases. These phases not only show progress but also help bring about change in student attitudes in the universities. They form a dynamic and persistent process built around a student-focused approach that can adjust and change as the students' conduct needs change. These five stages are helpful in bringing about change. Tracking The way to enhance students' conduct begins with the students and the instructor in the classroom. The instructors require a steady strategy to watch for and identify conduct in the classroom, and then an easy-to-use approach to record and report it. Frank B. Cross (2011) explains that: "Students can also easily generate reasons to act unethically, but these are all patently selfish rather than noble, short term rather than long term, shallow rather than thoughtful, and overall unattractive and often repellent. But there are many positive reasons to act ethically as well. Students seem to have a good sense that ethical actions breed trust and that trust in a society is a key to economic growth" 12 Through the gathering of significant data, educators can determine the specifics of problem conduct and the conditions that provoke and reinforce it in the classroom. Tracking behaviors, their frequency and their intensity over time provides the building blocks of an effective social arrangement. Summative Once a classroom tracking strategy is in place, an overall picture of behavior can be drawn from the information and data. Successful conduct administration is advanced by keeping the behavioral information in a managed database. Everyday interactions and observations tracked by the educator can be classified and condensed for use by instructors, executives, and other school personnel.
This makes it straightforward for educators and managers to access the data and, by examining the reporting, to identify key markers, for example, occurrences of unusual episodes, particular social objectives, and rates of attention. 13 Examine After tracking and collecting the social information and data, the instructor can build a clear profile of a student's conduct. With a sensible, evidence-based picture of the issues within the setting of the classroom, the educator can better comprehend the students and the basic issues that result in poor conduct. The educator can then analyze the data with the specific goal of developing and introducing the positive support that can be used to encourage the youth to improve their conduct. 14 Intervene Using information-rich analyses of students' attitudes, organized management can generate important data. Brown-Chidsey and Steege (2005) research intervening in the ethical behavior of students and describe it thus: "An invaluable behavioral solution system endows teachers with strategies that allow them to meet and address behavioral challenges and facilitate and sustain improvement in their students. This process includes meaningful interventions, appropriate behaviors, and adaptive skills that students can use to replace poor behaviors." 15 A behavioral data management framework thus equips teachers with methodologies that enable them to meet and address behavioral challenges and to encourage and support change in their students' attitudes. This procedure incorporates meaningful interventions, proper practices, and adaptable abilities that students can use to replace poor habits. These arrangements should be kept consistent with school conventions; however, it is quite important to understand and customize them to the students' needs so that they are sufficient for their requirements. Converse: To make the most of the classroom features collected through the daily schedule, the information must be managed in a way that promotes positive change. The basic framework strengthens methods of discussion that apply in all classes, schools, universities and homes. It provides the means for keeping everyone on the same page, and it allows intellectuals, educators, teachers, managers and guardians to manage and improve behavior. 16 These are the five steps that improve students' behavior: they bring its features to life, successfully identify and recognize problems in the classroom, and manage them. It is a non-therapeutic procedure to maintain ethical values and to improve them in students. As information is collected and reviewed, educators, managers and guardians, working with positive persistence towards a particular end, can motivate the students towards moral values. Institutional Environment and Moral Development: It is the responsibility of the institution to promote methods and approaches, decisions and choices for ethical values, which may affect the masses daily. Any organization can change people's practices through environmental changes, describe the importance of good character in the work environment, and identify it.
17 The following points describe the importance of the institutional environment for moral values: Internal Communication: Use internal discussion channels in a well-disposed environment to encourage positive behavior in the working environment and to promote volunteering through the network and coaching, e.g., through (a) internal pamphlets, (b) ethical informative posters in canteens and entertainment rooms, (c) mailers, and (d) electronic mail. Outside Communication: With regard to reaching teachers and others, convey messages about voluntary roles and ethics. For example: a) Make sure that none of your procedures weakens the structure of the teaching and learning process. b) Include positive messages at the basic level of the framework. c) Check for weak points in the guidelines in order to improve quality. Network Outreach: a) Use open outreach structures to empower coaching and other character-building programs. b) Encourage educational and youth associations to become active in character building. c) Use corporate influence to energize (assemblies of trade, conferences, boards, classes, gatherings, workshops, addresses) and different organizations to help character building. 18 Moral Deterioration: A Solution in the Light of Islamic Teachings: The history of the world has never seen a period that embodied a more legitimate sense of deep moral quality than the roughly 13 years of Islamic history in which the Prophet (ﷺ) established the community in Madina, which can be considered a perfect example for the world. "And undoubtedly, you possess excellent manners" The Prophet (ﷺ) presented a good example among the people, and the Qur'ān held this up as a model. The ground of moral quality is that man must deal and decide justly in accordance with divine instruction. It is not a one-day snapshot certificate; it is certified when a person is not being watched by anyone or anything. It is a routine procedure that runs through everyone's life, and the chain of inquiry continues whenever the accuracy of an action, its rightness or wrongness, is in question. Muslims are tied to moral standards and find in them the direction to decide their course. After the Qur'ān, the life of the Prophet is the best source of guidance. While expounding everlasting verities, he at the same time exemplified them through his character and conduct. His life is full of episodes in which he projects a proper sense of ethical quality. "He who obeys the Messenger thereby obeys Allah; as for he who turns away, we have not sent you as a keeper over them." Indeed, even his opponents admitted that his ethics were very good. There are examples of his thoughts for human benefit, his works of generosity, his words of morality, his respectful and calm support, and shining examples for everyone to copy, together with his prayers for human salvation. 21 Despite such a wonderful lesson, our society today, for all its power, shows the overall condition of its goodness: it is facing various problems, of which the most severe at every level is moral corruption. Unfortunately, this can be seen in businesses, workshops, factories, plants and public places, where people interact most. Most likely it is driven, by all accounts, by individual, short-sighted and purely purpose-based investment, influence, and economic prosperity. The Prophet (ﷺ) said: "Gentleness is not found in anything except that it adorns it, and it is not removed from anything except that it disfigures it."
"For each one is successive [angels] before and behind him who protect him by the decree of Allah. Indeed, Allah will not change the condition of a people until they change what is in themselves. And when Allah intends for a people ill, there is no repelling it. And there is not for them besides Him any patron." The Qur'ān creates a moral course. However, part 17 specifically provides obedience that can be followed by material life, through which one person and society can clean the colors of their enemies. This lesson is the only reason for making a moral society. Allah explains that: "Your Lord knows well what is in your hearts, if you be righteous, then no doubt. He is forgiving to those who repent." The desire of a slave is gaining less contact with Allah. Allah the Almighty says in the Qur'ān: 26 "After that, those who came, they lost their prayers and started walking on their faces, so they would certainly be punished for error." Conclusion The findings of this research highlight that there is a correlation between the ethical conduct of students and their university environment, which brings changes in human personality and character development. Social illnesses and moral deterioration are rooted in the behavior of the students, which they usually acquire from the home and university environment. The purpose of this research was to provide students an opportunity to develop themselves in their ethical conduct and moral values by practicing certain modules and instructions. We can't change society and profoundly settled in social states of mind overnight yet we should predict and design our future society at present and ask ourselves what will be our general public in the year 2036, for example, quite a while from now when a radical new age will have grown up. To recognize and acknowledge individual qualities, human beings must take their parts and desires and develop themselves, based on the belief of Islam under the umbrella of Islam. 27 It might be a tough assignment, however, it should be tried, but it could be done with the help and support of the scholars of Islamic Studies. They should be firm with a dedicated soul to train the students' character to shape an ideal future society free from misuse, corruption, fraud, and arrogance. In this way, it is important to work that man is suitable for developing emotions and self-esteem, and then he will have the ability to work until he is flexible with external weight. Hopefully, this research would help improve the ethics of the students at the university level and developing them with strong moral values and social norms. It is believed that this research would be a door opener for future research This work is licensed under a Creative Commons Attribution 4.0 International License.
5,712
2020-06-29T00:00:00.000
[ "Education", "Philosophy" ]
Gut microbiota in regulation of childhood bone growth Childhood stunting and wasting, or decreased linear and ponderal growth associated with undernutrition, continue to be a major global public health challenge. Although many of the current therapeutic and dietary interventions have significantly reduced childhood mortality caused by undernutrition, there remain great inefficacies in improving childhood stunting. Longitudinal bone growth in children is governed by different genetic, nutritional and other environmental factors acting systemically on the endocrine system and locally at the growth plate. Recent studies have shown that this intricate interplay between nutritional and hormonal regulation of the growth plate could involve the gut microbiota, highlighting the importance of a holistic approach in tackling childhood undernutrition. In this review, I focus on the mechanistic insights provided by these recent advances in gut microbiota research and discuss the ongoing development of microbiota-based therapeutics in humans, which could be the missing link in solving undernutrition and childhood stunting. In the past decade, exciting research has demonstrated that the gut microbiota plays an important role in the hormonal and nutritional regulation of bone growth and body growth (de Vadder et al., 2021). In this review, I focus on the mechanistic insights provided by these recent advances and discuss the ongoing development of microbiota-based therapeutics for the treatment of undernutrition and childhood stunting. Growth hormone-insulin-like growth factor I axis and bone growth Childhood bone growth is governed by a complex interplay between different endocrine signals, including growth hormone (GH), insulin-like growth factor I (IGF-I), thyroid hormone, glucocorticoids and sex steroids, such as oestrogen. Instead of providing an extensive review of this topic, I focus on the GH/IGF-I axis, which is by far the most important endocrine regulator of bone growth and has recently been implicated in gut microbiota interactions (Figure 1). Secretion of GH in the anterior pituitary gland is stimulated by GH-releasing hormone (GHRH) produced in the hypothalamus (Figure 1). Growth hormone then travels in the circulation as an endocrine factor to act on its target tissues, where the GH receptor is expressed, such as the liver, the intestine and the growth plate. Although GH is also known to have a direct growth-promoting effect locally on its target tissues (Liu et al., 2017), the predominant function of GH is to activate the production of IGF-I, which itself is a potent stimulator of growth. When GH-induced IGF-I production happens in the liver, the hepatic IGF-I complexes with the IGF-binding protein IGFBP3 and functions as an endocrine factor, acting on IGF-I receptors and stimulating growth in many different tissues throughout the body (Yakar et al., 2018).
When non-hepatic IGF-I is produced locally, such as in the growth plate, IGF-I acts as an autocrine or paracrine factor to stimulate stem cell recruitment, proliferation and hypertrophic differentiation of chondrocytes (Lui et al., 2019), all of which are essential for driving bone elongation. In addition, GH-induced local IGF-I also supports epithelial stem cell proliferation and absorption of nutrients in the small intestine (Zheng et al., 2018). The importance of the GH/IGF-I axis for body growth is demonstrated by pituitary adenomas that lead to elevated systemic GH and IGF-I causing gigantism and acromegaly and, conversely, by deficiency of GH leading to decreased IGF-I and short stature. Similar to many other endocrine systems, a negative feedback loop exists between circulating IGF-I levels and GH production in the pituitary. Therefore, in patients with GH insensitivity (also known as Laron syndrome) attributable to loss-of-function mutations of the GH receptor, IGF-I production is diminished, leading to poor linear growth despite elevated GH levels caused by the lack of feedback inhibition. Interestingly, GH production in the pituitary is also regulated by several hormones related to energy metabolism and appetite. Somatostatin, which is produced in the hypothalamus and the gastrointestinal tract, suppresses GH production. In contrast, ghrelin and leptin, produced by the stomach and adipose tissue, respectively, both stimulate GH production. Highlights • What is the topic of this review? This review is about the mechanisms by which the gut microbiota regulates bone growth. • What advances does it highlight? There is in vivo evidence for regulation by the gut microbiota of the growth hormone-insulin-like growth factor I axis. An association exists between gut microbiota immaturity and undernutrition. Clinical trials of microbiota-based therapeutics are underway. Nutritional regulation of the GH/IGF-I axis Nutrition plays an important role in modulating the GH/IGF-I axis of bone growth. During chronic undernutrition, GH receptor expression is downregulated in both the liver and the growth plate (Wu et al., 2013), thereby limiting the ability of circulating GH to induce both hepatic and peripheral IGF-I production and leading to a state of GH insensitivity similar to that in Laron syndrome. Undernourished children therefore have decreased systemic IGF-I but elevated GH levels, primarily owing to the lack of negative feedback from IGF-I, in addition to elevated levels of hunger-induced ghrelin (El-Hodhod et al., 2009). Consequently, decreased IGF-I signalling via the IGF-I receptor can directly limit chondrocyte proliferation and hypertrophic differentiation in the growth plate (Lui et al., 2019). In addition, decreased IGF-I and scarcity of essential amino acids during undernutrition can also suppress mechanistic target of rapamycin (mTOR) signalling, which, in turn, negatively regulates cell division and survival (Sancak et al., 2008).
Growth plate senescence and catch-up growth The rate of longitudinal growth is most rapid in the first 1000 days of life and gradually slows as we approach our final adult height. This growth deceleration is associated with the gradual decline in growth plate function, also known as growth plate senescence. Importantly, growth plate senescence is characterized by a gradual depletion of chondrogenic stem cells and decreasing chondrocyte proliferation and hypertrophy in the growth plate (Lui et al., 2011). Although growth plate senescence normally advances with age, growth-inhibiting conditions slow this senescence, conserving the remaining growth potential of the growth plate and thereby enabling catch-up growth once the inhibiting condition resolves (Forcinito et al., 2011). Figure 1. Regulation of bone growth by the GH/IGF-I axis. Secretion of GH from the anterior pituitary is positively regulated by GHRH produced by the hypothalamus. Growth hormone stimulates production of IGF-I in the liver, which then acts as an endocrine factor to stimulate bone growth at the growth plate. Growth hormone also stimulates local IGF-I production in target tissues, such as the growth plate and the intestine, where it acts as a paracrine/autocrine growth factor. Nutritional status positively regulates bone growth and maturation of the gut microbiota, which reciprocally promotes nutritional intake. Effect of the gut microbiota on bone growth In 2016, a couple of studies highlighted the importance of the gut microbiota in bone growth. Schwarzer et al. (2016) showed that germ-free mice (lacking the whole microbiota, including that in the gut) have decreased body growth and decreased longitudinal bone growth compared with wild-type mice. This growth deficit in germ-free mice appears to be caused by the lack of the gut microbiota, because gut recolonization with Lactobacillus plantarum was able to rescue much of their growth deficiency. In a similar study by Yan et al. (2016), long-term gut microbiota colonization in germ-free mice improved bone formation and long bone length. Taken together, these two studies showed convincingly that the presence of the gut microbiota is beneficial, and perhaps essential, to normal bone growth. Possible effect of the gut microbiota on IGF-I The molecular mechanisms by which the gut microbiota supports bone growth are not clear. A connection via the GH/IGF-I axis has been proposed based on the observation that circulating IGF-I and IGFBP3 levels were downregulated in germ-free mice (Schwarzer et al., 2016), which is reversed upon microbiota colonization (Schwarzer et al., 2016; Yan et al., 2016). One exciting possibility is that the gut microbiota specifically regulates the host GH/IGF-I axis, such that, for example, hepatic IGF-I production is stimulated by certain molecules or metabolites released by the microbes. However, another possible explanation is that the gut microbiota normally aids in macronutrient digestion (Oliphant & Allen-Vercoe, 2019), which becomes less effective in germ-free mice, causing a mild state of undernutrition. In that case, the observed decrease in IGF-I and IGFBP3 with no change in GH is likely to represent GH insensitivity, which is one of the many hormonal changes induced by undernutrition. Importantly, these two explanations are not mutually exclusive and both could be true.
To test whether the gut microbiota regulates bone growth via the GH/IGF-I axis, one could try to restore circulating IGF-I in germ-free mice to levels similar to those of wild-type mice and ask whether that alone is sufficient to restore bone growth fully. Injection of recombinant IGF-I (for 10 days, twice daily, at 5 mg/kg) was able to improve bone growth in germ-free mice but not to a statistically significant extent in wild-type mice, suggesting that the growth deficit is driven by reduced IGF-I in the germ-free mice (Schwarzer et al., 2016). However, there are some important caveats to this interpretation. Injection of a high dose of recombinant IGF-I results in a higher than normal level of circulating IGF-I (Gillespie et al., 1990), which is likely to improve bone growth regardless of whether or not the original growth deficit is IGF-I dependent, much as GH treatment can promote growth even in children without GH deficiency. One could point to the promising observation that recombinant IGF-I preferentially improved bone growth in germ-free mice but not in wild-type mice. However, because the germ-free mice did not grow as well as the wild-type mice before treatment, there was likely to be more growth potential remaining in the growth plate of the germ-free mice, and therefore recombinant IGF-I might elicit a more prominent growth response (or catch-up growth) owing to delayed growth plate senescence.

Possible regulatory mechanisms of the gut microbiota on IGF-I
If indeed the gut microbiota regulates the GH/IGF-I axis specifically, what could be the molecular signal by which the microbes stimulate IGF-I secretion? One possibility is short-chain fatty acids (SCFAs), which are major metabolites produced during fermentation of dietary fibre (Wong et al., 2006). Yan et al. (2016) showed that in mice in which the gut microbiota is depleted by broad-spectrum antibiotics or vancomycin, the reduction in serum IGF-I can be reversed by supplementation with SCFAs. The limitations of their findings were that long-term bone growth was not assessed upon treatment with SCFAs, and they did not determine whether SCFAs alone could be used to stimulate IGF-I and bone growth in germ-free mice. The observed induction of IGF-I by SCFAs has since been replicated by later studies (Czernik et al., 2021) and in systems other than bone growth, such as prostate cancer (Matsushita et al., 2021).

If SCFAs are able to stimulate the GH/IGF-I axis, it is unclear which signalling pathways enable this. The SCFA receptors involved in IGF-I production could be the G protein-coupled receptors GPR41 and GPR43, whose expression in bone can be induced by faecal microbiota transplantation (Xiao et al., 2022). Ghrelin has been proposed to mediate SCFA-induced modulation of the GH/IGF-I axis because it can stimulate GH secretion in the pituitary. Interestingly, an acute increase in colonic SCFAs is negatively associated with the level of ghrelin in humans (Rahat-Rozenbloom et al., 2017). However, ghrelin stimulates GH secretion rather than suppressing it, in which case the presence of the gut microbiota should increase SCFAs and thus lower (rather than increase) ghrelin, GH and IGF-I production, opposite to what is observed.

The signal by which the gut microbiota stimulates IGF-I might not even be a metabolite.
In fact, a recent study showed that bacterial cell walls isolated from L. plantarum were sufficient to stimulate IGF-I and bone growth in mice (Schwarzer et al., 2023), suggesting that sensing of postbiotics by the host might be able to induce growth-promoting metabolic and hormonal signals (Matos et al., 2017). This postbiotic sensing appears to be driven by the innate immune receptor nucleotide-binding oligomerization domain-containing protein 2 (NOD2) expressed in intestinal epithelial cells, because treatment with bacterial cell walls was unable to induce IGF-I and improve bone growth in Nod2-null mice (Schwarzer et al., 2023). Interestingly, the same study showed that NOD2-activating ligands, such as muramyl dipeptide or the synthetic NOD2-activating adjuvant mifamurtide, were alone sufficient to induce IGF-I and bone growth, suggesting that NOD2 agonists could be a new class of therapeutic agents for improving childhood stunting.

Inflammatory cytokines and other possible mechanisms
Could the gut microbiota support bone growth by molecular mechanisms other than those mentioned above? One potential mechanism could be modulation of the effect of inflammatory cytokines on bone growth (Sederquist et al., 2014). It is well established that chronic inflammatory diseases, such as inflammatory bowel disease, Crohn's disease, ulcerative colitis and juvenile idiopathic arthritis, negatively impact childhood bone growth (d'Angelo et al., 2021). Part of this growth impairment is attributable to undernutrition associated with these conditions and to the adverse side-effects of the glucocorticoid therapy used for treatment. However, another major cause of growth inhibition comes from a local effect of cytokines, which are often elevated in inflammatory diseases. At a systemic level, pro-inflammatory cytokines can inhibit bone growth by suppressing IGF-I. For example, in mice overexpressing interleukin-6 (IL-6), body growth is significantly suppressed, with decreased IGF-I and IGFBP3 but normal levels of GH (de Benedetti et al., 2002). At a local level, the pro-inflammatory cytokines tumour necrosis factor alpha (TNFα), interleukin-1β (IL-1β) and IL-6 can all inhibit chondrocyte proliferation and hypertrophy while promoting apoptosis in the growth plate (Mårtensson et al., 2004).

The gut microbiota has been shown to influence circulating levels of pro-inflammatory cytokines (Mizutani et al., 2022). A recent study by Webster et al. (2022) showed that serum IL-1β and IL-6 levels were correlated with the presence of certain bacterial strains in the gut microbiome. Mechanistically, butyrate, one of the SCFAs produced by the gut microbiota, has been shown to inhibit the inflammatory response elicited by lipopolysaccharides, TNFα and interleukins via GPR41 and GPR43, both in endothelial cells (Li et al., 2018) and in chondrocytes (Pirozzi et al., 2018), suggesting that the gut microbiota could stimulate bone growth by reducing inflammation. It would be interesting to test whether some of the growth-promoting effect of the gut microbiota, including its effect on IGF-I, is mediated by suppression of inflammation in vivo.
In addition to the GH/IGF-I axis and inflammatory cytokines, the regulation of bone growth by the microbiome is likely to be multifactorial. Future serum metabolomic analyses and transcriptome profiling of growth plate chondrocytes in germ-free mice recolonized with different gut microbiotas and under various nutritional statuses will provide more comprehensive insights into the role of the gut microbiome in bone growth via other molecular mechanisms and signalling pathways.

Healthy maturation of the gut microbiota
The gut microbiota is established in newborns under the influence of multiple factors, including the birth mode, the maternal microbiota, breastfeeding and other environmental factors. During early infancy, the biodiversity of the gut microbiota remains narrow. For example, in breastfed infants, many of the gut microbes are specialized for the metabolism of breast milk oligosaccharides and originate from the mother's breast milk (Pannaraj et al., 2017). During the first year of life, the infant gut microbiota undergoes massive changes, shaped largely by nutritional availability and increased dietary complexity. Gradually, a stable and phylogenetically diverse gut microbiota is established by 2-3 years of age (Koenig et al., 2011).

This succession and maturation of the gut microbiota plays an important role in the normal development and maintenance of various aspects of human health. In a study of healthy infants in Bangladesh, 24 age-predictive bacterial taxa were identified, whose changes in abundance during the first 2 years of life could be used to define the process of normal gut microbiome maturation (Subramanian et al., 2014). Their findings serve as an important reference for evaluating the maturity of the gut microbiota, in the form of the microbiota-for-age z-score (MAZ), which is significantly correlated with the chronological age of children and with healthy growth phenotypes, such as the weight-for-age z-score (WAZ) and the height-for-age z-score (HAZ) (Subramanian et al., 2014). A recent study showed that bacterial taxonomy in the gut microbiota might not necessarily predict the future growth trajectory; instead, the functional metagenomic features of the gut microbiome, which could still be taxa dependent, are better indicators of linear and ponderal growth and of growth velocities (Robertson et al., 2023).
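To make the z-score machinery concrete, the sketch below illustrates how a microbiota-for-age score of the kind introduced by Subramanian et al. (2014) can be computed. This is a minimal illustration in Python, not the authors' pipeline: the regressor choice, the toy abundance data, the 2-month age window and all variable names are assumptions for demonstration only.

```python
# Minimal sketch of the microbiota-for-age z-score (MAZ) concept of
# Subramanian et al. (2014). Data and parameter choices are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Relative abundances of the 24 age-discriminatory taxa for a healthy
# reference cohort (rows: samples, columns: taxa), plus ages in months.
X_ref = np.random.dirichlet(np.ones(24), size=200)   # placeholder data
age_ref = np.random.uniform(1, 24, size=200)         # placeholder ages

# "Microbiota age": a regressor trained to predict chronological age
# from taxon abundances in healthy children.
model = RandomForestRegressor(n_estimators=500, random_state=0)
model.fit(X_ref, age_ref)

def maz(x_child, age_child, window=2.0):
    """MAZ = (microbiota age - median microbiota age of healthy children
    of similar chronological age) / s.d. of that reference group."""
    mb_age = model.predict(x_child.reshape(1, -1))[0]
    same_age = np.abs(age_ref - age_child) <= window   # assumed window
    ref_pred = model.predict(X_ref[same_age])
    return (mb_age - np.median(ref_pred)) / ref_pred.std()
```

Anthropometric scores such as WAZ and HAZ follow the same pattern: the child's measurement is standardized against the age- and sex-matched median and standard deviation of a healthy reference population.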
A vicious cycle of childhood stunting
Persistent immaturity of the gut microbiota is associated with childhood undernutrition, which is particularly profound in children with severe acute malnutrition (SAM) (Subramanian et al., 2014). In children with SAM, nutritional interventions only partially improved gut microbiota maturity and were completely ineffective in improving HAZ (Subramanian et al., 2014). Reciprocally, colonization of gnotobiotic mice and pigs with microbiota from undernourished children was sufficient to induce stunted bone growth and body growth (Gehrig et al., 2019; Wagner et al., 2016). Considering that the gut microbiota might also impact nutritional intake, these findings suggest a vicious cycle of childhood stunting, wherein undernutrition-induced gut microbiota dysbiosis itself contributes to undernutrition and childhood stunting (Figure 2). This undernutrition cycle might also be intergenerational, because the maternal microbiota, which could be nutrition dependent, strongly influences early fetal growth, birth weight and the maturation of the infant microbiota (Gough et al., 2021). Furthermore, some evidence suggests that the negative impact of undernutrition on the gut microbiota could become harder to correct when it traverses multiple generations (Sonnenburg et al., 2016). Therefore, repairing the gut microbiome should be considered as part of the prevention and management of undernutrition in solving childhood stunting.

Therapeutic strategy for repairing the gut microbiome
Microbiota-based therapies (Figure 3) have shown some promising results in animal models. For example, a five-species consortium of growth-supporting bacteria was able to alleviate the growth deficiency in mice colonized with the gut microbiota from an undernourished donor (Blanton et al., 2016). In a similar study mentioned above, introduction of a certain strain of L. plantarum (or simply its isolated bacterial cell wall) could ameliorate growth phenotypes during undernutrition in mice (Schwarzer et al., 2016, 2023), providing a strong rationale for testing different approaches in clinical trials. Here, I highlight the success stories of several types of microbiota-based therapy tested in humans (Table 1).

FIGURE 3 Different classes of microbiota-based therapeutics. Probiotics are live microbes (usually in food) that contribute beneficially to the gut microbiota. Prebiotics are ingredients in food (such as fibre) that help to stimulate growth of the gut microbiota. Microbiota-directed food is specially formulated food that helps to shape or direct the growth of certain bacteria or improve microbiota maturation. Synbiotics are mixtures of probiotics and prebiotics. Postbiotics are materials left behind by probiotics, such as the bacterial cell wall. Antibiotics are antimicrobial substances that kill or inhibit microbes, which might help to reset the microbiota equilibrium. Faecal microbiota transplant is a procedure that delivers mature gut microbiota from a healthy human donor stool to a recipient, such as an undernourished child with gut microbiota dysbiosis. Abbreviations: FMT, faecal microbiota transplant; MDFs, microbiota-directed foods.
Therapy with probiotics and synbiotics
Previously, two large clinical trials among children with SAM, in Malawi (Kerac et al., 2009) and Uganda (Grenov et al., 2017), found limited benefit of synbiotic or probiotic supplementation on nutritional recovery, whereas a more recent probiotic trial in Bangladeshi infants with SAM reported improved weight gain (Barratt et al., 2022). A follow-up study with a larger number of participants and a longer recovery period might help to confirm the effectiveness of probiotic therapy and explain the discrepancy in efficacy between trials.

Microbiome-directed food
Similar to prebiotics, which are ingredients in food that promote growth of gut microbes, microbiome-directed foods (MDFs) are specially designed formulations of food with the ability to boost the representation of key growth-promoting gut microbes and improve gut microbiota maturity. In an initial trial in 2019, children (mean age, 15.2 ± 2.1 months) in Bangladesh with SAM were treated with ready-to-use supplementary food (RUSF) or one of three formulations of MDFs (n = 14-17 per group) for 4 weeks, with a 2 week follow-up (Gehrig et al., 2019). One of the formulations, MDCF-2, significantly improved weight gain (measured by mid-upper arm circumference, MUAC), with increased plasma levels of IGF-I and IGFBP3 (Gehrig et al., 2019). In a subsequent trial (Chen et al., 2021), with a longer intervention period (3 months) and focusing on MDCF-2, the weekly changes in WAZ and weight-for-height z-score both improved in MDCF-treated patients compared with RUSF-treated patients (MDCF-2, n = 61; RUSF, n = 62). This effect appeared to be mediated by repair of the gut microbiota by the MDF, because bacterial taxa reported to be involved in normal gut microbiota maturation were significantly increased in the MDF-treated group. The enrichment of these bacterial taxa was also positively correlated with the weight-for-height z-score in these patients. However, none of these studies has provided long-term follow-up to assess the improvement in bone growth meaningfully, which would be extremely informative.

Conflicting effects of antibiotics
The World Health Organization recommends a short course of antibiotics, such as amoxicillin, for outpatient management of uncomplicated SAM. However, antibiotics can also indiscriminately target both pathogenic and commensal bacteria, thus perturbing the gut microbiome ecosystem. It was previously shown that even short-term use of antibiotics in preterm infants could have a potentially harmful and long-lasting effect on the gut microbiota (Gasparrini et al., 2019). In a large clinical trial comparing 7 days of treatment with amoxicillin or placebo in Nigerian children with SAM, short-term nutritional outcomes (measured by MUAC) were significantly improved after 1 month (Isanaka et al., 2016) but had largely disappeared after 1 year (Isanaka et al., 2020). However, in a recently published secondary analysis of the Nigerian trial, Schwartz et al. (2023) focused on antibiotic resistance and the gut microbiota and found that, although amoxicillin transiently increased antibiotic resistance genes and decreased the diversity of the gut microbiota, these changes largely subsided within 3 weeks. Surprisingly, 2 years after the initial treatment, the amoxicillin-treated children had increased gut microbiome diversity and richness relative to placebo-treated control children (Schwartz et al., 2023), suggesting an unexpected long-term benefit that might outweigh the short-term risks of antibiotic resistance. A follow-up study on metabolic and anthropometric analysis will help to clarify whether or not antibiotic treatment has any long-term benefit for bone growth.
CONCLUSION
A deeper understanding of the gut microbiota in general, and in the context of the endocrine and nutritional regulation of bone growth, has highlighted the importance of a holistic approach to solving undernutrition. In addition to the bare minimum of improving nutritional status, mitigation of gut microbiota dysbiosis, either by introducing growth-stimulating bacterial strains or by promoting gut microbiota maturation, should be considered as a complementary therapeutic strategy. As we continue to uncover the mechanistic links in host-microbe interactions, more clinical trials with larger sample sizes, more extensive follow-up and newer classes of microbiome-based interventions are surely on the horizon. If the gut microbiota has been the missing link for childhood stunting all along, the stage could finally be set for public health policy-makers to develop and implement effective treatment, or even preventive care, to solve childhood stunting once and for all.

AUTHOR CONTRIBUTIONS
Sole author.

FIGURE 2 A vicious cycle of childhood stunting. Undernutrition in children could lead to gut microbiota immaturity, which itself could contribute to undernutrition and childhood stunting. This undernutrition cycle can also be intergenerational, because maternal size and maternal undernutrition are risk factors for SGA. Maternal undernutrition could also lead to maternal gut microbiota dysbiosis, which could be transmitted to the infant at birth. Improving nutritional intake by dietary intervention and microbiota-based therapy might both be needed to escape this vicious cycle and achieve healthy childhood growth. Abbreviation: SGA, small for gestational age.

TABLE 1 Recent clinical trials of microbiota-based therapy.
Rapid high resolution imaging of diffusive properties in turbid media

We propose a laser speckle based scheme that allows the analysis of local scattering properties of light diffusely reflected from turbid media. This turbid medium can be a soft material, such as a colloidal or polymeric material, but can also be biological tissue. The method provides a 2D map of the scattering properties of a complex, multiple scattering medium by recording a single image. We demonstrate that the measured speckle contrast can be directly related to the local transport mean free path l* or the reduced scattering coefficient μt = 1/l* of the medium. In comparison to some other approaches, the method does not require scanning (of a laser beam, detector or the sample itself) in order to generate a spatial map. It can conveniently be applied in a reflection geometry and provides a single characteristic value at any given position with an intrinsic resolution typically on the order of 5-50 μm. The actual resolution is, however, limited by the transport mean free path itself and can thus range from microns to millimeters. © 2011 Optical Society of America

OCIS codes: (030.6140) Speckle; (110.6150) Speckle imaging; (170.3880) Medical and biological imaging.

References and links
1. E. Paruta-Tuarez, H. Fersadou, V. Sadder, P. Marchal, L. Choplin, C. Baravian, and C. Castel, "Highly concentrated emulsions: 1. average drop size determination by analysis of incoherent polarized steady light transport," J. Colloid Interface Sci. 346(1), 136-142 (2010).
2. C. Baravian, F. Caton, J. Dillet, and J. Mougel, "Steady light transport under flow: characterization of evolving dense random media," Phys. Rev. E 71, 066603 (2005).
3. H. M. Wyss, S. Romer, F. Scheffold, P. Schurtenberger, and L. J. Gauckler, "Diffusing-wave spectroscopy of concentrated alumina suspensions during gelation," J. Colloid Interface Sci. 240, 89-97 (2001).
4. P. Snabre, L. Brunel, and G. Meunier, "Multiple light scattering methods for dispersion characterization and control of particulate processes," in Particle Sizing and Characterization, ed. T. Provder and J. Texter (American Chemical Society, Washington DC, 2004).
5. Formulaction SA (Bordeaux, France), http://www.formulaction.com/; LSInstruments AG (Fribourg, Switzerland), http://www.lsinstruments.ch.
6. A. Yodh and B. Chance, "Spectroscopy and imaging with diffuse light," Phys. Today 48(3), 34-40 (1995).
7. F. Bevilacqua, D. Piguet, P. Marquet, J. Gross, B. Tromberg, and C. Depeursinge, "In vivo local determination of tissue optical properties: applications to human brain," Appl. Opt. 38(22), 4939-4950 (1999).
8. R. B. Schulz, J. Ripoll, and V. Ntziachristos, "Noncontact optical tomography of turbid media," Opt. Lett. 28, 1701-1703 (2003).
9. D. A. Weitz and D. J. Pine, "Diffusing wave spectroscopy," in Dynamic Light Scattering, ed. W. Brown (Oxford University Press, 1992).
10. P. D. Kaplan, A. D. Dinsmore, and A. G. Yodh, "Diffuse-transmission spectroscopy: a structural probe of opaque colloidal mixtures," Phys. Rev. E 50, 4827-4835 (1994).
11. C. Aegerter and G. Maret, "Coherent backscattering and Anderson localization of light," Prog. Opt. 52, 1-62 (2009).
12. D. Cuccia, F. Bevilacqua, A. J. Durkin, F. Ayers, and B. Tromberg, "Quantitation and mapping of tissue optical properties using modulated imaging," J. Biomed. Opt. 14(2), 024012 (2009).
13. A. Joshi, W. Bangerth, and E. M. Sevick-Muraca, "Non-contact fluorescence optical tomography with scanning patterned illumination," Opt. Express 14, 6516-6534 (2006).
14. J. W. Goodman, Speckle Phenomena in Optics (Roberts & Company, 2007).
15. D. Magatti, A. Gatti, and F. Ferri, "Three dimensional coherence of light speckles: experiment," Phys. Rev. A 79, 053831 (2009).
16. S. E. Skipetrov, J. Peuser, R. Cerbino, P. Zakharov, B. Weber, and F. Scheffold, "Noise in laser speckle correlation and imaging techniques," Opt. Express 18, 14519-14534 (2010).
17. M. Erpelding, A. Amon, and J. E. Crassous, "Diffusive wave spectroscopy applied to the spatially resolved deformation of a solid," Phys. Rev. E 78, 046104 (2008).
18. P. Zakharov and F. Scheffold, "Monitoring spatially heterogeneous dynamics in a drying colloidal thin film," Soft Mater. 8, 102-113 (2010).
19. L. F. Rojas-Ochoa, S. Romer, F. Scheffold, and P. Schurtenberger, "Diffusing wave spectroscopy and small-angle neutron scattering from concentrated colloidal suspensions," Phys. Rev. E 65, 051403 (2002); [http://www.lsinstruments.ch/scattering calculator/].
20. J. Peuser, A. Belhaj-Saif, A. Hamadjida, E. Schmidlin, A. D. Gindrat, A. C. Völker, P. Zakharov, H. M. Hoogewoud, E. M. Rouiller, and F. Scheffold, "Follow-up of cortical activity and structure after lesion with laser speckle imaging and magnetic resonance imaging in nonhuman primates," J. Biomed. Opt. 16, 096011 (2011).
21. P. Zakharov, A. Völker, A. Buck, B. Weber, and F. Scheffold, "Quantitative modeling of laser speckle imaging," Opt. Lett. 31(23), 3465 (2006).
22. N. Curry, P. Bondareff, M. Leclercq, N. F. van Hulst, R. Sapienza, S. Gigan, and S. Gresillon, "Direct determination of diffusion properties of random media from speckle contrast," Opt. Lett. 36(17), 3332-3334 (2011).
23. O. L. Muskens and A. Lagendijk, "Broadband enhanced backscattering spectroscopy of strongly scattering media," Opt. Express 16(2), 1222 (2008).
24. J. C. Ragain Jr. and W. M. Johnston, "Accuracy of Kubelka-Munk reflectance theory applied to human dentin and enamel," J. Dent. Res. 80, 449 (2001).
25. B. Weber, C. Burger, M. T. Wyss, G. K. von Schulthess, F. Scheffold, and A. Buck, "Optical imaging of the spatiotemporal dynamics of cerebral blood flow and oxidative metabolism in the rat barrel cortex," Eur. J. Neurosci. 20(10), 2664 (2004).
Introduction
The determination of a sample's optical scattering properties is important for a diverse set of fundamental research areas and industrial applications, ranging from stability monitoring of complex fluids and formulations (for example, to differentiate sedimentation, creaming and phase separation) [1-3] to in-vivo biological studies of tissue composition and blood perfusion [4-8]. A number of techniques exist which provide a single measurement of the scattering properties of a given sample volume. Diffusing Wave Spectroscopy (DWS) [9] (or "Diffuse Transmission Spectroscopy" [10]) and coherent backscattering [11] are such techniques, which can measure the scattering strength of a diffusive medium through determination of a characteristic parameter known as the transport mean free path l* or the reduced scattering coefficient μt = 1/l*. Light reflected from a turbid medium has typically entered the object up to a depth z of a (few times the) transport mean free path l*. A limitation of these techniques is that any individual measurement gives an average over the assumed homogeneous and generally rather large sample volume; spatially resolved measurements are not possible. In this article we describe a scheme that provides a 2D map of the diffuse scattering properties of a complex, multiple scattering medium by recording a single image with an exposure time in the millisecond range.

Fig. 1. Experimental setup displaying beam path and all components. A single-mode diode-pumped solid-state laser operating at 532 nm is deflected onto a ground-glass optical diffuser mounted onto a rotating motor. The coherence length of the laser beam being sufficiently large (l_coh ≫ l*) is critical for the proposed technique. The light scattered from the diffuser is collimated by a lens and directed by a semi-transparent beamsplitter onto the sample. A digital camera images the sample surface through an objective. A crossed polarizer is mounted in front of the camera to attenuate specular reflections.

Experimental setup
In general, measurement of diffuse scattering in the reflection geometry is of great interest for many applications, including in-vivo biological imaging, as access to only one side of the sample region is required. In such a context, spatially resolved measurements are desirable, and this is traditionally achieved by placing distinct incident and detector optical fiber probes on the sample surface, separated by a given distance [7]. Light scattered by the sample and measured by the detector is processed using the known separation distance to recover the sample scattering properties. Alternatively, the diffuse broadening of a point source at a sample surface can be imaged and analyzed to extract l* [1]. While useful and already available in early commercial implementations, all of these methods require scanning of either the probe or the sample to map a sample area. In contrast to these real-space approaches, spatially patterned illumination has recently been introduced as a method to map tissue optical properties [12]. This method bears some similarities to our approach, although it lacks the high spatial resolution we can achieve in single-frame acquisition. Patterned illumination has also been applied to optimize fluorescence optical tomography [13]. The experimental setup is presented in Fig. 1.
A collimated laser beam (λ = 532 nm) is slightly focused (or expanded) onto a ground-glass diffuser using a first lens. The diffuser is mounted on an electric motor, and the light impinges on the diffuser at a distance of about 10 mm from the rotation axis. The size w0 of the illumination spot on the diffuser can be adjusted by positioning lens (1) at a certain distance relative to the diffuser. The forward-scattered light is collimated using a second lens with a focal length L = 4 cm, thus creating a homogeneous speckle beam with a width of ≈ 2 cm [14]. We obtain longitudinally elongated near-field speckles and use them to illuminate the sample, placed at a distance of about 6-8 cm from the lens. In this so-called deep-Fresnel regime the speckles have a Gaussian shape with a well defined size b ≈ πλL/w0 [15]. Thus for a laser beam of diameter 1 mm we expect a speckle size 2b ≈ 100 μm. The actual beam-speckle size b is determined by placing the camera at the sample position. From the raw images we can calculate the spatial correlation function and thus b, as described in [16] (Fig. 2). The speckle beam obtained in such a way impinges on the sample. A digital camera focused on the sample surface (Prosilica GC 650, Allied Vision Technology, pixel size a = 7.4 μm, 659 × 493 pixels) records images at 0.83× magnification, with its aperture set to produce image speckle roughly of the order of the size of a camera pixel. We use a semi-transparent beamsplitter to separate the illumination and detection paths. The motor is then set to rotate at a fixed frequency f, and the camera exposure time τ_exp is set to average over a large number of different speckle configurations produced by the diffuser (a more detailed discussion concerning the exposure is given later in the text). It should be clarified that image speckle is of a different origin than beam speckle. The latter is induced by the ground-glass diffuser, and its purpose is to impose a random field illumination pattern onto the sample that can be quickly varied. Image speckle, on the other hand, arises regardless of any illumination patterning; even with plane-wave illumination, interference of the diffusely backscattered coherent laser light yields speckle at the image plane. It is this image speckle which is set at approximately the size of the camera pixel.
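The beam-speckle size calibration described above lends itself to a compact numerical recipe. The following sketch (Python; a plausible implementation, not the authors' code) estimates the speckle radius b from a raw camera image via the inverse Fourier transform of the power spectrum, along the lines of Ref. [16]; the 1/e criterion for the width of the correlation peak is an assumed convention.

```python
# Sketch: beam-speckle size from the spatial intensity autocorrelation,
# obtained as the inverse Fourier transform of the speckle power
# spectrum (Wiener-Khinchin). Circular boundaries are implied by the FFT.
import numpy as np

def speckle_correlation(img):
    """Normalized spatial autocorrelation of a speckle image."""
    dI = img.astype(float) - img.mean()
    power = np.abs(np.fft.fft2(dI)) ** 2       # power spectrum
    corr = np.real(np.fft.ifft2(power))        # autocorrelation
    corr /= corr.flat[0]                       # peak normalized to 1
    return np.fft.fftshift(corr)               # zero lag at the center

def speckle_radius_pixels(img):
    """Speckle radius b: lag at which the central correlation peak
    has dropped to 1/e of its maximum (taken along one axis)."""
    corr = speckle_correlation(img)
    cy, cx = np.array(corr.shape) // 2
    profile = corr[cy, cx:]                    # radial cut from the peak
    below = np.where(profile < 1.0 / np.e)[0]
    return below[0] if below.size else np.nan  # in pixel units
```

Multiplying the returned pixel count by the pixel size a (7.4 μm here) converts the estimate to physical units.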
Such random modulation of the beam speckle also influences the spatial fluctuations of the diffusely backscattered image speckle. The proposed method for 2D mapping of l* using the setup described above can best be explained by studying two limiting cases, in which the beam-speckle size is either much smaller (b ≪ l*) or much larger (b ≫ l*) than the sample transport mean free path. It can be shown that half of the incident photons leave the sample at a distance of less than ∼3l* from the point of entry, while the other half leaves at greater distances [17,18]. Thus, in the case where b ≪ l*, the diffusely backscattered electric field amplitude at any point on the imaged surface is composed of contributions from many incident beam speckles originating from a surface area ∼10πl*² ≫ b². Rotating the ground glass alters the configuration of the speckle beam randomly and therefore also leads to random fluctuations of the image speckle. By choosing a sufficiently long camera exposure, random temporal fluctuations of the image speckle arising from fluctuations of the speckle beam are time-integrated. This then results in a spatially homogeneous intensity distribution measured by the camera. In the opposite limit, b ≫ l*, the scattered field amplitude at a given point on the imaged surface is composed of contributions of only a single incident beam speckle within the typical surface area ∼10πl*² ≪ b². If the field amplitude of the beam speckle is varied, this does not alter the local statistical properties (such as the variance) of the image speckle. For a given pixel on the imaging detector (matched to the image speckle size), this case is similar to just turning the laser light on and off; since the sample properties are time-invariant, the local spatial variance of the time-integrated image speckle remains unchanged. The transition between these two limiting cases is continuous, and therefore the signature of the residual image speckle can be used to map l* values.

Homogeneous samples - calibration curve
In order to demonstrate the proposed l* imaging method, both homogeneous and spatially heterogeneous samples were studied. In the experiments presented here we only use sufficiently solid samples; therefore intrinsic speckle fluctuations due to Brownian motion of scattering centers can be neglected. In the first experiment we prepared four samples with different values of l* by mixing a suspension of 990 nm diameter polystyrene spheres with water and gelatin (final gelatin concentration 4%, or 2% in one case). After one day the samples were solid, as verified using DWS (DWS RheoLab, LSInstruments, Switzerland) [5,9]. The four different concentrations of polystyrene particles doped into the gelatin (by weight) are 1.25% (l* = 245 μm), 2.08% (l* = 147 μm), 4.15% (l* = 74 μm) and 6.20% (l* = 50 μm, gelatin 2%, solid). The values for l* have been calculated analytically assuming particle and solvent refractive indices of n_p = 1.59 and n_s = 1.34 [19].

By visually comparing speckle images from two samples (one with a short l* and one with a large l*, diffuser at rest), one can easily observe the phenomena discussed previously, as shown in Fig. 3. A random pattern (defined by the incident beam speckle) superimposed on the fine image speckle is apparent in Fig. 3(b), but not in Fig. 3(a).
In principle, an analysis of the static beam-speckle pattern visible in Fig. 3 already allows one to extract an estimate of l* for a given sample. Such an approach would rely on the same principles as the structured illumination method discussed in reference [12]. However, the spatial resolution of that method is limited, since the illuminating speckle must be on the order of l* or larger to produce a measurable effect. It should be emphasized that the method presented here provides a much higher intrinsic resolution. This is made possible by analyzing the properties of the residual image speckle as the pattern of the illumination field is rapidly varied. Figure 4 illustrates how the resulting image speckle (diffuser in motion at 50 Hz) evolves as a function of l*, where l* decreases from Fig. 4(a) to 4(b) and further to Fig. 4(c). Figures 4(d)-4(f) give the associated maps of the local speckle contrast K (standard deviation / mean) calculated for each pixel position over the surrounding 25-pixel area. A shorter l* leads directly to an increase of the average measured local variance.

To confirm the trend and demonstrate that the measured contrast is in fact a function of l* normalized to the beam-speckle size b, a further study of the polystyrene-in-gelatin samples was performed for varied beam-speckle sizes. Figure 5 summarizes the results and confirms the quantitative relationship of K with the normalized reduced scattering coefficient b·μt = b/l*; it reveals that the image speckle contrast increases linearly for small and intermediate values b/l* ≤ 1. For higher values it must saturate, since it cannot exceed the speckle contrast for plane-wave illumination, in the present case 0.295. Overall, the data can be described empirically by a tanh fit. With a single setting of the incident speckle-beam forming optics, it is possible to cover roughly the range b/l* = 0.05-5, corresponding to two orders of magnitude in μt. For example, with b = 50 μm one can probe scattering coefficients roughly from μt = 1 mm⁻¹ [l* = 1 mm] to 100 mm⁻¹ [l* = 10 μm].

High resolution imaging
While the previous experiments exploited only an average of the locally measured image contrast for a homogeneous sample, it is possible to measure and image local scattering properties with high resolution. Figure 6 shows a contrast map [20] of white paper (l* ≈ 11 ± 1 μm), the right side of which is covered with correction tape (l* ≈ 1-2 μm), with part of the correction tape removed (single scratch). The spatially averaged l* of the paper and correction tape were determined using DWS. The scattering properties of the two materials are well differentiated, and the structure of the sample is well resolved. The local l* can be extracted from the speckle contrast image via the empirical tanh fit.

Minimum exposure time and applications to samples with internal motion
The product of camera exposure time τ_exp and ground-glass rotation frequency f dictates (for a solid sample) the number of beam-speckle configurations over which the image represents an average. By varying the motor speed and observing the variance of the resulting beam-speckle images taken at the sample position (as shown in Fig. 7), we can estimate the value of τ_exp·f required for a decent average. In our experiment, sufficient averaging is observed for τ_exp·f > 0.05. For a maximum value of f = 200 Hz in our case, this corresponds to a lower limit for the exposure time of τ_exp ∼ 250 μs. Using a high-speed brushless DC electric motor at f ∼ 1000 Hz and illuminating the ground glass at a larger radial distance from the rotation axis, exposure times on the microsecond scale could be realized. Such a high-speed version of our experiment would also be suitable for the study of liquid samples, since the typical time scale for speckle fluctuations of multiply scattered light is on the order of 100 μs or more in the reflection geometry [9]. Microsecond acquisition times will, however, additionally demand a fast and sensitive camera as well as sufficient laser power.
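The analysis chain from raw image to l* map can be summarized in a few lines of code. The sketch below (Python; a plausible implementation, not the authors' code) computes the local speckle contrast K over a sliding 5 × 5 window and inverts the empirical calibration K = 0.285·tanh(0.38·b/l*) + 0.01 of Fig. 5; the uniform-filter shortcut for the local moments and the clipping that keeps the inversion inside the invertible range of the tanh are implementation assumptions.

```python
# Sketch: local speckle contrast map and inversion of the empirical
# tanh calibration to obtain an l* map. Parameters follow Fig. 5.
import numpy as np
from scipy.ndimage import uniform_filter

def contrast_map(img, win=5):
    """Local contrast K = std/mean over a win x win sliding window."""
    I = img.astype(float)
    mean = uniform_filter(I, size=win)
    mean_sq = uniform_filter(I**2, size=win)
    var = np.clip(mean_sq - mean**2, 0.0, None)   # guard against round-off
    return np.sqrt(var) / mean

def lstar_map(img, b_um, win=5, K0=0.285, c=0.38, K_off=0.01):
    """Invert K = K0*tanh(c*b/l*) + K_off for l* (same units as b_um).
    Only meaningful where K_off < K < K0 + K_off."""
    K = contrast_map(img, win)
    arg = np.clip((K - K_off) / K0, 1e-6, 1 - 1e-6)   # keep arctanh finite
    return c * b_um / np.arctanh(arg)
```

With the beam-speckle size b known from the calibration step, a single long-exposure frame is thus converted directly into a spatial map of the transport mean free path.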
Summary and conclusions
In summary, we have presented a simple scheme to map the diffuse scattering properties of a turbid medium. We have demonstrated its feasibility and established and verified the main principles. Namely, we have shown that we can use the post-averaging residual speckle contrast to construct a map of the reduced scattering coefficient μt. A linear relationship between the speckle contrast and b/l* has been found for b/l* ≲ 1. This value of l* is representative of the local superficial scattering properties of the medium, wherein the contributions to this local average are dominated by the material at the surface and decay strongly in the vertical direction. This effect, as well as the ability of the technique to resolve sub-surface inhomogeneities, has been characterized previously [21]. The advantages of our approach are its simplicity and consequently low cost, its high resolution and its extremely short acquisition time. While currently limited to solid samples, an extension of the method to liquid media is straightforward, though technically more demanding and thus more costly. A further limitation of the method is that it does not provide information about the sample absorption coefficient μa [6]. However, the latter could be included via an analysis of the reflected intensity, at least for the case of weak absorption, where μa ≪ μt. Since we are working with time averages and a broad, statistically distributed speckle beam, an extension to diffuse 3D tomography is precluded. Equally, the method cannot be applied to diffuse imaging of fluorescence, due to the absence of interference speckle. Coherence requirements also complicate the measurement of spectral properties. Limited spectroscopic information could be obtained by using a set of two or three lasers and a color digital camera. It would also be feasible to use a supercontinuum source along with a set of filters to extract both scattering and spectroscopic properties, in a similar fashion as has been implemented for transmission speckle contrast [22] and coherent backscattering measurements [23].

Although most previous methods of this kind have been targeted at biomedical imaging, we think the present method might be particularly suited for studies of soft materials, for applications such as phase separation, sedimentation or creaming of highly turbid suspensions, slurries or emulsions. It might also be applicable to dental imaging [24] or studies of bone tissue and related questions. An interesting outlook would be to combine the method with laser speckle imaging [25]. Keeping the diffuser at rest would allow one to study dynamic properties of the medium, while spinning the ground glass would provide access to diffuse scattering properties in a single experimental configuration.

Fig. 2. (a) Direct image of the speckle beam, taken by placing the camera at the sample position, for the smallest speckle size considered. (b) Normalized intensity correlation function g2(Δr) obtained by the inverse Fourier transform of the speckle power spectrum. The speckle size is varied from 2b = 36 μm to 126 μm.

Fig. 3. Recorded image speckle with the motor at rest for a sample with l* = 245 μm (a) and l* = 50 μm (b). The size of the incident beam speckle is 2b = 3.4 pixels (32 μm). A random pattern (defined by the incident beam speckle) superimposed on the fine image speckle is apparent in (b) but not in (a).
Fig. 5. Speckle contrast K of image speckle as a function of b/l* (symbols). Data for three different beam-speckle settings (b) are shown. The transport mean free path is l* = 245, 147, 74 and 50 μm for the polystyrene-in-gelatin samples and l* ≈ 11 μm for white paper. Motor spinning at 50 Hz, camera acquisition set to τ_exp = 30 ms exposure. Solid line (inset): an empirical tanh fit provides a quantitative link between the measured contrast and the sample scattering properties (K = 0.285·tanh[0.38·b/l*] + 0.01). For b/l* < 1 the speckle contrast scales linearly (dotted lines). For b/l* ≫ 1 the speckle contrast for plane-wave illumination, K = 0.295, is recovered (dashed-dotted horizontal line).

Fig. 6. High resolution greyscale-coded map of the speckle contrast K for white paper (left) covered with a correction tape (right) that is scratched once with a knife. Motor spinning at 50 Hz, 230 ms exposure, beam-speckle size b = 16 μm, 5 × 5 pixels used for local contrast analysis [20]. The sample also shows a slight intensity contrast (not shown) due to the finite reflectivity of the correction tape. The local l* can be extracted from the speckle contrast image via the empirical tanh fit, Fig. 5.

Fig. 7. Speckle contrast of the beam speckle imaged for varying combinations of camera exposure time (τ_exp) and ground-glass diffuser rotation frequency (f). Images are shown for three data points, demonstrating the averaging effect at larger exposure-time/rotation-frequency combinations.
Improved ozone profile retrieval from spaceborne UV backscatter spectrometers
B. Mijling, O. N. E. Tuinder, R. F. van Oss, and R. J. van der A
Royal Netherlands Meteorological Institute (KNMI), P.O. Box 201, 3730 AE, De Bilt, The Netherlands
Received: 12 February 2010 - Accepted: 15 March 2010 - Published: 25 March 2010
Correspondence to: B. Mijling
Published by Copernicus Publications on behalf of the European Geosciences Union.

Introduction
There is a great need for information on the state and evolution of the global three-dimensional distribution of ozone in the atmosphere. Time series of ozone spanning years or even decades are important to detect changes in ozone, such as the expected recovery of stratospheric ozone. Stratospheric ozone measurements are also used by operational numerical weather prediction models to constrain the energy balance in the stratosphere, allowing improved forecasts. Knowledge of the distribution of ozone in the upper troposphere is important to quantify its contribution to radiative forcing and thus improve the understanding of climate change. Ozone in the boundary layer has adverse health effects and is an important species in air quality. Although boundary layer ozone is difficult to detect using ultraviolet backscattered spectra measured from space, the inferred information on the tropospheric abundance is relevant for air quality modelling.

The key to retrieving the vertical distribution of ozone in the atmosphere from ultraviolet (UV) backscattered sunlight is the sharp decrease in the ozone absorption cross-section between 265 and 330 nm. Photons at the shortest wavelengths only penetrate the upper part of the atmosphere; therefore backscattered short-wave photons contain information only on the upper layers of the atmosphere. Moving to longer wavelengths, deeper layers start to contribute to the backscattered radiance. Beyond 300-310 nm (depending on the solar zenith angle) a sizeable fraction of the solar light reaches the surface. Combining the radiances over the whole wavelength range thus provides information on the ozone profile.

The Global Ozone Monitoring Experiment (GOME) was launched on the European Space Agency's second Earth Remote Sensing (ERS-2) satellite in April 1995 to measure backscattered ultraviolet (UV) and visible light at moderate spectral resolution (0.2-0.4 nm). To retrieve ozone profiles from its measurements in channels 1 and 2 (240-404 nm), several algorithms have been developed based on different techniques: optimal estimation (Bhartia et al., 1996; Chance et al., 1997; Munro et al., 1998; Hoogen et al., 1999; Van der A et al., 2002), Phillips-Tikhonov regularization (Hasekamp and Landgraf, 2001), and a neural network approach (Del Frate et al., 2002; Müller et al., 2003). An extensive intercomparison of these algorithms has been done by Meijer et al. (2006). Similar algorithms have also been applied to measurements from SBUV, SCIAMACHY, OMI and GOME-2. At the Royal Netherlands Meteorological Institute (KNMI) the Ozone Profile Retrieval Algorithm (OPERA) (Van Oss and Spurr, 2002; Van der A et al., 2002) has been developed, based on the optimal estimation technique and able to ingest data from the satellite instruments GOME, GOME-2 and OMI. The quality of the OPERA retrievals from GOME data has been validated extensively by De Clercq et al. (2007), who compared them with sonde, lidar, microwave and other satellite measurements.
Since 2007, OPERA has been used operationally for ozone profile retrievals from GOME-2 data in near real time.

In our study, we evaluate the global performance of the algorithm by its convergence behaviour. Bad convergence statistics indicate where the algorithm has problems retrieving an ozone profile. In this way we can isolate geographical problem areas such as South America (Sect. 4) and deserts (Sect. 5). Studying the convergence statistics also allows us to assess the influence of the input data (such as the ozone cross-section and the ozone climatology) on the retrieval result, as will be shown in Sects. 6 and 7. By implementing the algorithm adaptations and selecting better input data, the larger number of successful retrievals improves global coverage and increases the average retrieval speed, facilitating the use of retrievals in near-real-time applications and the reprocessing of large datasets.

Algorithm overview and retrieval configuration
OPERA solves the inverse problem of retrieving the vertical ozone distribution from the measured radiance spectrum. In the forward step, it uses a radiative transfer model to calculate the spectrum from a model atmosphere including a first-guess ozone profile and estimates of cloud fraction and cloud height. The single scattering part of the radiation is computed with a fast single scattering model; the multiple scattering part is computed with the Linearized Discrete Ordinate Radiative Transfer model (LIDORTA) in four streams (Van Oss and Spurr, 2002), taking the sphericity of the atmosphere into account in the pseudo-spherical approximation. To save computation time, LIDORTA calculates scalar radiation fields. Polarization effects, which cannot be neglected (Hasekamp et al., 2002), are included afterwards by correcting with values from a look-up table created with the doubling-adding radiative transfer model, which does include polarization (de Haan et al., 1987). In the inverse step, the difference between the measured and simulated measurement is used to update the ozone profile and auxiliary parameters such as the effective surface albedo. The retrieval is ill-posed in the sense that many profiles give similar simulated spectra within given error bars. These profiles differ in their small-scale structures, which are not well constrained. By using the optimal estimation technique (Rodgers, 2000), an optimal solution is selected that combines information from the measurement with an a priori, climatological, ozone profile. Because of the non-linearity of the problem, an iteration scheme is needed, repeating the forward model and inverse step until certain convergence criteria are met (see below). Throughout this article the ozone climatology will be used to provide both the a priori for the optimal estimation and the first guess for the iteration.

In this paper we test the performance of our algorithm (OPERA version 1.0.9) with GOME data, taking advantage of the extensive calibration effort put into the measurements of this satellite instrument (Van der A et al., 2002; Krijger et al., 2005). Retrieval of ozone profiles requires absolutely calibrated reflectivities, which makes it very sensitive to the accuracy and precision of the reflectivity calibration (Van der A et al., 2002).
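Since OPERA's inverse step follows the standard optimal estimation formalism, a single Gauss-Newton iteration can be sketched compactly. The code below (Python) illustrates the textbook update of Rodgers (2000), not OPERA itself; the forward model F and its Jacobian are placeholders standing in for the LIDORTA radiative transfer calculation.

```python
# Minimal sketch of one Gauss-Newton optimal-estimation step
# (Rodgers, 2000). All interfaces are illustrative placeholders.
import numpy as np

def oe_step(x_i, x_a, y, F, jacobian, S_a_inv, S_e_inv):
    """One iteration of the MAP estimate:
    x_{i+1} = x_a + (K^T Se^-1 K + Sa^-1)^-1 K^T Se^-1
              (y - F(x_i) + K (x_i - x_a))."""
    K = jacobian(x_i)                     # linearized forward model
    A = K.T @ S_e_inv @ K + S_a_inv       # curvature of the cost function
    rhs = K.T @ S_e_inv @ (y - F(x_i) + K @ (x_i - x_a))
    x_next = x_a + np.linalg.solve(A, rhs)
    S_x = np.linalg.inv(A)                # posterior (solution) covariance
    return x_next, S_x
```

The iteration repeats this step until the convergence criterion described below is met, with the posterior covariance S_x also entering that criterion.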
Our results are based on the level 1b product extracted with the GOME Data Processor, version 4.01 (Slijkhuis, 2006), which contains corrections for the degradation of the reflectance, the radiance offset, and the polarization sensitivity. We restrict ourselves to data from 1998, i.e. before degradation of the sun-normalised radiance sets in.

Table 1 summarizes the most important retrieval settings and input data. The ozone profile is fitted for 40 atmospheric layers distributed homogeneously over altitude, from surface level to 0.1 hPa. Furthermore, OPERA fits either the surface albedo or the cloud albedo, depending on the cloud fraction. Validation studies show that the retrieval quality is improved further when OPERA fits a radiometric offset between the measurement and the simulated reflectance in the Band 1a window. Cloud fraction and cloud pressure are derived from the oxygen-A band using the FRESCO algorithm (see Sect. 5).

Convergence criteria and global retrieval performance
Due to the non-linearity of the retrieval problem, the optimal estimate and its covariance are calculated numerically, using an iteration scheme based on the Gauss-Newton method (Rodgers, 2000). This requires a convenient criterion for stopping the iteration. Here, we break off the iteration when the error-weighted difference between two consecutive state vectors is below a fixed threshold ε:

(1/n) (x_{i+1} - x_i)^T S_x^{-1} (x_{i+1} - x_i) < ε,

in which S_x indicates the covariance matrix of state vector x_i, and n is the dimension of the "state space", the vector space spanned by the fit parameters. Throughout this study we will use ε = 0.02. A stricter convergence criterion (e.g. taking ε = 0.01) will slightly increase the mean number of iteration steps needed to reach convergence, but does not change the retrieval results significantly.

To investigate the algorithm behaviour for the full range of atmospheric and observational conditions, we run the algorithm with the default settings of Table 1 on a reference dataset of all retrievals in February 1998 (∼69 000 retrievals in orbit numbers 14557 to 14957, excluding narrow swaths). On average, 5.15 iteration steps were needed for convergence; 11.4% of the retrievals did not converge within 10 steps. As can be seen from the convergence statistics in Fig. 1, it is reasonable to break off the iteration after 10 steps when the convergence criteria are still not met, since these retrievals apparently never converge.

We construct global monthly average fields of various parameters by projecting all GOME footprints on a grid of 1° × 1°. Figure 2a maps the mean number of iteration steps per grid cell, while Fig. 2b shows the mean fraction of non-converged retrievals; a fraction of 1 indicates that for this grid cell all retrievals did not converge.
In this way, different problem areas are uncovered. Apparently, the algorithm suffers from retrieval problems in distinct areas such as South America, deserts (such as the Sahara and West Australia), and Antarctic snow and ice. The band-like structure over the Pacific, roughly from Ecuador to Papua New Guinea, appears to correspond with the position of the intertropical convergence zone (ITCZ) for this month. These non-convergence issues will be addressed in the next sections. Furthermore, it will be shown that non-convergence can also originate from desert dust and from ozone anomalies.

The degrees of freedom for signal (DFS) is another useful measure to investigate the overall performance of the algorithm. Qualitatively, the DFS indicates how much information has been inferred from the measurements. If n is the dimension of the state space, we have DFS = n if the measurements completely determine the state vector, and DFS = 0 if there is no information at all in the measurement and the retrieval is completely determined by the a priori information. Typically, the DFS for the ozone profile fit parameters retrieved by OPERA from GOME measurements varies between 4 and 7, governed by the a priori errors, the measurement errors and the sensitivity of the radiation to the profile. The latter varies mainly with the solar zenith angle, the cloud fraction and the surface albedo.

South Atlantic Anomaly
The area of non-convergence over South America in Fig. 2b corresponds with the location of the South Atlantic Anomaly (SAA). This is the region where the sun-synchronous satellite orbit (typically at 800 km altitude) intersects the inner Van Allen radiation belt. The high-energy particles (mainly protons and electrons) trapped inside this belt interact with the instrument, causing additional noise and spikes in the measurements. Due to the weak signal level at short wavelengths, especially radiance measurements in Band 1a are affected by the SAA (see Fig. 3). The distorted spectra cannot be simulated by the radiative transfer model (RTM), which causes convergence problems for the algorithm.

In order to avoid this type of non-convergence, an SAA filter was implemented. Basically, it disqualifies from the Band 1a fit all measurements which are affected by the impact of high-energy particles. This is done by considering the reflectance R_ref at a certain wavelength λ0 as a reference, which is used to evaluate the validity of the neighbouring reflectance at the shorter-wavelength side, R(λ_{i-1}). This measurement is considered a statistical outlier and disqualified if its value is higher than R_ref plus n times the reflectance error σ:

R(λ_{i-1}) > R_ref + n σ(λ_{i-1}).

If not, R(λ_{i-1}) is taken as the new reference value to evaluate the next measurement. Note that we only test an upper boundary condition; testing for a lower boundary condition is not as straightforward, due to the decreasing reflectance values towards shorter wavelengths.

Figure 4a shows the retrieval results for February 1998, using the SAA filter with parameters λ0 = 290 nm and n = 3. By comparing with Fig. 2a, one can see that the SAA filter works well. For the region containing the SAA (here taken between 5° S-40° S latitude and 5° W-75° W longitude), the mean number of iteration steps is reduced from 7.51 to 6.90, mainly caused by a drop in non-convergence from 52.6% to 40.9%. Globally, non-convergence drops from 11.4% to 10.8% for this month.
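The spike rejection just described maps naturally onto a short routine. The following sketch (Python) implements the outlier test R(λ_{i-1}) > R_ref + nσ under the stated defaults λ0 = 290 nm and n = 3; the array-based interface is an assumption, as the paper does not describe the implementation.

```python
# Sketch of the SAA spike filter: scan from the reference wavelength
# toward shorter wavelengths, discarding statistical outliers. Assumes
# wl is sorted in ascending order and lam0 lies within its range.
import numpy as np

def saa_filter(wl, R, sigma, lam0=290.0, n=3.0):
    """Return a boolean mask of measurements kept for the Band 1a fit."""
    keep = np.ones_like(R, dtype=bool)
    i0 = np.searchsorted(wl, lam0)       # index of the reference wavelength
    R_ref = R[i0]
    for i in range(i0 - 1, -1, -1):      # walk toward shorter wavelengths
        if R[i] > R_ref + n * sigma[i]:
            keep[i] = False              # spike: disqualify, reference stays
        else:
            R_ref = R[i]                 # valid: becomes the new reference
    return keep
```

Only an upper bound is tested, in line with the text: towards shorter wavelengths the reflectance decreases anyway, so a symmetric lower bound would reject valid measurements.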
The effectiveness and selectivity of the filter can be seen in Fig. 4b, which shows the mean number of spectral measurements used for the retrieval. Outside the SAA all measurements are used for retrieval; deep in the SAA the measurements become so noisy that almost all measurements in Band 1a are discarded: the total number of used measurements drops from 587 to 370. This causes the DFS to decrease, as can be seen in Fig. 4c.

Low cloud fractions at deserts
Since clouds in the field-of-view strongly affect the measured reflectance, they need to be included in the radiative simulation of the atmosphere. The convergence problems above deserts, as revealed by Fig. 2, can be related to the cloud parameters used.

OPERA retrieves its cloud parameters with its in-built FRESCO algorithm, version 4 (see Koelemeijer et al., 2001). The effective cloud pressure P_c and the effective cloud fraction f_c are derived from the reflectivities in the oxygen-A band (758-766 nm), based on the principle that clouds screen the oxygen below the cloud. In the continuum, the reflectivity depends mainly on the cloud fraction, the cloud albedo (here assumed to be 0.8) and the surface albedo (taken from a monthly global minimum-reflectivity database; Fournier et al., 2006). The depth of the absorption band, however, depends also on the cloud pressure.

The cloud parameters are used by the RTM, which performs two calculations: one cloud-free (R_clear) and one fully clouded (R_cloud), with clouds at pressure level P_c. The reflectance for the partially cloudy scene is computed from

R = (1 - f_c) R_clear + f_c R_cloud.

Depending on the cloud fraction, either the surface or the cloud albedo is included in the state vector. For f_c ≥ 0.2, backscattered radiation is dominated by the bright clouds: OPERA will fit the cloud albedo and takes the surface albedo from a database. For f_c < 0.2, backscattered radiation from the surface becomes dominant, and OPERA will fit the surface albedo and sets the cloud albedo to 0.8.

The majority of convergence problems over deserts are caused by a surface albedo which is fitted to negative values. FRESCO overestimates small cloud fractions (see Fournier et al., 2006), because its minimum-reflectivity surface albedo database is not sufficiently decontaminated from the presence of absorbing desert dust aerosols and is therefore too low. The overestimated cloud fraction results in a simulated spectrum in which radiances are too high. In order to match the measured spectrum, the inverse step will lower the surface albedo. Because the initial surface albedo is small (typically 0.05 in the UV spectrum) and negative values are not allowed, there is not enough flexibility to compensate for the difference in radiance: the algorithm will not converge.

To prevent this problem, the following workaround has been implemented: if FRESCO retrieves f_c < 0.2 and in one of the consecutive iterations the surface albedo is fitted below zero, then f_c is set to 0 and the retrieval process is restarted. By doing so, the presence of clouds will be compensated by adjustments in the surface reflectance. Although restarting with clear-sky conditions takes at least one extra RTM-inversion cycle, overall computation time is gained by avoiding non-convergence.
Compared with Fig. 4a, Fig. 5a shows the improvement of the retrieval results above deserts; the global non-convergence statistics drop from 10.8% to 6.5%. The selectivity of this workaround is shown in Fig. 5b by mapping for each location the fraction of retrievals to which it has been applied. As can easily be seen, it applies mainly to desert areas, especially the Sahara and Australia, but also the Namibian and Atacama deserts and dry, sparsely clouded areas in Mexico and India.

The workaround also solves the convergence problem due to a dust outbreak event in February 1998 flowing out from West Africa towards South America; as a comparison, Fig. 5c shows the mean aerosol optical depth at 500 nm for the same month. FRESCO attributes the increased reflectance (around 750 nm) due to the presence of the dust cloud over a dark ocean to an increased effective cloud fraction. The same dust cloud absorbs radiation in the UV, lowering the reflectance measured in this regime. These two effects will force OPERA to retrieve a surface albedo below zero, which can be avoided by assuming a cloud-free model atmosphere.

To investigate the impact of the error which is introduced by neglecting a small cloud fraction, we select a representative non-converging desert pixel, with the centre of its footprint 900 km west of Lake Chad, and with cloud parameters f_c = 0.105 and P_c = 824 hPa according to FRESCO. We perform a set of retrievals for this pixel with P_c fixed at 824 hPa and f_c ranging from 0 to 0.2. Figure 6a shows the dependence of the retrieved ozone column and the surface albedo on the cloud fraction: overestimation of the real cloud fraction is compensated by a darker surface and more absorbing ozone. For f_c > 0.07, the retrieved surface albedo becomes negative. Assuming a realistic surface albedo of 0.05 for our pixel, we estimate the true cloud fraction to be f_c = 0.025. In Fig. 6b all retrievals are plotted; absolute deviations from the reference retrieval at f_c = 0.025 are shown in Fig. 6c. By switching to a cloud fraction of 0, the retrieved surface albedo increases to unrealistically high values (0.08 in our example). The retrieved total ozone column, however, decreases by less than 0.2%. This decrease is caused by a decrease in the partial ozone columns of the lower model layers up to 17 km; above the ozone bulk the profile does not change significantly.

Ozone cross-sections

Another important quantity that determines the accuracy of the radiative transfer calculation is the ozone absorption cross-section (given at vacuum wavelengths). In OPERA, cross-section values are calculated from a lookup table, which is parameterized by wavelength and temperature. Errors in the used cross-sections can change the total retrieved ozone and the vertical distribution of this ozone significantly, as shown by Liu et al. (2007). Wrong cross-sections introduce an additional forward model error which influences the convergence statistics of the algorithm. Switching from Bass-Pauer (BP) to Brion (BR) ozone cross-sections improves the convergence statistics of the retrieval algorithm considerably: for February 1998 the non-convergence drops from 6.5% to 5.0% (compare Fig. 5a with Fig. 8a). To investigate whether this improved convergence affects the retrieval quality, we validate the retrievals with ground measurements. A comprehensive validation study of OPERA retrievals (though based on an older algorithm version) has been done by De Clerq et al.
(2007). Here, we restrict ourselves to 95 collocated microwave measurements (within 2 h and 400 km) done in Bern in 1998 with the GROMOS instrument (Dumitru, 2006). We prefer microwave over balloon sonde measurements for its ample altitude range (15–75 km compared to 0–30 km, respectively) and over lidar measurements for its short time interval between satellite overpass and ground measurement (0–2 h compared to 8–12 h). GROMOS, operated since 1994 as part of the Network for the Detection of Stratospheric Change (NDSC), retrieves the ozone profiles using the optimal estimation method. Between 20–70 km the contribution of the a priori profiles is less than 10% (Dumitru, 2006). The altitude resolution varies between 10–12 km at the 30 km altitude level and 20–25 km at altitude levels above 60 km.

Figure 7 shows the validation results for retrievals performed with BP and with BR, compared with their corresponding microwave profiles, which are smoothed with the averaging kernels from the retrieval method. For all retrievals the relative difference for each atmospheric layer is shown, together with the mean and the standard deviation. Errors in the forward model or model parameters show up as biases of the mean values. From Fig. 7 it can be seen that the GOME profiles correspond well with the microwave profiles in the mesosphere and around the ozone bulk at 10 hPa. In between, the OPERA retrievals show an underestimation of ∼5% for both cross-sections, in correspondence with the validation study by De Clerq et al. (2007). Below the bulk, between 40–100 hPa, retrievals with BR underestimate the amount of ozone, while BP overestimates ozone in this range. Due to the lower values of the BR cross-sections, the algorithm puts more ozone in its model atmosphere to compensate for the loss of absorbed radiance. The mean retrieved total ozone column in February 1998 therefore increases from 343 DU to 346 DU (considering only retrievals which converge for both cross-sections), which is in accordance with the findings of Liu (2007). Compared with BP, BR cross-sections cause the average profile to increase by 14 DU in the 1000–100 hPa range and to decrease by 12 DU in the 100–20 hPa range.
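Smoothing a high-resolution reference profile with the satellite retrieval's averaging kernels, as done for the GROMOS comparison above, typically follows the standard optimal-estimation relation; a minimal sketch (function names and array conventions are our own assumptions):

```python
import numpy as np

def smooth_with_averaging_kernel(x_ref, x_apriori, A):
    """Map a high-resolution reference profile onto the vertical
    resolution of the satellite retrieval:  x_s = x_a + A (x_ref - x_a),
    with A the averaging kernel matrix of the retrieval.
    All profiles must be given on the retrieval's pressure grid."""
    return x_apriori + A @ (x_ref - x_apriori)

def relative_difference(x_retrieved, x_smoothed):
    """Relative difference per layer, as plotted in the validation figure."""
    return 100.0 * (x_retrieved - x_smoothed) / x_smoothed
```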
Ozone climatologies

In OPERA, the ozone climatology is used to select an ozone profile at the latitude and time of retrieval which serves both as a priori information and as the initial state for the state vector. Ozone retrieval algorithms based on optimal estimation benefit from an accurate a priori in altitude regions where the measurement is less sensitive to the presence of ozone, since the retrieval tends to the a priori in that case. Furthermore, it offers a convenient starting point for the iteration, taking the initial state vector close to the assumed true state. To study the effect of the ozone climatology on the retrieval behaviour, we performed retrievals for GOME measurements of February and October 1998, using three different ozone climatologies. To get optimal performance, the algorithm was set to use the SAA filter, the low-cloud-fraction workaround, and the BR cross-sections.

Fortuin and Kelder

The Fortuin and Kelder (FK) climatology (see Fortuin and Kelder, 1998) is based on measurements of 30 ozone sonde stations between 1980–1991, covering the appearance of the ozone hole period but excluding the Pinatubo eruption. It describes the monthly mean ozone volume mixing ratio for 17 zonal bands, ranging from 80° S to 80° N, at 19 pressure levels. The sonde measurements (from the surface up to 10 hPa) are extended with the SBUV-SBUV/2 climatology (described in Randel and Wu, 1995) from 30–0.3 hPa. The standard deviation used here is the natural variability of ozone at each zonal band and at each pressure level for a certain month (Fortuin, 1996). For FK this is given up to 10 hPa; for higher atmospheric layers, OPERA extrapolates the error towards 0 at the top level. These standard deviations σ_i determine the diagonal elements of the a priori covariance matrix S_a. To allow for cross-correlations, off-diagonal elements are calculated using

S_a(i, j) = σ_i σ_j exp(−|log10(p_i / p_j)| / d),

in which d is the ozone profile correlation length per pressure decade, here taken as 0.5 (see the construction sketch below).

TOMS version 8

The TOMS version 8 ozone climatology (TOMS) (Frith et al., 2004) describes the monthly partial columns for 11 atmospheric layers for 18 zonal bands. In addition, it includes the total ozone column as an extra parameter to select the most appropriate profile when the total column is known. This prevents problems at ozone anomalies such as in the ozone hole, where the real profile differs too much from the monthly averaged profile. The dependence on total ozone is exploited in OPERA by using a fast algorithm for a total ozone column estimate, which is used to select the appropriate profile from the TOMS climatology. To make a fast estimate of the total ozone column, we implemented the Temperature Independent Differential Absorption Spectroscopy (TIDAS) algorithm by Zehner (2002). The principle of TIDAS is to use the difference of reflectance ΔR at two wavelengths λ1 and λ2. By selecting λ1 = 325.944 nm and λ2 = 326.746 nm for the GOME instrument, a compromise is made in which broadband spectral features can be neglected and ΔR is relatively insensitive to the temperature dependence of the ozone cross-section, the influence of the Ring effect, and interfering trace gas species such as NO2, ClO2, SO2 and BrO. ΔR becomes proportional to the ozone slant column, and with the help of a geometric air mass factor the total column can be estimated. Comparisons of the TIDAS estimates with the vertically integrated column values retrieved by OPERA show an agreement within 8% for the latitude range of −60° to +60°. For higher latitudes, the difference can increase to 11%, but the TIDAS estimate is still appropriate for our purpose: selecting an a priori and initial profile from the climatology based on the estimated total column. The TOMS climatology does not contain an error estimate, so one has to be postulated.
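As an illustration of the covariance construction above, the sketch below builds S_a from layer standard deviations and the pressure grid; this is a plausible reading of the stated correlation-length formula, not code from OPERA.

```python
import numpy as np

def apriori_covariance(sigma, pressure, d=0.5):
    """A priori covariance with exponential cross-correlations.

    sigma    : 1-D array of a priori standard deviations per layer
    pressure : 1-D array of layer pressures (hPa)
    d        : correlation length in pressure decades (0.5 here)
    """
    logp = np.log10(pressure)
    # |log10(p_i / p_j)| for every pair of layers
    dist = np.abs(logp[:, None] - logp[None, :])
    return np.outer(sigma, sigma) * np.exp(-dist / d)
```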
Small errors express confidence in the a priori, resulting in good convergence but a low degree of freedom. Large errors put more weight on the measurements, resulting in a high DFS, but poor convergence. The FK and McPeters and Labow (MLL) climatologies typically have a relative error of ∼15% for altitudes within and above the ozone bulk, and ∼25% in the troposphere. Between 80 and 300 hPa the relative error increases to 30–50% to describe the variability in the altitude of the base of the ozone bulk. Because the base of the ozone bulk is better constrained when an ozone profile is selected based on its total column, there is less need for increased error estimates in this range. We find a good compromise by taking a fixed relative error between 15–25% for all layers, months and latitudes, and constructing the covariance matrix as for FK and MLL.

Intercomparison

To study the impact of different ozone climatologies on the retrieval, we focus on the convergence behaviour and the degrees of freedom of the retrieval. Problems may arise when the a priori is not representative (either in shape, total ozone column value, or error) of the actual ozone distribution, or when the initial state vector is taken too far away from the true state. Figure 8 shows the mean number of iteration steps for February and October 1998 (∼78 000 retrievals in orbit numbers 18021 to 18464; excluding narrow swath) for the FK, MLL and TOMS climatologies. As can be seen, the FK and MLL climatologies give rise to important retrieval problems above Northern Europe in February and in the ozone hole in October. In both situations the climatology deviates too much from the truth, overestimating the total ozone column by up to 70 DU for Northern Europe in February, and by even more for the ozone hole. As a consequence, the retrieval needs more iteration steps, or does not converge at all. Because the TOMS climatology uses an a priori profile with a corresponding total ozone column, it offers a more accurate profile in these anomalous situations, which facilitates convergence here.

The left panels in Fig. 8 also reveal convergence problems around the equator related to the ITCZ in February 1998, especially for FK and MLL. The powerful convection in the ITCZ gives rise to a strong gradient in ozone concentrations between the bulk of the ozone and the very small concentrations in the troposphere. Here, the retrieval typically tends towards negative values for the tropospheric atmospheric layers. Apparently, the real ozone distribution is better described by the TOMS climatology, resulting in better convergence. In October 1998 the retrieval problems due to the ITCZ are less pronounced for all three climatologies.

The convergence statistics and the mean DFS for February and October 1998 are summarized in Table 2. To give a representative value, the mean DFS is calculated for latitudes between 60° S and 60° N only, excluding the SAA. The convergence statistics for October 1998 with FK and MLL are dominated by the ozone hole region. Because in the ozone hole the algorithm performs better with TOMS (taking an a priori profile close to the real situation), global non-convergence drops from ∼9% to ∼4%. For February, convergence with TOMS at 15% error is comparable to MLL, and TOMS at 25% error is comparable to FK. All TOMS climatologies result in retrievals with a mean DFS significantly better than retrievals done with MLL.
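The degrees of freedom for signal used throughout this comparison is, in optimal estimation, the trace of the averaging kernel matrix; a minimal sketch under that standard formulation (not tied to OPERA's internals):

```python
import numpy as np

def degrees_of_freedom(K, S_e, S_a):
    """DFS = trace(A), with the averaging kernel
    A = (K^T S_e^-1 K + S_a^-1)^-1 K^T S_e^-1 K.

    K   : Jacobian (measurements x state parameters)
    S_e : measurement-error covariance
    S_a : a priori covariance
    """
    KtSe = K.T @ np.linalg.inv(S_e)
    A = np.linalg.solve(KtSe @ K + np.linalg.inv(S_a), KtSe @ K)
    return float(np.trace(A))
```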
Figure 9 shows the zonal behaviour of the DFS for the different climatologies. For both February and October, retrievals with MLL have the lowest DFS, due to its small error estimate. Retrievals with TOMS at 25% error have the highest DFS. The drop in DFS below 60° S in October 1998 is due to the ozone hole. The few retrievals done with FK and MLL which do converge here are based on an a priori with a large error, and therefore show a sharp increase in DFS.

The choice of an ozone climatology depends on the application of the algorithm. TOMS at 15% is a good option when calculation speed is crucial, for instance in near-real-time applications. It has about the same fraction of successful retrievals as MLL, but at a higher DFS. Furthermore, it suffers less from retrieval problems at ozone anomalies. TOMS at 25% has the highest DFS, and can therefore be used in less time-critical applications where a maximum of information from the measurements is appreciated, such as in the processing of data-assimilated time series.

Discussion and conclusion

Studying its convergence behaviour is an appropriate way to test the global performance of an ozone profile retrieval algorithm. Here we applied the OPERA algorithm to GOME measurements, taking advantage of the calibration effort done for the spectral measurements of this instrument. By taking data from 1998 we avoid the degradation issues which affect GOME data of more recent years. The convergence statistics for February and October uncover different classes of retrieval problems related, e.g., to the South Atlantic Anomaly, low cloud fractions over deserts, desert dust outflow over the ocean, the intertropical convergence zone and ozone anomalies such as the ozone hole.

An algorithm adaptation has been implemented to filter out spiky measurements in the South Atlantic Anomaly region. The filter is selective, affecting predominantly measurements within the SAA. The convergence statistics improve for the SAA area, at the cost of a loss of DFS. More elaborate SAA filter schemes can be implemented, but should always consider the balance between gain of speed and loss of information. Problems with small cloud fractions above deserts can be avoided by neglecting clouds and switching to a clear-sky retrieval. The errors introduced hereby are acceptable (a decrease of ozone in the tropospheric layers causes the total ozone column to decrease by ∼1%). The workaround is selective and mainly affects desert areas like the Sahara and Australia, and it fixes retrieval problems at desert dust clouds above oceans. Switching to the new FRESCO+ cloud algorithm (Wang, 2006) in the OPERA software could also improve retrieval results above deserts. By taking Rayleigh scattering in the atmosphere into account, the global average of the retrieved cloud fraction becomes 0.01 lower. Cloud levels also drop, depending on the cloud fraction (e.g. 100 hPa at f_c = 0.2). Both effects will decrease the shielding of ozone by clouds in the model atmosphere. The simulated radiance at the top of the atmosphere will decrease, reducing the mismatch between simulation and measurement.
Using Brion ozone cross-sections instead of Bass-Pauer cross-sections strongly reduces the non-convergence of the algorithm. Validation with the microwave measurements in Bern shows that retrievals done with BR are comparable with retrievals done with BP. For both cross-sections the retrieved profiles show an underestimation of ∼5% in the upper stratosphere. BR cross-sections, however, tend to reduce ozone in the lower part of the ozone bulk by shifting it to the troposphere and lower stratosphere.

The selection of the ozone climatology in the optimal estimation method (here used as a priori and as initial state vector) importantly influences the retrieval results, such as convergence statistics, total column, profile shape, retrieval error, and DFS. We investigated this influence by comparing retrievals done with the Fortuin and Kelder, McPeters and Labow, and TOMS version 8 climatologies. Both FK and MLL cause convergence problems at ozone anomalies (such as the low ozone concentrations above Northern Europe in February 1998 and the ozone hole event of October 1998) which are not accurately enough described by the climatology. These problems are avoided with the TOMS climatology, which has the total ozone column as an extra parameter to select a suitable a priori ozone profile. Implementation of the TIDAS algorithm gives a quick estimate of the total column, accurate within 8% for the −60° to 60° latitude range. The TOMS climatology does not include an error estimate, but by postulating a relative error of 20% its retrievals have a higher DFS and comparable or better convergence characteristics than both FK and MLL. The TOMS climatology also prevents convergence problems related to the ITCZ; its profiles apparently describe better the sharp gradient between the troposphere and the ozone bulk in the ITCZ. The results with TOMS can be further improved by using an improved TOMS climatology, such as by Lamsal (2004), which solves some discontinuity issues over latitude and includes a more realistic standard deviation, allowing for larger variability in the troposphere. By implementing these algorithm improvements, more valid profile retrievals can be achieved in less computational time. For February 1998, non-convergence was brought down from 11.4% to 5.0% using the FK climatology, or even further to 4.3% when using the TOMS version 8 climatology with a fixed relative error of 20%. The computational time is dominated by the number of iteration steps, which in February on average dropped from 5.15 to 4.99 for FK, and down to 4.76 for TOMS.

Wang, P., Stammes, P., and Fournier, N.: Test and first validation of FRESCO+, Proceedings of SPIE, volume 6362, Remote Sensing 2006, Stockholm, Sweden, 11–16 September 2006.

Zehner, C., Casadio, S., di Sarra, A., and Putz, E.: Temperature Independent Differential Absorption Spectroscopy (TIDAS) and Simplified Atmospheric Air Mass Factor (SAMF) Techniques for the Measurement of Total Ozone Content using GOME Data, Proceeding of ...

Fig. 2. (a) Mean number of iteration steps for all retrievals of February 1998. Non-converging retrievals are broken off after 10 iteration steps. (b) Fraction of not-converged retrievals. Note the distinct areas of non-convergence.
Fig. 5. Retrieval results for February 1998 with the desert workaround. (a) shows the improvement of the retrieval results above deserts when compared with Fig. 4a; (b) shows the selectivity of the workaround by mapping for each grid cell the fraction of retrievals to which it has been applied. The convergence problem due to the dust outbreak flowing out from West Africa towards South America is also solved; as a comparison, (c) shows the mean aerosol optical depth (AOD) at 500 nm for the same month (taken from http://www.temis.nl).

Fig. 7. Validation of 95 collocations (within 2 h and 400 km) of GOME retrievals with microwave measurements in Bern, 1998. The left panel shows the retrieval results done with Bass-Pauer ozone cross-sections, the right panel the results for Brion cross-sections. For each atmospheric layer the relative difference of the retrievals with the ground-based measurements is given, smoothed with the averaging kernel. The thick line connects the layer averages; the thin lines indicate the ±1σ standard deviation.

Table 1. Overview of the retrieval settings and input data.

Table 2. Convergence statistics and mean DFS for different climatologies.

Retrieval statistics for February 1998 with the settings of Table 1: 11.4% of the retrievals do not converge after 10 iteration steps.
8,215.2
2010-03-25T00:00:00.000
[ "Environmental Science", "Physics" ]
Open data and algorithms for open science in AI-driven molecular informatics

Recent years have seen a sharp increase in the development of deep learning and artificial intelligence-based molecular informatics. There has been a growing interest in applying deep learning to several subfields, including the digital transformation of synthetic chemistry, extraction of chemical information from the scientific literature, and AI in natural product-based drug discovery. The application of AI to molecular informatics is still constrained by the fact that most of the data used for training and testing deep learning models are not available as FAIR and open data. As open science practices continue to grow in popularity, initiatives which support FAIR and open data as well as open-source software have emerged. It is becoming increasingly important for researchers in the field of molecular informatics to embrace open science and to submit data and software to open repositories. With the advent of open-source deep learning frameworks and cloud computing platforms, academic researchers are now able to deploy and test their own deep learning models with ease. With the development of new and faster hardware for deep learning and the increasing number of initiatives towards digital research data management infrastructures, as well as a culture promoting open data, open source, and open science, AI-driven molecular informatics will continue to grow. This review examines the current state of open data and open algorithms in molecular informatics, as well as ways in which they could be improved in the future.

Graphical Abstract

Introduction

Considerable improvements in artificial intelligence (AI) research through the introduction of deep neural networks promise to transform society [1–4] and the way research is conducted [5,6]. However, in most areas of molecular informatics, the amount of training data available is insufficient for the use of today's most powerful deep neural network architectures, which demonstrate superior performance only when trained with large amounts of data [7]. In addition, a thorough assessment of a model's true predictive performance in practice is a rare exception (e.g. the Critical Assessment of Protein Structure Prediction (CASP) [8]). Because of this lack of accessible experimental data [9,10], machine learning predictions in chemistry are generally too error-prone to realize the potential of the new methods at this time. This necessitates a change in the way chemists publish their data and the type of data published [11,12]. The call for open data, open source, and open science (ODOSOS) in chemistry is not new [13,14], but with the advent of more powerful data-driven algorithms, it has never been more important.
Journals and funders demanding the deposition of research data and the necessary establishment of suitable research data infrastructures will inevitably alleviate the data shortage problem in the future [15,16]. The German government, for example, has recently decided to implement and long-term fund a national research data infrastructure (Nationale Forschungsdateninfrastruktur, NFDI) [17] with 30 consortia in all areas of science, collaboratively developing open research data management (RDM) e-infrastructures, coordinated by an umbrella process and a joint directorate. One of those consortia is NFDI4Chem, which is building an RDM e-infrastructure for chemistry that follows the FAIR data principles [18] to make chemical data findable, accessible, interoperable, and reusable [19,20]. One flagship project of NFDI4Chem is nmrXiv, an open and FAIR repository and analysis platform for NMR spectroscopy data [21].

In recent years, advances in artificial intelligence and data-driven applications in molecular informatics have provided a glimpse into the magnitude of future accomplishments, which have made open data a necessity for machine learning algorithms. Here, we attempt to present some of the major milestones of the past years and discuss obstacles that are yet to be overcome to enable similar AI-driven progress in (nearly) every area of chemistry.

The importance of openly available resources and data

One cause of the dissatisfying data shortage situation has been the lack of a culture of data deposition and sharing in chemistry in the past, although at least from the early 1990s onwards, with the advent of the internet, widespread data deposition and sharing would have been possible. There have been notable exceptions, such as the crystallography community, which developed data deposition cultures even earlier. Both small-molecule and biomacromolecule structures have been and are being deposited in the Protein Data Bank (PDB) [22,23] and the Cambridge Crystallographic Database (CCD) [24]. Of particular note, the open PDB in combination with the openly available protein sequence information (for multiple sequence alignments) formed the basis for the outstanding success of the AlphaFold protein 3D structure prediction system [5]. Similarly, open databases such as PubChem [25], ChEMBL [26], ChEBI [27], Drugbank [28], the Human Metabolome Database (HMDB) [29], the Collection of Open Natural Products (COCONUT) [30], the Natural Products Atlas [31], the Natural Products Magnetic Resonance Database [32], and ZINC [33] fundamentally broaden the research opportunities [34]. The PubChem database is used by millions of users every month [35]. An example of the use of the referenced databases is the creation of a classifier that determines whether a natural product (NP) originates from fungi, plants, or bacteria based on its chemical structure, with data obtained from the COCONUT database [36]. The ZINC database has recently been used for the in silico determination of drug candidates that inhibit the main protease of SARS-CoV-2 [37].
Another crucial aspect is the availability of open software libraries to handle and process chemical information, like the Chemistry Development Kit (CDK) [38], Indigo [39], RDKit [40], or OpenBabel [41], as well as the recently published Python-based Informatics Kit for Analysing Chemical Units (PIKAChU) [42]. Without these open-source projects, the research community would lack basic tools for programmatically reading, modifying, and processing chemical information. Accordingly, they are fundamental for every researcher in the field of molecular informatics.

Molecular string representations, such as DeepSMILES [43] and SELFIES [44], enable processing chemical structures using models like transformers that are designed to process sequential data. Recently, a study investigated the performance of transformers on different tasks using SMILES, DeepSMILES, and SELFIES. The number of invalid chemical structures returned could be decreased by using DeepSMILES and especially SELFIES compared to SMILES, although the overall best performance was achieved using SMILES [45].

Without open libraries such as Tensorflow [46] and Pytorch [47] for the implementation and training of neural networks, as well as the ubiquitous availability of Graphics Processing Units (GPUs) and Tensor Processing Units (TPUs) in cloud environments [48], the big leaps in molecular AI research would not have been possible.

An approach to the protein folding problem - AlphaFold

The problem of protein folding is considered one of the fundamental challenges of molecular biology, because the large number of degrees of freedom of bonds and atoms in a protein leads to a combinatorial explosion in the number of possible low-energy arrangements [49]. In 2020, the DeepMind team announced a widely recognised breakthrough in the prediction of spatial protein 3D structures from their amino acid sequences with their deep learning-based system AlphaFold [5]. The system participated in the 13th and 14th Critical Assessment of Protein Structure Prediction (CASP) competitions [8], outperforming all competitors. Since then, it has been made openly available and used to fill the open AlphaFold Protein Structure Database [50], which contains more than 200 million predicted protein 3D structures, covering nearly every known protein on Earth [51]. Within a short period of time, the structures of 98.5% of the human proteome have been predicted using AlphaFold, while the previous decades of experimental research yielded 17% [52]. The system was trained on structural data openly deposited in the Protein Data Bank [22,23], which was founded and announced in 1971 [53]. The success story of AlphaFold illustrates what is possible today when researchers are able to access the data that scientists have produced over the course of 50 years.
It is important to mention that challenges like the prediction of the relative positions of protein domains and their changes when an external stimulus is applied remain partially unsolved. Additionally, the transition from disordered to ordered domain states cannot be elucidated using AlphaFold, and it is limited to structures with less than 2700 amino acids [54]. Nevertheless, the high impact of its accurate protein structure predictions is indisputable [55]. For example, the predicted structural information about nucleoporins has been combined with cryo-electron tomography (cryo-ET) to generate a model that precisely explains 90% of the scaffold of the human nuclear pore complex (NPC) [56]. Another example is the identification of tens of thousands of unknown potential binding sites for iron-sulfur clusters and zinc ions in more than 360,000 proteins [57].

Digital transformation of synthetic chemistry

Similar to other fields, the foundation for successful machine learning applications in synthetic organic chemistry is the availability of extensive experimental data [58]. Recently, Strieth-Kalthoff et al. demonstrated the benefit that emerges from the usage of real experimental data for machine learning-based chemical yield predictions [12], while the prediction of reaction outcomes and yields remains a challenge in general [59]. Nonetheless, there have been impressive developments using attention-based deep learning methods to explore the chemical reaction space [60]. Schwaller et al. trained a transformer to predict chemical reaction outcomes with state-of-the-art results [61]. The resulting model, which is referred to as the molecular transformer, was then used in combination with hypergraph exploration to automatically plan retrosynthesis routes [62]. Since then, the molecular transformer has been extended to predict the products of enzymatic reactions [63]. Based on the aforementioned retrosynthesis planning system, Probst et al. have published a biocatalysed synthesis planning system [64].

Schwaller et al. have also shown that the attention matrix weights of transformers that have been trained on unlabelled chemical reaction data can be used to determine accurate atom mappings [65]. Additionally, they demonstrated that attention-based models are highly suitable for the classification of chemical reactions [66]. Similar model architectures were successfully used to generate specific synthesis instructions [67] and to determine the yield of a given chemical reaction formula [68]. Andronov et al. successfully demonstrated the prediction of reagents based on given reaction SMILES strings using transformers. They were then able to use the reagent prediction model to fill in missing reagents in incomplete reaction data from US patents, leading to an improved state-of-the-art model [61] for the prediction of reaction products [69]. Recently, Rohrbach et al. demonstrated the translation of synthesis protocols in the literature into a standardized chemical language, which could then be executed by their automated synthesis system [70].
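As a concrete illustration of the reaction SMILES format that several of these models consume, here is a minimal sketch using the open-source RDKit; the example reaction is our own and not taken from any of the cited datasets:

```python
from rdkit.Chem import AllChem

# Fischer esterification of acetic acid with ethanol, as reaction SMILES
rxn_smiles = "CC(=O)O.CCO>>CC(=O)OCC"
rxn = AllChem.ReactionFromSmarts(rxn_smiles, useSmiles=True)

print("reactants:", rxn.GetNumReactantTemplates())  # 2
print("products:", rxn.GetNumProductTemplates())    # 1
```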
Again, the described advances are exemplary cases of the synergy of deep learning-based models and the availability of training data. Datasets extracted from US patents [66,71–74], the scientific literature [75], and high-throughput experiments (HTE) [76] are available [60]. Recently, the Open Reaction Database (ORD) has been launched as a platform to replace unstructured reaction data in the supporting information of publications [77]. If it is accepted by the research community, the ORD may become a part of the solution to the problems caused by the aforementioned lack of data and report bias [11,12]. Providing structured data in standardized formats may become a key step towards the digital transformation of synthetic chemistry.

Extraction of chemical information from the scientific literature

Besides enforcing FAIR data publication standards today and in the near future, it is important to tackle the damage that has already been done by publishing chemical data almost exclusively in a human-readable form with unstructured text and images in the past decades. The advances in the fields of natural language processing (NLP) [78–80] and computer vision (CV) [81–83] have made a new generation of chemical literature mining tools possible. These can be considered AI-driven solutions that enable further AI-driven advances by making concealed data accessible in structured, machine-readable formats.

The field of optical chemical structure recognition (OCSR) deals with the translation of images of chemical structures, as they are published in the scientific literature, into machine-readable representations of the underlying molecular graph [84,85]. In the past two years, a variety of deep learning-based OCSR methods [86–89] have been published, of which DECIMER Image-Transformer [90], Img2Mol [91] and SwinOCSR [92] provide openly available source code and trained models. For the segmentation of chemical structure images from whole pages, the open-source tool DECIMER Segmentation can be used [93]. With the publication of the open-source depiction generation tool RanDepict, efforts have been made to standardize and diversify the training data for deep learning-based OCSR tools [94]. The newest version of DECIMER was trained on more than 400 million images using the latest Tensor Processing Units [95] available on the Google cloud platform. Currently, DECIMER performs with an accuracy rate of above 90% and is regarded as an important point of reference for future work [85]. Without open databases like PubChem, where one can download over 100 million chemical structures for free, this would not have been possible.

Since its original release in 2016, the chemical literature mining toolkit ChemDataExtractor [96] has been continuously developed [97,98]. The highly adaptable toolkit uses user-defined models of the information to be extracted in a pipeline with readers for different publisher formats and a system for interdependency resolution, with a set of parsers and a sophisticated chemical named entity recognition system [99], to extract chemical information in a structured data format [97]. In the past years, ChemDataExtractor has been extensively used to automatically generate databases about refraction indices and dielectric constants [100], battery material properties [101], properties of semiconductors for building solar cells [102], magnetic properties [103], as well as UV/Vis spectra [104].
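For the entity-extraction step described above, a minimal usage sketch with ChemDataExtractor is shown below; the `Document` class and `.cems` accessor follow the toolkit's documented interface, but treat the exact calls, and the toy input text, as assumptions rather than a verified pipeline.

```python
from chemdataextractor import Document

# A toy paragraph standing in for a parsed publication
doc = Document(
    "The reaction of 2,4,6-trinitrotoluene with sodium hydroxide "
    "was monitored by UV/Vis spectroscopy."
)

# Chemical entity mentions recognised by the named entity recognition system
for span in doc.cems:
    print(span.text)
```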
In addition to the technical obstacles, scientific publishers substantially hinder literature mining by hiding publications behind paywalls and limiting the number of publications that can be downloaded and used even if a subscription is available. Some publishers like Elsevier offer markup versions of their publications for text mining purposes to academic researchers [105], but there is a long way to go to truly make all published chemical information available. In 2018, an international group of research funders announced the initiative Plan S, which requires scientists who benefit from their funding to publish in open-access journals [106]. Recently, the US government announced that it will require all publicly funded research to be openly accessible from 2026 on [107]. With RDM e-infrastructures being established as the mandatory scientific data publication standard, the kind of literature mining methods described herein will become obsolete in the future. For now, they are indispensable for artificially intelligent data-driven applications.

AI in natural product-based drug discovery

The field of drug discovery has shifted towards implementing approaches based on the analysis of large amounts of data and deep learning [108]. As a result of the growing demand for efficient new drugs, the field has experienced rapid growth in the last few years. NP are attractive to drug developers due to their availability and their potential affinity to protein drug targets [109,110].

There have been significant advances in various areas of the field, such as the prediction of the biochemical effects of NP based on their molecular structure [111], genome mining for the discovery of bioactive compounds [112], the mining of mass spectrometry-based metabolomics data [113], and integrative approaches that combine metabolomics and genomics data [114].

The initial hope that large-scale data analysis in the different omics-related research fields would boost the drug discovery rate has not yet materialised [115], but the methods are progressing continuously. The open access to databases and repositories such as MetaboLights [116], the HMDB [29], the Metabolomics Workbench [117], and METASPACE [118] is crucial for the identification of metabolites and NP [112]. In 2021, the Paired Omics Data Platform (PODP) was launched as a community-driven platform that provides linked metabolome and genome data according to the FAIR principles [119].

NP-based drug discovery has greatly benefited from models developed for NLP [120]. For example, in 2021, Huang et al. published MolTrans, a state-of-the-art deep learning-based framework for the in silico prediction of Drug-Protein Interactions (DPI) [121]. In the following year, Wang et al. presented their structure-aware multimodal deep DPI prediction model STAMP-DPI, which outperforms MolTrans. The tool has been published along with a large, high-quality training and benchmarking dataset [122]. The adaptation of sequence models like the transformer [78] for AI-based drug design requires large amounts of well-curated, high-quality data.
The recent development in the field of deep generative models helps researchers generate molecules with desired properties [123], but a model that can generalise well and generate molecules with desirable properties requires a large amount of training data. When dealing with artificially generated structures, it is also necessary to consider their synthetic accessibility. To successfully use deep learning on published NP structures, well-curated data is essential. Published data resources are often incomplete, inaccessible, or no longer available [124], which makes available resources like the Natural Products Atlas [31], LOTUS [125], and COCONUT [30] even more important.

The development of deep learning-based models has assisted the advancement of drug discovery overall, with more advancements being made in the development of models, and increasing access to open data and open databases is helping this field grow. We hope that the research community will continue to actively contribute to openly available data sources to enable further progress in the field.

Conclusions

The developments of the past years demonstrate the potential of data-driven machine learning applications in the field of molecular informatics in an impressive manner [5,65,70]. An obvious requirement to benefit from this development is the availability of open structured experimental data [11,12]. The integration of open data infrastructures will enable AI to be used in nearly every field of chemistry. The application of deep learning methodologies and the sharing of code and data in the field of chemistry are still in their early stages and require more community standards to be developed. Many of the models are still being trained from scratch using in-house servers and GPUs, which is a time-consuming and restrictive process. The rapid growth of the field will be enabled by the sharing of already-trained models and curated data with the public. When sharing code or data, high quality and data standards must be maintained [126]. Using public cloud infrastructures will readily allow researchers to take advantage of the latest developments in hardware and software, which will lead to faster growth and a reduction in energy consumption. There are several initiatives working continuously to implement open data, open source, and open science in their individual research areas [13,14,17,18,20,21,77,106,107,127,128]. Fueled by the availability of more and more open research data, AI-powered molecular informatics will be a key driver of the digital transformation of chemistry in the coming years.
• AI: Artificial Intelligence
• CASP: Critical Assessment of Protein Structure Prediction
• CCD: Cambridge Crystallographic Database
• CDK: Chemistry Development Kit
• cryo-ET: cryo-Electron Tomography
• COCONUT: COlleCtion of Open Natural Products
• CV: Computer Vision
• DECIMER: Deep lEarning for Chemical ImagE Recognition
• DPI: Drug-Protein Interaction
• FAIR: Findable, Accessible, Interoperable, and Reusable
• GPU: Graphics Processing Unit
• HTE: High-Throughput Experiments
• HMDB: Human Metabolome Database
• NFDI: National Research Data Infrastructure
• NFDI4Chem: National Research Data Infrastructure for Chemistry
• NLP: Natural Language Processing
• NP: Natural Products
• NP-MRD: Natural Products Magnetic Resonance Database
• NPC: Nuclear Pore Complex
• OCSR: Optical Chemical Structure Recognition
• ODOSOS: Open Data, Open Source and Open Science
• ORD: Open Reaction Database
• PDB: Protein Data Bank
• PODP: Paired Omics Data Platform
• PIKAChU: Python-based Informatics Kit for Analysing CHemical Units
• RDM: Research Data Management
• TPU: Tensor Processing Unit

Declarations
4,408.6
2023-02-17T00:00:00.000
[ "Computer Science", "Chemistry" ]
Bounds for Coding Theory over Rings

Coding theory where the alphabet is identified with the elements of a ring or a module has become an important research topic over the last 30 years. It has been well established that, with the generalization of the algebraic structure to rings, there is a need to also generalize the underlying metric beyond the usual Hamming weight used in traditional coding theory over finite fields. This paper introduces a generalization of the weight introduced by Shi, Wu and Krotov, called the overweight. This weight can be seen as a generalization of the Lee weight on the integers modulo 4 and as a generalization of Krotov's weight over the integers modulo 2^s for any positive integer s. For this weight, we provide a number of well-known bounds, including a Singleton bound, a Plotkin bound, a sphere-packing bound and a Gilbert-Varshamov bound. In addition to the overweight, we also study a well-known metric on finite rings, namely the homogeneous metric, which also extends the Lee metric over the integers modulo 4 and is thus heavily connected to the overweight. We provide a bound that has been missing in the literature for the homogeneous metric, namely the Johnson bound. To prove this bound, we use an upper estimate on the sum of the distances of all distinct codewords that depends only on the length, the average weight and the maximum weight of a codeword. An effective such bound is not known for the overweight.

Introduction

Coding-theoretic experience has shown that considering linear codes over finite fields often yields significant complexity advantages over the nonlinear counterparts, particularly when it comes to complex tasks such as encoding and decoding. On the other side, it was recognized early [1,2] that the class of binary block codes contains excellent code families which are not linear (Preparata, Kerdock, Goethals and Goethals-Delsarte codes). For a long time, it could not be explained why these families exhibit formal duality properties in terms of their distance enumerators that otherwise occur only for linear codes and their duals. A true breakthrough in the understanding of this behavior came in the early 1990s when, after preceding work by Nechaev [3], the paper by Hammons et al. [4] discovered that these families allow a representation in terms of Z_4-linear codes. A crucial condition for this ring-theoretic representation was that Z_4 was equipped with an alternative metric, the Lee weight, rather than with the traditional Hamming weight, which only distinguishes whether an element is zero or non-zero. The Lee weight is finer, assigning 2 a higher weight than the other non-zero elements of this ring. The traditional setting of linear coding theory (finite fields endowed with the Hamming metric) is thus actually too narrow, which suggests expanding the theory in at least two directions. On the algebraic side, the next natural algebraic structure serving as an alphabet for linear coding is that of finite rings (and modules). On the metric side, the appropriateness of the Lee weight for Z_4-linear coding suggests that the distance function for a generalized coding theory requires generalization as well. Since these ground-breaking observations, an entire discipline has arisen within algebraic coding theory.
A considerable community of scholars has been developing results in various directions, among them code duality, weight enumeration, code equivalence, weight functions, homogeneous weights, existence bounds, code optimality and decoding schemes, to mention only a few. The paper at hand aims at providing a further contribution to this discipline by introducing the overweight on a finite ring. This weight is a generalization of the Lee weight over Z_4, as well as of the weight introduced in [5] by Krotov over Z_{2^s} for any positive integer s, which was further generalized to Z_{p^k} in [6]. We study the relations of this new weight to other well-known weights over rings and state several properties of the overweight, such as its extremal property. We also develop a number of standard existence bounds, such as a Singleton bound, a sphere-packing bound, a Plotkin bound and a version of the (assertive) Gilbert-Varshamov bound. In the final part of this article, we derive a general Johnson bound for the homogeneous weight on a finite Frobenius ring. This result is important, as it is closely connected to list decoding capabilities.

Preliminaries

Throughout this paper, we will consider R to be a finite ring with identity, denoted by 1. If R is a finite ring, we denote by R^× its group of invertible elements, also known as units. Let us recall some preliminaries in coding theory, where we focus on ring-linear coding theory. For a prime power q, we denote by F_q the finite field with q elements and, for a positive integer m, we denote by Z_m the ring of integers modulo m. In traditional coding theory, we consider a linear code to be a subspace of a vector space over a finite field.

Definition 1. Let q be a prime power, and let k ≤ n be non-negative integers. A linear subspace C of F_q^n of dimension k is called a linear [n, k] code.

We define a weight in a general way.

Definition 2. Let R be a finite ring. A real-valued function w on R is called a weight if it is a non-negative function that maps 0 to 0.

It is natural to identify w with its additive extension to R^n, and so we will always write w(x) = ∑_{i=1}^n w(x_i) for all x ∈ R^n. Every weight w on R induces a distance d : R × R → R by d(x, y) = w(x − y). Again, we will identify d with its natural additive extension to R^n × R^n. If the weight additionally is positive definite, symmetric and satisfies the triangular inequality, that is,

1. w(x) ≥ 0 for all x ∈ R, and w(x) = 0 if and only if x = 0,
2. w(x) = w(−x) for all x ∈ R,
3. w(x + y) ≤ w(x) + w(y) for all x, y ∈ R,

then the induced distance inherits these properties, i.e.,

1. d(x, y) ≥ 0 for all x, y ∈ R, and d(x, y) = 0 if and only if x = y,
2. d(x, y) = d(y, x) for all x, y ∈ R,
3. d(x, y) ≤ d(x, z) + d(z, y) for all x, y, z ∈ R.

The most prominent and best-studied weight in traditional coding theory is the Hamming weight.

Definition 3. Let n ∈ N. The Hamming weight of a vector x ∈ R^n is defined as the size of its support,

w_H(x) = |{ i ∈ {1, . . . , n} | x_i ≠ 0 }|,

and the Hamming distance between x and y ∈ R^n is given by

d_H(x, y) = w_H(x − y).

The minimum Hamming distance of a code is then defined as the minimum distance between two different codewords,

d_H(C) = min{ d_H(x, y) | x, y ∈ C, x ≠ y }.

Note that the concept of minimum distance can be applied for any underlying distance d. In the paper at hand, we focus on a more general setting where the ambient space is a module over a finite ring.

Definition 4. Let n ∈ N and let R be a finite ring. A submodule C of R^n, regarded as a left R-module, of size M = |C| is called a left R-linear (n, M) code.

The most studied ambient space for ring-linear coding theory is the integers modulo 4, denoted by Z_4, endowed with the Lee metric.

Definition 5.
For x ∈ Z_m, its Lee weight is defined as

w_L(x) = min{ x, m − x },

where x is taken as its representative in {0, 1, . . . , m − 1}. One of the most prominent generalizations of the Lee weight over Z_4 is the homogeneous weight.

Definition 6. Let R be a Frobenius ring. A weight w : R → R is called (left) homogeneous of average value γ > 0 if w(0) = 0 and the following conditions hold:
(i) For all x, y with Rx = Ry, we have that w(x) = w(y).
(ii) For every non-zero left ideal I of R, it holds that

(1/|I|) ∑_{x ∈ I} w(x) = γ.

We will denote the homogeneous weight by wt. The homogeneous weight was first introduced by Constantinescu and Heise in [7] in the context of coding over integer residue rings. It was later generalized by Greferath and Schmidt [8] to arbitrary finite rings, where the ideal I in Definition 6 was assumed to be a principal ideal. In its original form, however, the homogeneous weight only exists on finite Frobenius rings. It can be shown that a left homogeneous weight is at the same time right homogeneous, and for this reason we will omit the reference to any side in the sequel. In [9], Honold and Nechaev finally generalized the notion of homogeneous weight to some finite modules, called weighted modules, over a (not necessarily commutative) ring R with identity. Since we will establish a Plotkin bound for a new weight, let us recall here the Plotkin bound over finite fields equipped with the Hamming metric.

Theorem 1 (Plotkin bound). Let C be an (n, M) block code over F_q with minimum Hamming distance d. If d > ((q − 1)/q) n, then

M ≤ d / (d − ((q − 1)/q) n).

For the homogeneous weight, the following Plotkin bound was established in [10].

Theorem 2 (Plotkin bound for homogeneous weights, [10]). Let wt be a homogeneous weight of average value γ on R, and let C be an (n, M) block code over R with minimum homogeneous distance d. If γn < d, then

M ≤ d / (d − γn).

Overweight

Just as the Hamming weight over the binary field can be generalized to larger ambient spaces in different ways, resulting in different metrics such as the Hamming weight over F_q or the Lee weight over Z_{p^s}, the Lee weight over Z_4 can also be generalized in different ways. For example, the weight defined in [5] over Z_{2m} for any positive integer m is a possible generalization, but the most prominent generalization is the homogeneous weight (see for example [10]). In this section, we introduce a new generalization, called the overweight. This weight shows some interesting properties and relations to the homogeneous weight, and it can additionally be seen as a generalization of the weight defined in [5] over Z_{2^s} for any positive integer s and of the weight defined in [6] over Z_{p^s}.

Definition 7. Let R be a finite ring. The overweight on R is defined as

W(x) = 0 if x = 0, W(x) = 1 if x ∈ R^×, and W(x) = 2 otherwise.

We also denote by W its additive extension to R^n, given by W(x) = ∑_{i=1}^n W(x_i). Let us call the distance induced by the overweight the overweight distance and denote it by D, i.e., D(x, y) = W(x − y).

The motivation for introducing this new weight is twofold: on the one hand, it is theoretically interesting to explore a new generalization of the Lee weight over Z_4 and its connections to other known weights over rings. On the other hand, the overweight would also be perfectly suitable for a channel where unit errors are more likely. Note that the overweight is designed to satisfy the following criteria: it is positive definite, symmetric, satisfies the triangular inequality and distinguishes between units and non-zero non-units.
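A small sketch of the overweight over Z_m, where the units are exactly the residues coprime to m; the function names are ours, chosen for illustration:

```python
from math import gcd

def overweight_zm(x, m):
    """Overweight of a single residue x in Z_m:
    0 for zero, 1 for units (gcd(x, m) == 1), 2 otherwise."""
    x %= m
    if x == 0:
        return 0
    return 1 if gcd(x, m) == 1 else 2

def overweight_vector(xs, m):
    """Additive extension to Z_m^n."""
    return sum(overweight_zm(x, m) for x in xs)

# Example over Z_6: the units are 1 and 5
print([overweight_zm(x, 6) for x in range(6)])  # [0, 1, 2, 2, 2, 1]
```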
Furthermore, it is extremal in the sense that, on a big family of rings, any increase of the weight of non-zero non-units would violate the triangular inequality, hence the name overweight. We will now study this extremal property in more detail. We can consider weights with values in {0, 1, α}, for some α > 0, without fixing the subsets of R where these values are attained. Thus, we are considering the generic weight function

f(x) = 0 if x = 0, f(x) = 1 if x ∈ A_1, and f(x) = α if x ∈ A_2,

where A_1 and A_2 partition R \ {0}. Such a weight is always positive definite. In addition, the weight is symmetric if and only if A_1 and A_2 contain all additive inverses of their elements. Let us now consider the triangular inequality: if there exist x, y ∈ A_1 such that x + y ∈ A_2, then we must have

α = f(x + y) ≤ f(x) + f(y) = 2.

Thus, in order for f to be an extremal weight, one chooses α = 2. The overweight is a special case of such a weight function f with the choice A_1 = R^× and A_2 = R \ (R^× ∪ {0}); the condition that there exist units x, y with x + y a non-zero non-unit is satisfied for many rings, for example for rings with a non-trivial Jacobson radical.

Relations to Other Weights

Clearly, the homogeneous weight and the overweight coincide with the Lee weight on Z_4, with the Hamming metric on finite fields F_q, and finally with the weight of [6] on Z_{p^s}.

Proposition 1. The overweight over finite chain rings gives an upper bound on the normalized homogeneous weight.

Proof. Over a finite chain ring with socle S and residue field size q, the normalized homogeneous weight is given by

wt(x) = 0 if x = 0, wt(x) = q/(q − 1) if x ∈ S \ {0}, and wt(x) = 1 otherwise.

Since q/(q − 1) ≤ 2 and every non-zero element of the socle is a non-unit, this implies the result.

In [11], Bachoc defines the following weight on F_p-algebras A, with units A^×:

w_B(x) = 0 if x = 0, w_B(x) = 1 if x ∈ A^×, and w_B(x) = p otherwise.

This is in the same spirit as the overweight. The weight of Bachoc is, however, only assumed to be positive definite. We note that, whenever we have an F_2-algebra, the two weights coincide. The overweight can thus also be seen as a generalization of Bachoc's weight to a general finite ring. Let us illustrate this connection with some examples: we consider the ring M_2(F_p) of 2 × 2 matrices over F_p and the ring F_p[x]/(x^2). In both cases, the Bachoc weight only coincides with the homogeneous weight and the overweight in the case p = 2.

Finally, in [5], Krotov defines the following weight over Z_{2m}, for any positive integer m:

w_K(x) = 0 if x = 0, w_K(x) = 1 if x is odd, and w_K(x) = 2 if x is even and x ≠ 0.

Clearly, this is a further generalization of the Lee weight over Z_4 and thus coincides there with the homogeneous weight and the overweight. However, even more is true: the weight of Krotov and the overweight coincide over Z_{2^s}, for any positive integer s. Thus, the overweight may be considered a generalization of Krotov's weight over Z_{2^s} for any positive integer s. Let us give some examples to illustrate the differences between the above-mentioned weights.

Example 1. In the following tables, w_H denotes the Hamming weight, wt the normalized homogeneous weight, w_L the Lee weight, w_K Krotov's weight, w_B Bachoc's weight and finally W the overweight. Let us consider two easy but pathological cases, namely Z_6 for Table 1 and Z_2 × Z_2 for Table 2.

Finally, another interesting connection to the Hamming weight arises by considering the following linear injective isometry.

Proposition 2. The map

Φ : (F_2[x]/(x^2), W) → (F_2^2, w_H), a + bx ↦ (b, a + b),

is a linear isometry. Recall that, over F_2[x]/(x^2), the overweight coincides with the weight of Bachoc and the homogeneous weight.
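A quick computational check of the isometry just stated, enumerating the four elements a + bx of F_2[x]/(x^2); the encoding of ring elements as bit pairs is our own bookkeeping:

```python
def overweight_f2x(a, b):
    """Overweight on F_2[x]/(x^2): element a + b*x with a, b in {0, 1}.
    The units are exactly the elements with a == 1."""
    if a == 0 and b == 0:
        return 0
    return 1 if a == 1 else 2   # a == 0, b == 1 gives the non-unit x

def gray(a, b):
    """The map a + b*x  ->  (b, a + b) over F_2."""
    return (b, (a + b) % 2)

for a in (0, 1):
    for b in (0, 1):
        assert overweight_f2x(a, b) == sum(gray(a, b))
print("W(a + bx) equals the Hamming weight of (b, a + b) for all elements")
```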
Bounds for the Overweight

In this section, we develop several bounds for the overweight, namely a Singleton bound, a sphere-packing bound, a Gilbert-Varshamov bound and a Plotkin bound. For this, let us first define the minimum overweight distance of a code.

Definition 8. Let C ⊆ R^n be a code. The minimum overweight distance of C is denoted by D(C) and defined as
$$D(C) = \min\{ D(x, y) \mid x, y \in C,\; x \neq y \}.$$

A Singleton Bound

The Singleton bound usually follows from a puncturing argument, which is possible for the overweight, but it gives the same result as applying the following observation.

Remark 1. For all x ∈ R^n, we have that
$$w_H(x) \le W(x) \le 2\, w_H(x),$$
where w_H denotes the Hamming weight.

Hence, using the Singleton bound for the Hamming metric directly gives a Singleton bound for the overweight. However, if we define the rank of a linear code C, denoted by rk(C), to be the minimal number of generators of C, then the following bound is known for principal ideal rings [12,13]:
$$d_H(C) \le n - \mathrm{rk}(C) + 1.$$
Codes achieving this bound are called Maximum Distance with respect to Rank (MDR) codes, in order to differentiate them from MDS codes. This is a sharper bound than the usual Singleton bound, since for non-free codes we have rk(C) > log_{|R|}(M), where M = |C|. In the case of linear codes, the rank thus also leads to a sharper Singleton-like bound for the overweight.

Proposition 3. Let R be a principal ideal ring. Let C ⊆ R^n be a linear code of rank rk(C) and minimum overweight distance d. Then, d ≤ 2(n − rk(C) + 1).

A Sphere-Packing Bound

The sphere-packing bound as well as the Gilbert-Varshamov bound are generic bounds, and we are able to provide them for the overweight in a simple form involving the volume of the balls in the underlying metric space. We begin by defining balls with respect to the overweight distance.

Definition 9. Let x ∈ R^n and e ∈ ℝ_{≥0}. The ball B_{e,D}(x) of radius e centered at x is defined as
$$B_{e,D}(x) = \{ y \in R^n \mid D(x, y) \le e \}.$$
Note that D(x + z, y + z) = D(x, y) for all x, y, z ∈ R^n, so the volume of a ball does not depend on its center. Moreover, setting u := |R^×| and v := |R| − 1 − u, we have the generating function f_W(z) = 1 + uz + vz² for this weight function, so that the generating function for W on R^n takes the form
$$f_W(z)^n = \sum_{k_0 + k_u + k_v = n} \binom{n}{k_0,\, k_u,\, k_v}\, u^{k_u} v^{k_v}\, z^{k_u + 2 k_v},$$
where we have set k = k_u and ℓ = k_v, and where the condition k_0 + k_u + k_v = n is transformed into 0 ≤ k ≤ n, 0 ≤ ℓ ≤ n − k. Now, setting t = k + 2ℓ, we obtain the simplified expression for the generating function
$$f_W(z)^n = \sum_{t=0}^{2n} \bigg( \sum_{\substack{k + 2\ell = t \\ 0 \le k \le n,\ 0 \le \ell \le n-k}} \binom{n}{k} \binom{n-k}{\ell}\, u^{k} v^{\ell} \bigg)\, z^{t}.$$

Lemma 2. The foregoing implies that the ball of radius e (centered at 0) has volume exactly
$$|B_{e,D}(0)| = \sum_{t=0}^{\lfloor e \rfloor}\; \sum_{\substack{k + 2\ell = t \\ 0 \le k \le n,\ 0 \le \ell \le n-k}} \binom{n}{k} \binom{n-k}{\ell}\, u^{k} v^{\ell}. \qquad (1)$$

We have thus provided an explicit formula for the cardinality of balls in R^n with respect to the overweight distance. We now obtain the sphere-packing bound for the overweight distance by combining the previous results. As before, R is a finite ring and u = |R^×|, whereas v = |R| − 1 − u represents the number of non-zero non-units.

Corollary 1. Let C ⊆ R^n be a (not necessarily linear) code of minimum overweight distance d. Then,
$$|C| \le \frac{|R|^n}{|B_{\lfloor (d-1)/2 \rfloor,\, D}(0)|}.$$

If the minimum distance is even and R is a finite local ring with maximal ideal J, this bound can be adapted as follows.

Corollary 2. Let R be a local ring with maximal ideal J, q = |R/J| and C ⊆ R^{n+1} be a (not necessarily linear) code of length n + 1 and minimum overweight distance d = 2e + 2. Then,
$$|C| \le \frac{|R|^{n+1}}{q \cdot |B_{e,D}(0)|},$$
where B_{e,D}(0) is the overweight ball of radius e in R^n, and its volume is given in Equation (1).

Proof. Partition R into sets S_1, …, S_{|J|}, each of which contains exactly one element from every coset of J. Notice that the sets S_m form a partition of R and that all elements of S_m have mutual overweight distance 1, since in a local ring elements of distinct cosets of J differ by a unit. Thus, given r ∈ R, we denote by S(r) the unique set S_m that contains r. Furthermore, let π : R^{n+1} → R^n be the projection that removes the (n + 1)-th coordinate, and set
$$Z(x) = \{ z \in R^{n+1} \mid D(\pi(x), \pi(z)) \le e \text{ and } z_{n+1} \in S(x_{n+1}) \}.$$
Now, if x ≠ y ∈ R^{n+1} are two codewords, then Z(x) and Z(y) are disjoint. Indeed, if z ∈ Z(x) ∩ Z(y), then S(x_{n+1}) = S(y_{n+1}) as they cannot be disjoint. Hence, D(x_{n+1}, y_{n+1}) ≤ 1. Furthermore, both D(π(x), π(z)) and D(π(y), π(z)) are less than or equal to e, implying that D(π(x), π(y)) ≤ 2e. It follows that D(x, y) ≤ 2e + 1, which is a contradiction. Since each Z(x) has cardinality q · |B_{e,D}(0)|, the bound follows.
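A minimal sketch of Lemma 2 in code, assuming the ball-volume formula (1) as reconstructed above; the cross-check against brute-force enumeration over Z_8² (whose value 55 reappears in Example 4 below) is our own illustration.

```python
from math import comb, gcd
from itertools import product

def overweight(x: int, m: int) -> int:
    """Overweight on Z_m: 0 at 0, 1 on units, 2 on non-zero non-units."""
    x %= m
    if x == 0:
        return 0
    return 1 if gcd(x, m) == 1 else 2

def ball_volume(n: int, e: int, u: int, v: int) -> int:
    """|B_{e,D}(0)| in R^n via Equation (1); u units, v non-zero non-units."""
    total = 0
    for t in range(e + 1):
        for ell in range(t // 2 + 1):
            k = t - 2 * ell
            if k <= n and ell <= n - k:
                total += comb(n, k) * comb(n - k, ell) * u**k * v**ell
    return total

# Cross-check the formula against brute force over Z_8^2.
m, n, e = 8, 2, 3
u = sum(1 for x in range(1, m) if gcd(x, m) == 1)   # 4 units
v = m - 1 - u                                        # 3 non-zero non-units
brute = sum(1 for w in product(range(m), repeat=n)
            if sum(overweight(x, m) for x in w) <= e)
assert brute == ball_volume(n, e, u, v) == 55
print("sphere-packing cap for d = 2e + 1 = 7:", m**n // ball_volume(n, e, u, v))
```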
To find non-trivial examples of perfect codes is as notoriously hard as it is over finite fields in the Hamming metric. Clearly, in the case R = F_q, there are non-trivial perfect codes, as the overweight coincides with the Hamming weight. Examples of such codes can be found in [5] (Section IV). Furthermore, in the case R = Z_{p^k}, linear 1-perfect codes are classified in terms of their parity-check matrix in [6] (Theorem IV.1).

A Gilbert-Varshamov Bound

With arguments similar to those for the sphere-packing bound, we can also obtain a lower bound on the maximal size of a code with a fixed minimum distance.

Proposition 4 (Gilbert-Varshamov bound). Let R be a finite ring, n a positive integer and d ∈ {0, …, 2n}. Then, there exists a code C ⊆ R^n of minimum overweight distance at least d satisfying
$$|C| \ge \frac{|R|^n}{|B_{d-1,\,D}(0)|},$$
where the volume is given in Equation (1) for e = d − 1.

Proof. Assume C ⊆ R^n of minimum overweight distance at least d is a largest code of length n and minimum distance d. Then, the set of balls B_{d−1,D}(x) centered at the codewords x ∈ C must already cover the space R^n. Indeed, if this were not the case, one would find an element y ∈ R^n that is not contained in the ball of radius d − 1 around any element of C. This word y would have distance at least d to each of the words of C, and thus C ∪ {y} would be a code of properly larger size with distance at least d, a contradiction to the choice of C. From the covering argument, we then see that
$$|C| \cdot |B_{d-1,\,D}(0)| \ge |R|^n.$$

Let us consider the special case where R is a finite chain ring. Since the overweight is an additive weight, and the conditions of [14] are easily verified, we can use [14] (Theorem 22) to obtain that random linear codes over R achieve the (asymptotic) Gilbert-Varshamov bound with high probability.

Example 4. As an easy example, we can consider R^n = Z_8². The maximal minimum overweight distance is given by d = 2n = 4. The Gilbert-Varshamov bound states for this example that there exists a code C with |C| > 1, as |B_{3,D}(0)| = 55 < 64. For example, the linear code C = ⟨(2, 2)⟩ has four elements.
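The numbers in Example 4 can be checked directly; the following sketch (our own, with invented helper names) enumerates the code generated by (2, 2) in Z_8² and confirms that it meets the Gilbert-Varshamov guarantee with minimum overweight distance 4.

```python
from math import gcd

def overweight_vec(w, m):
    """Additive overweight of a vector over Z_m."""
    tot = 0
    for x in w:
        x %= m
        if x:
            tot += 1 if gcd(x, m) == 1 else 2
    return tot

m, n = 8, 2
# Example 4: the linear code generated by (2, 2) in Z_8^2.
code = {tuple((a * g) % m for g in (2, 2)) for a in range(m)}
assert len(code) == 4
dists = [overweight_vec([x - y for x, y in zip(c1, c2)], m)
         for c1 in code for c2 in code if c1 != c2]
assert min(dists) == 2 * n == 4
# Gilbert-Varshamov: |C| >= |R|^n / |B_{3,D}(0)| = 64 / 55 > 1 is indeed met.
print("code size:", len(code), "minimum overweight distance:", min(dists))
```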
A Plotkin Bound

Over a local ring, we can use methods similar to the ones used for the classical Plotkin bound to obtain an analogue of the Plotkin bound for (not necessarily linear) codes equipped with the overweight. For the rest of this section, R is a finite local ring with maximal ideal J; the notation stems from the Jacobson radical of the ring R. Note that the factor ring R/J is a finite field, whose cardinality will be denoted by q. Similarly to the Hamming case, for a subset A ⊆ R, we will denote by
$$W(A) = \frac{1}{|A|} \sum_{a \in A} W(a)$$
the average weight of the subset A.

Lemma 3. Let I ⊆ R be a left or right ideal. Then,
$$W(I) = \begin{cases} \dfrac{|R^\times| + 2\,(|R| - 1 - |R^\times|)}{|R|} & \text{if } I = R, \\[1mm] 2\,\dfrac{|I| - 1}{|I|} & \text{if } \{0\} \subsetneq I \subsetneq R, \\[1mm] 0 & \text{if } I = \{0\}. \end{cases}$$

Proof. Note that the last case is trivial as I = {0}. If {0} ⊊ I ⊊ R, then all non-zero elements of I have weight 2, so this case follows as well; the case I = R is immediate from the definition of W.

Corollary 3. Let R be a local ring with maximal ideal J and assume that |J| ≥ 2. Then, we have that W(J) ≥ W(I) for all left or right ideals I ⊆ R.

Proof. We immediately see that W(J) ≥ W(I) for all I ⊆ J. Now, consider the case I = R. Writing u = |R^×| and v = |R| − 1 − u = |J| − 1, we have that
$$W(R) = \frac{u + 2v}{|R|} \le \frac{u \cdot W(J) + 2v}{|R|} = \frac{(u + |J|)\, W(J)}{|R|} = W(J),$$
where we used that W(J) = 2(|J| − 1)/|J| ≥ 1, that 2v = |J|\, W(J), and that u + |J| = |R|.

To ease the notation, let us denote by η the quantity
$$\eta := W(J) = 2\left(1 - \frac{1}{|J|}\right).$$
In what follows, we provide a Plotkin bound for the overweight over a local ring R with maximal ideal J. The case |J| = 1 is already well studied, since, in this case, R is a field and D is simply the Hamming distance. Hence, we will assume that |J| ≥ 2. We start with a lemma for the Hamming weight. Its proof follows the idea of the classical Plotkin bound, which can be found in [15], and for the homogeneous weight in [10].

Lemma 4. Let I ⊆ R be a subset and P be a probability distribution on I. Then, we have that
$$\sum_{x \in I} \sum_{y \in I} P(x) P(y)\, w_H(x - y) \le 1 - \frac{1}{|I|}.$$

Proof. Since w_H(x − y) = 1 whenever x ≠ y, and 0 otherwise, the left-hand side equals
$$\sum_{\substack{x, y \in I \\ x \neq y}} P(x) P(y) = 1 - \sum_{x \in I} P(x)^2.$$
If we apply the Cauchy-Schwarz inequality to the latter sum, we obtain that ∑_{x∈I} P(x)² ≥ 1/|I|, from which we can conclude.

We are now ready for the most important step of the Plotkin bound. As before, R is a local ring with non-zero maximal ideal J and η = W(J).

Proposition 5. Let P be a probability distribution on R. Then, it holds that
$$\sum_{x \in R} \sum_{y \in R} P(x) P(y)\, W(x - y) \le \eta.$$

Proof. Let q = |R/J| and pick x_1, …, x_q such that x_i + J ≠ x_j + J if i ≠ j. Then, it follows that the cosets x̄_i := x_i + J form a partition of R. For all k ∈ {1, …, q}, we denote by
$$P_k := \sum_{x \in \bar{x}_k} P(x).$$
It follows that ∑_{k=1}^{q} P_k = 1. By rewriting the initial sum as a sum over all cosets, and noting that x − y is a unit whenever x and y lie in distinct cosets, while W(x − y) = 2\, w_H(x − y) whenever x and y lie in the same coset, we obtain that
$$\sum_{x, y \in R} P(x) P(y)\, W(x - y) = \sum_{k=1}^{q} \sum_{x, y \in \bar{x}_k} P(x) P(y)\, 2\, w_H(x - y) + \sum_{k \neq k'} P_k P_{k'}.$$
If P_k ≠ 0, then P̂(x) := P(x)/P_k defines a probability distribution on x̄_k. In this case, we apply Lemma 4 to obtain that
$$\sum_{x, y \in \bar{x}_k} P(x) P(y)\, 2\, w_H(x - y) \le 2\, P_k^2 \left(1 - \frac{1}{|J|}\right) = \eta\, P_k^2.$$
Note that the same inequality also trivially holds if P_k = 0. Applying this and using that ∑_{x ∈ x̄_k} P(x) = P_k, we obtain that
$$\sum_{x, y \in R} P(x) P(y)\, W(x - y) \le \eta \sum_{k=1}^{q} P_k^2 + \Big(1 - \sum_{k=1}^{q} P_k^2\Big) \le \eta,$$
where we used that η = 2(1 − 1/|J|) ≥ 1, since |J| ≥ 2, in the last inequality.

To complete the Plotkin bound for the overweight, we now follow the steps in [10]. Using Proposition 5, we obtain the following result.

Proposition 6. Let C ⊆ R^n be a (not necessarily linear) code of minimum overweight distance d. Then,
$$|C|\,(|C| - 1)\, d \le \sum_{x \in C} \sum_{y \in C} D(x, y) \le |C|^2\, n\, \eta.$$

Proof. The first inequality follows since the distance between all distinct pairs of C is at least d. For the second inequality, let p_i : R^n → R be the projection onto the i-th coordinate. Note that
$$P_i(r) := \frac{|\{ x \in C \mid p_i(x) = r \}|}{|C|}$$
defines a probability distribution on R for all i ∈ {1, …, n}. Using Proposition 5, we obtain that
$$\sum_{x \in C} \sum_{y \in C} D(x, y) = \sum_{i=1}^{n} \sum_{x, y \in C} W(x_i - y_i) = |C|^2 \sum_{i=1}^{n} \sum_{r, s \in R} P_i(r) P_i(s)\, W(r - s) \le |C|^2\, n\, \eta.$$
Thus, we obtain the claim.

From this inequality, we obtain a Plotkin bound for the overweight distance. As before, R is a local ring with non-zero maximal ideal J and η = 2(1 − 1/|J|).

Theorem 3 (Plotkin bound for the overweight distance). Let C ⊆ R^n be a (not necessarily linear) code of minimum overweight distance d = D(C) and assume that d > nη. Then,
$$|C| \le \frac{d}{d - n\eta}.$$

Proof. We divide both sides of the inequality in Proposition 6 by |C| to obtain that (|C| − 1)\,d ≤ |C|\,nη, that is, |C|(d − nη) ≤ d. The result then follows from the assumption that d − nη > 0.

By rearranging the same inequality, we also obtain the following version of the Plotkin bound, which does not require the assumption that d > nη.

Corollary 4. Let C ⊆ R^n be a (not necessarily linear) code with |C| ≥ 2 and minimum overweight distance d. Then,
$$d \le \frac{|C|}{|C| - 1}\; n\eta.$$

Proof. We obtain this by dividing both sides of the inequality in Proposition 6 by |C|(|C| − 1), which is non-zero by assumption.

Remark 2. Note that W is a homogeneous weight on Z_4, and thus our bound coincides with the bound from [10] for the homogeneous weight on Z_4.

Example 5. If we consider codes over Z_9 and fix |C| = 9, n = 3, we obtain that d ≤ 9/2 and hence, by integrality, that d ≤ 4. For instance, the linear code generated by (1, 1, 3) attains this bound.

A Johnson Bound for the Homogeneous Weight

Another interesting bound is the Johnson bound, due to its relation with list-decodability. In the classical form, the Johnson bound gives an upper bound on the largest size A_q(n, d, w) of a constant-weight-w code over F_q of length n and minimum Hamming distance d. However, for the list-decodability of a code, we are interested in codes having codewords of weight at most w. In fact, if the largest size of such a code, A_q(n, d, w), is small, e.g., at most a constant L, then every ball of radius w contains at most L codewords and hence one can decode a list of a size at most L. In more detail, the Johnson bound for list-decodability in the Hamming metric states that, if
$$\frac{w}{n} \le \left(1 - \frac{1}{q}\right)\left(1 - \sqrt{1 - \frac{q}{q-1}\,\delta}\right),$$
where δ denotes the relative minimum distance, then A_q(n, d, w) ≤ n(d − 1). This famous bound is still missing for the well-studied homogeneous weight, which is, like the overweight, a generalization of the Lee weight over Z_4.
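Before turning to the homogeneous weight, note that Example 5 above is likewise easy to verify computationally; in the sketch below, the generator (1, 1, 3) is our own choice of a code attaining the bound, and η is computed from J = 3Z_9.

```python
from math import gcd

def overweight_vec(w, m):
    """Additive overweight of a vector over Z_m."""
    tot = 0
    for x in w:
        x %= m
        if x:
            tot += 1 if gcd(x, m) == 1 else 2
    return tot

m, n = 9, 3
code = {tuple((a * g) % m for g in (1, 1, 3)) for a in range(m)}
assert len(code) == 9
d = min(overweight_vec([x - y for x, y in zip(c1, c2)], m)
        for c1 in code for c2 in code if c1 != c2)
eta = 2 * (1 - 1 / 3)                      # |J| = |3Z_9| = 3
plotkin = n * eta * len(code) / (len(code) - 1)   # Corollary 4: d <= 9/2
assert d == 4 and d <= plotkin
print("d =", d, "Plotkin cap =", plotkin)
```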
In this section, we prove a Johnson bound for the homogeneous weight from Definition 6, denoted by wt, and we let γ be its average value on R. By abuse of notation, we denote by wt also the extension of wt to R^n, that is, wt(x) = ∑_{i=1}^{n} wt(x_i). Note that wt does not necessarily satisfy the triangle inequality. In [7] (Theorem 2), it is shown that the homogeneous weight on Z_m satisfies the triangle inequality if and only if m is not divisible by 6. We define the ball of radius r with respect to a homogeneous weight wt to be the set of all elements having distance less than or equal to r.

Definition 10. Let y ∈ R^n and r ∈ ℝ_{≥0}. The ball B_{r,wt}(y) of radius r centered at y is defined as
$$B_{r,\mathrm{wt}}(y) = \{ x \in R^n \mid \mathrm{wt}(x - y) \le r \}.$$

Our aim is to provide a Johnson bound for the homogeneous weight over Frobenius rings. Thus, we begin by defining list-decodability.

Definition 11. Let R be a finite ring. Given ρ ∈ ℝ_{≥0}, a code C ⊆ R^n is called (ρ, L) list-decodable (with respect to wt) if, for every y ∈ R^n, it holds that |B_{ρn,wt}(y) ∩ C| ≤ L.

Over Frobenius rings, the following result holds, which will play an important role in the proof of the Johnson bound.

Proposition 7 ([10] (Corollary 3.3)). Let R be a Frobenius ring, C ⊆ R^n a (not necessarily linear) code of minimum distance d and ω = max{wt(c) | c ∈ C}. If ω ≤ γn, then
$$\sum_{x \in C} \sum_{y \in C} \mathrm{wt}(x - y) \le |C|^2\, \omega \left(2 - \frac{\omega}{\gamma n}\right).$$

With this, we obtain an analogue of the Johnson bound for the homogeneous weight.

Theorem 4. Let R be a Frobenius ring and C ⊆ R^n be a (not necessarily linear) code of minimum distance d. Assume that ρ ≤ γ. Then, it holds that C is (ρ, dγn) list-decodable if one of the following conditions is satisfied:
(i) We have that γn(d − γn) ≥ 1.
(ii) We have that ρn ≤ γn − √(γn(γn − d) + 1).

Proof. Assume that e ≤ ρn and let y ∈ R^n. We have to show that, under the given conditions, |B_{e,wt}(y) ∩ C| ≤ dγn. Note first that we may assume that y = 0; otherwise, simply consider the translate C − y, which has the same minimum distance. Assume that x_1, …, x_N are in B_{e,wt}(0) ∩ C. We have that wt(x_i − x_j) ≥ d for i ≠ j; thus, using Proposition 7 and wt(x − y) = wt(y − x), we obtain that
$$N(N - 1)\, d \le \sum_{i \neq j} \mathrm{wt}(x_i - x_j) \le N^2\, e \left(2 - \frac{e}{\gamma n}\right).$$
Hence, it follows that N(dγn − 2eγn + e²) ≤ dγn. Under either of the two conditions, the factor dγn − 2eγn + e² is at least 1, and it follows that N ≤ dγn.

Remark 3. Note that the second condition already forces ρ ≤ γ.

Open Problems

We conclude this paper with some interesting open questions for the newly defined overweight that we have encountered.

Problem 1. Classify the codes that attain the bounds derived in this paper.

Problem 2. Give a Griesmer bound, an Elias-Bassalygo bound and a Johnson bound for the overweight.

Proving an analogue of a Griesmer, Elias-Bassalygo or Johnson bound poses a difficult challenge over rings, and in particular for the overweight, due to the necessity of an effective upper bound on the sum of the distances.

Conflicts of Interest: The authors declare no conflict of interest.
The Scarce Drugs Allocation Indicators in Iran: A Fuzzy Delphi Method Based Consensus

Almost all countries are affected by a variety of drug-supply problems and spend a considerable amount of time and resources to address shortages. The current study aims to reach a consensus on scarce-drug allocation measures to improve the allocation process of scarce drugs in Iran through a population needs-based approach. To achieve this objective, two phases were conducted. Firstly, a set of population-based indicators of health needs was identified by reviewing the literature and was scrutinized by fifty academics and executives who were specialists in pharmaceutical resource allocation. In the second phase, a structured process, based on the Delphi technique requirements, was performed to finalize the indicators. The literature review step yielded about 20 indicators; based on the availability of data in Iran, 16 indicators were carried into the next step and formed the initial questionnaire. Based on the results of the first questionnaire, only 3 indicators were rejected and 13 indicators were advanced to the Delphi phase. Then, in the Delphi phase, consensus was built after three rounds. In addition to the burden of endemic, special, rare, and incurable diseases, traumatic diseases and the total population of each province were the main measures. Furthermore, total mortality rates and the number of pharmacies in each province were on the border; hence, the monitoring team made the decision about inclusion or exclusion of such indicators. Other measures were in the range of 'important' ones. To reach a more effective and efficient process of resource allocation, the paper suggests the use of a population needs-based approach in Iran's pharmaceutical sector. The scarce-drug allocation indicators extracted in this study can make a considerable contribution to preventing, controlling, and mitigating drug shortages.

Introduction

Drug shortage is an intricate and worldwide phenomenon that occurs in different parts of the world. This problem is an important subject for policy makers in each health-care system, because a shortage in drug supplies might lead to suspensions of treatments for patients and care interruptions (1,2). Under drug shortage circumstances, all related health-care stakeholders can be affected; hence, cooperation is highly required to reduce drug shortages and to manage the condition (3). During such shortage periods, the issue of reaching a higher value without increasing the amount of resources is raised (4). On the one hand, equity in health-care resource allocation is important and is considered one of the most influential subjects for different related groups such as politicians and researchers (5-8). On the other hand, more studies are required to uncover the association of this issue with inequalities in socioeconomic subjects (9). In Iran, resource allocation systems tend to allocate resources on the basis of past levels of service utilization. Under these systems, there are no specific indicators for optimal allocation, and any existing inequities in access to care will continue (10). In case of shortage, the Iran Food and Drug Administration (IFDA) has implemented a distribution policy at the provincial level; the indicators considered at this time are the population of each province, the number of pharmacies, and the number of practitioners in each province.
But these criteria fail to thoroughly meet patients' demands around the country (10). In this regard, a major challenge for health-care policy makers in Iran is to identify and implement the main indicators of needs-based resource allocation to Iranian provinces that are consistent with the objectives of Iranian health-care policy and the national drug policy (NDP), which emphasizes equity in access (11). Studies in Iran have mainly focused on the geographical distribution of resources at different population levels without taking population requirements into consideration (5,12). Utilization of needs-based allocation of health-care resources has been recommended to policymakers by several studies in Iran (5,10,12,13). To the best of our knowledge, previous attempts at resource allocation in the health-care system have paid no detailed attention to scarce drugs. In addition, needs-based resource allocation is a missing link in this area. Since the adoption of the most effective indicators of needs for scarce-drug allocation plays a pivotal role in preventing and controlling drug shortage, identifying valid indicators and reaching consensus on them through a fuzzy Delphi study seems most appropriate in accordance with the objectives of this study.

Theoretical Framework

In this section, a brief review of the theoretical framework, including the needs-based approach and the fuzzy Delphi method, is presented.

Needs-based Approach

Health-care systems apply various methods of allocating resources to sub-geographic areas and groups. The four commonly used methods are: 1) political patronage, 2) historical allocations, 3) bids by local governments, and 4) needs-based resource allocation formulae (14). The most reliable resource allocation framework related to the requirements of the population, known as the needs-based approach, allocates health-care resources according to the characteristics of the population so that the use of inadequate health-care supplies can be optimized (15). In this approach, the amount of each population's allocation is determined independently from the existing levels of utilization by that population; rather, it is related to the characteristics of each population in the context of the corresponding characteristics of the entire population (e.g. provincial versus national) (16). Needs-based resource allocation has globally been an issue (17). A number of countries in Europe (Germany, Switzerland, UK, Sweden, and Spain) have adopted scientific needs-based allocation models (15,18). According to Gugushvilli (19), the United Kingdom's National Health Service (NHS) was a pioneer in the field of needs-based resource allocation in 1977. While the above-mentioned countries exploited this approach to address the issue of inequity, a proper and applicable set of procedures and measures is highly needed to guide resource allocation in most low- and middle-income countries (15). Health-care needs can be measured directly or indirectly: direct measures quantify the precise health-care services required to improve the health status of an individual or a group, based on the assessment of health-care professionals. Alternatively, the need for health care can be estimated through indirect measures such as mortality, morbidity, socio-demographic characteristics, and socioeconomic characteristics or measures of deprivation (16,20).
According to McIntyre and Anselmi (2012), the most common indicators of needs are: demographic composition, socioeconomic status, ill-health levels, and the size of the population (10,22). Clearly, in the absence of any gold-standard measure of population needs, it is important to choose valid, reliable, and responsive indicators of (or proxies for) health-care needs (10,22).

Fuzzy Delphi Method

In the real world, decision-makers encounter complex and, sometimes, multi-dimensional problems. Therefore, in many cases, problems related to decision making are not clearly defined; quite a few problems in the real world are not random but are intrinsically fuzzy, and consequently the application of probability in numerous cases has not been satisfactory enough (23). Alternatively, the use of fuzzy theory in problems related to decision making has yielded good practical results (24). The main characteristic of fuzzy theory in this area is that it provides a more flexible framework (25). Accordingly, even when accurate information is unavailable or expensive, or when subjective inputs are required for model evaluation, the application of the fuzzy Delphi method is valid (26). As a flexible method, Delphi is built on the following basic concepts: "structured questioning, iteration, controlled feedback, and anonymity of responses" (27). Most Delphi users employ it for policy formulation or decision making (26); Delphi is also considered a tool for analyzing policy issues (26). Practically, the consensus-oriented Delphi has been utilized in numerous fields such as analyses of resource allocation, technological forecasting, strategic planning, policy formulation, and technology assessment (26). Delphi is currently applied in health-care research, a field which has a considerable number of Delphi practitioners (17,27). This technique can also be counted among the methods applied for indicator development (28).

Methods

In the process of implementing the study, as shown in Figure 1, a committee of three experts and advisers in drug policy, analysis and planning, and the Delphi technique formed the monitoring team, and they continuously provided assistance in all parts of the study. To achieve the study objective, two phases were conducted. In the first phase, a set of population-based measures of health needs was identified based on a wide literature review. Then, the measures obtained from the previous step formed the initial questionnaire and were scrutinized by fifty academics and executives who were specialists in pharmaceutical resource allocation and distribution. Of over 60 potential respondents at the national level, 50 people (e.g., vice-chancellors for food and drug in universities of medical sciences and other experts in the field), who were directly impacted by the consequences of drug shortage, screened the indicators. In order to evaluate the results of the first questionnaire, the Content Validity Ratio (CVR) was applied. Content validity is mandatory for topic-related areas (29).
Figure 1. The graphical presentation of the study implementation process.

Lawshe (1975) developed a proper method for measuring content validity (30); he suggested that each Subject Matter Expert (SME) rater on the judging panel should respond to the question "Is the measure (a) essential, (b) useful, but not essential, or (c) not necessary to the construct performance?" If a higher number of panelists agree that a particular item or measure is essential, a higher level of content validity exists. Lawshe provided a table of critical values for the CVR, which is known as Schipper's table; Wilson, Pan, and Schumsky (31) later modified the table. The final decision to accept or reject items is based on this table. Consequently, in this phase, the population-based measures were sifted and clustered in accordance with the national drug policy and their applicability in Iran.

The second phase was based on the fuzzy Delphi technique. Depending on the composition of the members and the homogeneity of the target group, Delphi can be used with six to fifty participants (32,33); in the fuzzy Delphi phase, it was anticipated that nearly 15 participants would be included in the study, but the phase was finally conducted with 9 individuals owing to non-participation. The group of respondents represented some of the highest authorities in the field. It comprised the cognizant IFDA deputy, prominent politicians, and researchers in the resource allocation field. It must be noted that they participated voluntarily and freely. Even though several forms of fuzzy numbers are available, the trapezoidal and triangular forms are most frequently used to represent fuzzy numbers. In this study, trapezoidal fuzzy numbers were applied. In fuzzy multiple-criteria decision-making problems, various trapezoidal fuzzy numbers can be ranked by means of the portrayal of their curves; where an order cannot be established by this method, other approaches can be applied instead. In the current study, the ranking procedure proposed by Cheng was employed (23).
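Lawshe's CVR is commonly computed as CVR = (n_e − N/2)/(N/2), where n_e is the number of panelists rating an item "essential" and N is the panel size. Assuming this standard formula (the paper does not spell it out), the cut-off of 0.24 reported in the Results corresponds to 31 of 50 raters, as the hedged sketch below illustrates.

```python
def content_validity_ratio(n_essential: int, n_panelists: int) -> float:
    """Lawshe's CVR = (n_e - N/2) / (N/2)."""
    half = n_panelists / 2
    return (n_essential - half) / half

# With 50 raters, an item kept at the 0.24 threshold needs at least
# 31 'essential' votes: CVR(31, 50) = (31 - 25) / 25 = 0.24.
print(content_validity_ratio(31, 50))
```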
Results

Several mechanisms, which are classified as supply-side and demand-side mechanisms, can be used for the allocation of scarce health-care resources. The indicators on each side are different, and different countries have used either a single indicator or composite indicators to weight district populations. In many countries where the concept of needs has been incorporated in resource allocation, composite indicators of socioeconomic status such as deprivation and asset indices have been utilized. In developing countries, the application of demographic and socioeconomic indicators such as size of population, age, sex, and health indices is common. The literature review yielded about 20 indicators that could potentially have been applied in Iran, but owing to limited information and the inaccessibility of reliable data in Iran, employing all of the above-mentioned indicators was impractical. Therefore, 16 indicators were transferred to the next step and formed the initial questionnaire. By evaluating the critical values of the CVR, 3 indicators were rejected, as their CVR scores were below 0.24; furthermore, some new indicators were introduced in this study. According to the comments of the participants in this phase, some measures were changed as follows: population and mortality rates adjusted by age and gender were changed to total population and total mortality; the number of hospital beds was changed to total inpatient bed occupancy rate; the pharmaceutical quotas of the past were omitted; and the burden of traumatic diseases was added. Then, based on the results of this phase, the Delphi questionnaire was designed.

In the following, based on the Delphi technique requirements, a structured process was performed to reach a consensus in order to improve the allocation of scarce drugs in Iran. The experts' opinions were reviewed using a series of 'rounds', which were held until group consensus was reached based on Cheng's proposed method (23). The following summarizes the computational process. At first, linguistic variables were used by the decision-makers to evaluate the significance of the measures for improving the allocation of scarce drugs in drug shortage crises in Iran. Table 1 indicates the transfer of the linguistic variables to trapezoidal fuzzy numbers. The monitoring team provided a significant amount of time for participants to carefully fill out the questionnaires. The results of the first questionnaire (Round 1), including both the number of selected linguistic variables and the fuzzy average of the experts' answers, are provided in Table 2. This calls for calculating the mean of all trapezoidal fuzzy numbers by formulae 1 and 2:
$$\tilde{A}^{(i)} = \big(a_1^{(i)},\, a_2^{(i)},\, a_3^{(i)},\, a_4^{(i)}\big), \qquad (1)$$
$$\tilde{A}_m = (a_{m1},\, a_{m2},\, a_{m3},\, a_{m4}), \qquad a_{mj} = \frac{1}{K} \sum_{i=1}^{K} a_j^{(i)}, \quad j = 1, \dots, 4, \qquad (2)$$
where Ã^{(i)} is the trapezoidal fuzzy number of expert i, K is the number of experts, a_1, a_2, a_3, and a_4 are the membership parameters of Ã, and Ã_m is the average (mean) of all Ã^{(i)}. Then, based on the results of the first round, the differences between the answer of each expert and the average of the experts' answers were computed by formula 3:
$$d\big(\tilde{A}^{(i)}, \tilde{A}_m\big) = \frac{1}{n} \sum_{j=1}^{n} \big| a_{mj} - a_j^{(i)} \big|, \qquad n = 4. \qquad (3)$$
Accordingly, preparation of a summary of the first-round results began shortly after the first round was completed.
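A hedged Python sketch of the computational process just described, using the three linguistic anchors that appear in Table 1; the panel answers are invented for illustration, and the threshold d_i ≤ 0.2 is the one used in the round comparison below.

```python
from statistics import mean

# Linguistic scale from Table 1 (trapezoidal fuzzy numbers).
SCALE = {
    "very unimportant": (0, 0, 0.5, 2.5),
    "important":        (5, 7, 8, 10),
    "very important":   (7.5, 9.5, 10, 10),
}

def fuzzy_mean(answers):
    """Component-wise mean of trapezoidal numbers (formulae 1 and 2)."""
    return tuple(mean(a[j] for a in answers) for j in range(4))

def fuzzy_distance(a, b):
    """Average absolute gap between two trapezoidal numbers (formula 3 style)."""
    return sum(abs(x - y) for x, y in zip(a, b)) / 4

# Invented answers from a 9-member panel in two successive rounds.
round1 = [SCALE["very important"]] * 6 + [SCALE["important"]] * 3
round2 = [SCALE["very important"]] * 7 + [SCALE["important"]] * 2
m1, m2 = fuzzy_mean(round1), fuzzy_mean(round2)
print("consensus reached:", fuzzy_distance(m1, m2) <= 0.2)
```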
In the second round, the second questionnaires, along with an abstract of the results of the first round, were given to the participants. They were allowed to revise their personal opinions according to the opinions of others. Again, the number of selected linguistic variables and the fuzzy average of the experts' answers were computed by formulae 1 and 2. The results of the second questionnaire (Round 2) are shown in Table 2. The Delphi method rests upon the consensus achieved by the panelists. In order to investigate the achieved consensus, the difference between the means of the two implemented rounds was compared, and accordingly the decision was made whether to continue or stop the Delphi process. The difference between the means of two rounds was calculated by formula 4 and is shown in Table 3:
$$d_i = \frac{1}{4} \sum_{j=1}^{4} \big| a_{mj}^{(r)} - a_{mj}^{(r-1)} \big|. \qquad (4)$$
If the difference was lower than a specified threshold (d_i ≤ 0.2), consensus was reached and the process was stopped. As Table 3 displays, after the second round some measures failed to reach the specified threshold (d_i ≤ 0.2). Therefore, after identifying the differences between opinions by formula 4, the third round was implemented, as in the previous rounds. The results are shown in Table 4. Next, in order to check the achieved consensus, the difference between the means of round 2 and round 3 was examined; the results are provided in Table 3. Since all the requirements for reaching consensus were met, the process was stopped. (Examples of special, rare, and incurable diseases: hemophilia, thalassemia, diseases requiring dialysis, cancers, multiple sclerosis, metabolic disorders, and diabetes.)
Table 1. Linguistic variables and corresponding trapezoidal fuzzy numbers: Very unimportant (0, 0, 0.5, 2.5); Important (5, 7, 8, 10); Very important (7.5, 9.5, 10, 10).

Measures whose third-round fuzzy average fell below 6 had to be eliminated. Thus, the deprivation index of the province and the population growth rate were eliminated. In addition to the burden of endemic, special, rare and incurable diseases, traumatic diseases and the total population of each province were the main measures, because they were in the range of 'very important'. Furthermore, the population of non-resident patients, the total inpatient bed occupancy rate, the number of prescriptions, and the number of GPs and specialists in each province were in the range of 'important' measures. Other measures, including total mortality rates and the number of pharmacies, were on the borderline. Since mortality rates are hidden in burden-of-disease measures, and the number of prescriptions can act as a proxy for the number of pharmacies, the monitoring team decided not to include these measures in the potential allocation model. The number of prescriptions was used because it reflects final medicine demand better than the number of pharmacies; in other words, the number of prescriptions indicates real demand more accurately.

Discussion

Since drug shortages are frequent in health-care systems and stem from a wide variety of reasons, future drug shortages are unknown and unpredictable. Hence, numerous organizations and stakeholders have been endeavoring to manage current shortages and to improve evidence-based decision making for future shortages. The primary aims of this study were to define and determine the needs-based measures of scarce-drug allocation in the health-care system of Iran by utilizing the Delphi technique to improve the allocation of these valuable resources. The results of this study pave the way for policy makers to improve their decisions concerning resource allocation, and these population needs-based measures could be included in the allocation formula for scarce drugs as well. Among the 13 reviewed measures, eight, namely the burden of endemic, special, rare and incurable diseases, the burden of traumatic diseases, total population, the population of non-resident patients, total inpatient bed occupancy rate, the number of prescriptions, and the number of GPs and specialists, were approved.

Table 3. The difference between the means of two implemented rounds (Round 1 and 2; Round 2 and 3).

Our findings revealed that eight of the common measures/indicators of resource allocation enjoyed face validity for measuring needs in Iran and are also currently used in needs-based resource allocation formulae in other countries (15). In fact, the needs-based resource allocation indicators in different countries are conceptually similar, and these indicators, as discussed above (population, deprivation and health-needs indicators), play a significant role in the resource allocation of many countries.
In the southern African region, England, Namibia, and Spain, the size of the population has been utilized in health-care resource allocation (18). The burden of disease is a popular indicator in countries such as the southern African countries, the US, Scotland, Canada, and Wales (34). Although the level of deprivation of each province is another popular indicator in health-care scarce resource allocation (e.g. in Australia), it was not approved by our results; the reason is most probably related to concern about the accessibility of data. Other indicators are common components of composite health indices, and based on the results of the current study, their application is justifiable (15,21). In shortage periods, the IFDA conducts a controlled allocation/distribution program to improve equity and efficacy in Iran's health-care system (8), and, at certain times, the shortages of drug supplies in Iran amount to a type of health-care rationing. As in other countries, explicit health-care rationing in Iran is unpopular; however, implicit (and sometimes erratic) rationing can balance resources and maintain the system (35). It is proposed that the determinants of variation among the people of each province be identified for rational rationing, so that the limited resources can meet health-care needs (10). The final measures derived from this study, as the most important indicators of the population's needs, can play this role. The results of the current study provide the prerequisites for calculating the needs of each province; therefore, the above-mentioned goals are achievable. Briscombe, Sharma, and Saunders (2010) stated that resource allocation to provinces must be in line with the health needs of each province and that the resource allocation formula needs to take into account different population sizes, disease burden, and other factors; in Kenya, their suggested model was assessed for use in resource allocation (18). The findings of the current research support the model obtained in that study. In line with the research of Kaltenthaler et al. (2004), the current study extracted population needs-based resource allocation indicators by a systematic review; as a result, single and composite indicators were identified (22). Like other studies, the results of this study confirmed that the population size of each geographic area, the burden of disease, demographic composition, and socioeconomic status are the most common indicators of needs in resource allocation methods (21). Updated population needs-based indicators were collected by a number of national surveys (population census). For example, the population size of each province is a common indicator that is the basis for the calculation of many other indicators as well; any change in demographic policies, either directly or indirectly, can then lead to a noticeable change in the final model. In contrast, the mortality rate can bring about deviations in the final model, since a significant portion of pharmaceutical expenditure is spent on patients whose treatment does not culminate in death. Moreover, the geographical distribution of drug needs differs from the geographical distribution of death in different parts of a given country.
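To make the intended use of such indicators concrete, here is a deliberately simplified sketch of a proportional, needs-weighted split of a scarce stock across provinces; the composite scores, province names and function are invented for illustration and do not reproduce the study's model.

```python
def allocate(stock: float, scores: dict) -> dict:
    """Split a scarce-drug stock across provinces proportionally to
    composite needs scores (e.g., weighted sums of approved indicators)."""
    total = sum(scores.values())
    return {province: stock * s / total for province, s in scores.items()}

# Invented composite scores for three provinces.
print(allocate(10_000, {"Tehran": 8.1, "Fars": 5.2, "Yazd": 2.4}))
```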
Conclusion

Achieving high levels of efficiency and success in resource allocation procedures is attainable provided that the resource distribution is commensurate with population requirements. Given the specific context of the Iranian health-care system, it is necessary to design a needs-based resource allocation model that is compatible with the features of that system. Such a model can improve the equity of care and provide a practical guide for managers and policy makers to promote evidence-based decision making. It is suggested that the final measures of this study be divided into indicators related to access, efficacy, and equity, in line with the national drug policy (NDP). Thus, it is recommended that future studies, based on our findings, design a model for scarce-drug allocation for the Iranian health-care system. On the other hand, the IFDA strategy for drug shortage prevention and mitigation has the potential to decrease the burden of drug shortages on patients. Future revisions of the IFDA strategy should incorporate province-specific considerations; this study has separately addressed the differing needs of each province and, as a result, can inform a distribution that meets those needs.
A short proof of HRS-tilting

We give a short proof to the following tilting theorem by Happel, Reiten and Smal{\o} via an explicit construction: given two abelian categories $\mathcal{A}$ and $\mathcal{B}$ such that $\mathcal{B}$ is tilted from $\mathcal{A}$, then $\mathcal{A}$ and $\mathcal{B}$ are derived equivalent.

Introduction

Let $\mathcal{A}$ be an abelian category. Recall that a torsion pair on $\mathcal{A}$ means a pair $(\mathcal{T}, \mathcal{F})$ of full subcategories satisfying
(T1). $\mathrm{Hom}_{\mathcal{A}}(T, F) = 0$ for all $T \in \mathcal{T}$ and $F \in \mathcal{F}$; both subcategories $\mathcal{T}$ and $\mathcal{F}$ are closed under direct summands;
(T2). for each object $X \in \mathcal{A}$, there is a short exact sequence $0 \to T \to X \to F \to 0$ for some $T \in \mathcal{T}$ and $F \in \mathcal{F}$.
In a torsion pair $(\mathcal{T}, \mathcal{F})$, it follows that the subcategory $\mathcal{T}$ is closed under extensions and factor objects, while $\mathcal{F}$ is closed under extensions and subobjects. The torsion pair $(\mathcal{T}, \mathcal{F})$ is called a tilting torsion pair provided that each object in $\mathcal{A}$ embeds into an object in $\mathcal{T}$. Dually, the torsion pair $(\mathcal{T}, \mathcal{F})$ is called a cotilting torsion pair provided that each object in $\mathcal{A}$ is a factor object of an object in $\mathcal{F}$ ([3, Chapter I, Section 3]).

Denote a complex in $\mathcal{A}$ by $X^\bullet = (X^n, d_X^n)_{n \in \mathbb{Z}}$, where $d_X^n : X^n \to X^{n+1}$ is the differential satisfying $d_X^{n+1} \circ d_X^n = 0$; its shift $X^\bullet[1]$ is the complex given by $(X^\bullet[1])^n = X^{n+1}$ and $d_{X[1]}^n = -d_X^{n+1}$. Denote by $\mathbf{D}(\mathcal{A})$ the (unbounded) derived category of $\mathcal{A}$, and by $\mathbf{D}^+(\mathcal{A})$, $\mathbf{D}^-(\mathcal{A})$ and $\mathbf{D}^b(\mathcal{A})$ the full subcategories consisting of bounded-below, bounded-above and bounded complexes, respectively ([6,4]). We will always identify the abelian category $\mathcal{A}$ with the full subcategory of $\mathbf{D}(\mathcal{A})$ consisting of stalk complexes concentrated in degree zero ([4, p.40, Proposition 4.3]).

Let $(\mathcal{T}, \mathcal{F})$ be a torsion pair on $\mathcal{A}$. Following [3, Chapter I, Section 2], set $\mathcal{B}$ to be the full subcategory of $\mathbf{D}(\mathcal{A})$ consisting of complexes $X^\bullet$ satisfying $H^0(X^\bullet) \in \mathcal{T}$, $H^{-1}(X^\bullet) \in \mathcal{F}$ and $H^i(X^\bullet) = 0$ for $i \neq 0, -1$. Note that $\mathcal{T} \subseteq \mathcal{B}$ and $\mathcal{F}[1] \subseteq \mathcal{B}$. By [3, Chapter I, Proposition 2.1] the category $\mathcal{B}$ is the heart of a certain t-structure on $\mathbf{D}(\mathcal{A})$ and thus by [1] it is an abelian category (also see [2]); moreover, the pair $(\mathcal{F}[1], \mathcal{T})$ is a torsion pair on $\mathcal{B}$. One might expect that the resulting new abelian category $\mathcal{B}$ is derived equivalent to $\mathcal{A}$.

Theorem. Let $(\mathcal{T}, \mathcal{F})$ be a tilting torsion pair on the abelian category $\mathcal{A}$ and let $\mathcal{B}$ be the abelian category constructed above. Then $\mathcal{A}$ and $\mathcal{B}$ are derived equivalent; more precisely, there are triangle equivalences $\mathbf{D}^*(\mathcal{A}) \simeq \mathbf{D}^*(\mathcal{B})$ for $* \in \{\varnothing, +, -, b\}$.

In the case of the Theorem, the category $\mathcal{B}$ is said to be tilted from $\mathcal{A}$. Note that the original theorem only claims the equivalence between the bounded derived categories and requires the existence of enough projective or injective objects. The quoted version is the one improved by Noohi ([5, Theorem 7.6]). We will give a short proof of the theorem via an explicit construction of the equivalence functor.

The Proof of Theorem

Throughout, $(\mathcal{T}, \mathcal{F})$ is a tilting torsion pair on $\mathcal{A}$ and $\mathcal{B}$ is the resulting abelian category. We start with two easy observations.

Lemma 2.1. A complex with all terms in $\mathcal{T}$ is exact as a complex in $\mathcal{A}$ if and only if it is exact as a complex in $\mathcal{B}$.

Lemma 2.2. The pair $(\mathcal{F}[1], \mathcal{T})$ is a cotilting torsion pair on $\mathcal{B}$.

Proof of Theorem: Denote by $K(\mathcal{A})$ the homotopy category of complexes in $\mathcal{A}$, and by $K(\mathcal{T})$ (resp. $K_{ex}(\mathcal{A})$) its full subcategory consisting of complexes in $\mathcal{T}$ (resp. exact complexes). The inclusion $K(\mathcal{T}) \hookrightarrow K(\mathcal{A})$ induces the following exact functor
$$F : K(\mathcal{T})/(K(\mathcal{T}) \cap K_{ex}(\mathcal{A})) \longrightarrow \mathbf{D}(\mathcal{A}) = K(\mathcal{A})/K_{ex}(\mathcal{A}).$$
Since the torsion pair $(\mathcal{T}, \mathcal{F})$ is tilting and $\mathcal{T}$ is closed under factor objects, we have for each $X \in \mathcal{A}$ a short exact sequence $0 \to X \to T^0 \to T^1 \to 0$ with $T^i \in \mathcal{T}$. Noting further that $\mathcal{T}$ is closed under extensions, we infer that the conditions in [4, p.42, Lemma 4.6 2)] are fulfilled, and thus for each complex $X^\bullet \in K(\mathcal{A})$ there is a quasi-isomorphism $X^\bullet \to T^\bullet$ with $T^\bullet \in K(\mathcal{T})$. This implies that the functor $F$ is dense, and by [6, p.283] it is fully faithful; that is, the functor $F$ is an equivalence of triangulated categories.
By Lemma 2.2 we may apply the dual argument to obtain a natural equivalence
$$G : K(\mathcal{T})/(K(\mathcal{T}) \cap K_{ex}(\mathcal{B})) \longrightarrow \mathbf{D}(\mathcal{B});$$
note that by Lemma 2.1 the sources of $F$ and $G$ coincide. To see the other equivalences, let $* \in \{+, -, b\}$ and let $K^*(-)$ denote the corresponding homotopy categories. Note that in the argument above, for a complex $X^\bullet \in K^*(\mathcal{A})$ we may take a quasi-isomorphism $X^\bullet \to T^\bullet$ with $T^\bullet \in K^*(\mathcal{T})$ (for the case $* = +$, just consult the proof in [4, p.43, 1)]; for the case $* = -$, because $\mathcal{T}$ is closed under factor objects one may replace $T^\bullet$ by its good truncations; for the case $* = b$, consult the proof in [4, p.43, 1)] and note that, since $\mathcal{T}$ is closed under factor objects, the argument therein is completed within finitely many steps, and consequently the obtained complex $T^\bullet$ is bounded). Thus we construct the equivalences $F^*$ and $G^*$ as above. This proves the corresponding equivalences between the derived categories $\mathbf{D}^*(-)$.

Finally we will show that the obtained equivalence $F G^{-1}$ is compatible with the inclusion $\mathcal{B} \hookrightarrow \mathbf{D}(\mathcal{A})$. This is subtle. Given an object $B \in \mathcal{B}$, since the torsion pair $(\mathcal{F}[1], \mathcal{T})$ is cotilting, we have a short exact sequence in $\mathcal{B}$,
$$\eta : 0 \longrightarrow T^{-1} \overset{d}{\longrightarrow} T^0 \longrightarrow B \longrightarrow 0,$$
with $T^{-1}, T^0 \in \mathcal{T}$. Note that the complex $T^\bullet = (0 \to T^{-1} \overset{d}{\to} T^0 \to 0)$ is the mapping cone of $d$, and thus from the triangle
$$\xi : T^{-1} \overset{d}{\longrightarrow} T^0 \longrightarrow T^\bullet \longrightarrow T^{-1}[1]$$
we obtain that $T^\bullet$ is isomorphic to $B$ ([4, p.23, Proposition 1.1 c)]); in particular $T^\bullet \in \mathcal{B}$. Note the following natural triangle
$$H^{-1}(T^\bullet)[1] \longrightarrow T^\bullet \longrightarrow H^0(T^\bullet) \longrightarrow H^{-1}(T^\bullet)[2],$$
and thus a short exact sequence
$$\gamma : 0 \longrightarrow H^{-1}(T^\bullet)[1] \longrightarrow T^\bullet \longrightarrow H^0(T^\bullet) \longrightarrow 0$$
in $\mathcal{B}$. Comparing the short exact sequences $\eta$ and $\gamma$, we obtain a unique isomorphism $\theta_B : B \simeq T^\bullet$ in $\mathcal{B}$. We claim that $\theta$ is natural in $B$; then we obtain a natural isomorphism between the inclusion functor $\mathcal{B} \hookrightarrow \mathbf{D}(\mathcal{A})$ and the composite
$$\mathcal{B} \hookrightarrow \mathbf{D}(\mathcal{B}) \overset{G^{-1}}{\longrightarrow} K(\mathcal{T})/(K(\mathcal{T}) \cap K_{ex}) \overset{F}{\longrightarrow} \mathbf{D}(\mathcal{A}).$$
In fact, given a morphism $f : B \to B'$ in $\mathcal{B}$, choose an exact sequence $\eta' : 0 \to T'^{-1} \to T'^0 \to B' \to 0$ with $T'^{-1}, T'^0 \in \mathcal{T}$. Form the complex $T'^\bullet$ and then obtain the short exact sequence $\gamma'$ and the isomorphism $\theta_{B'}$ as above. Identify $G(T^\bullet)$ with $B$ and $G(T'^\bullet)$ with $B'$. Since the functor $G$ is fully faithful, we have a chain map $\phi^\bullet : T^\bullet \to T'^\bullet$ such that $G(\phi^\bullet) = f$. This implies a commutative diagram in $\mathcal{B}$ with exact rows $\gamma$ and $\gamma'$. From this it is direct to see that $\theta_{B'} \circ f = \phi^\bullet \circ \theta_B$ in $\mathcal{B}$ and thus in $\mathbf{D}(\mathcal{A})$. This finishes the proof.
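To keep the construction in view, the following LaTeX fragment (our own summary, not taken from the paper) records the roof of equivalences through which $F G^{-1}$ is defined, together with the compatibility just proved.

```latex
% Our own summary of the construction above (assumes amsmath):
% both equivalences are induced by the inclusions of K(T) into K(A), K(B).
\[
\mathbf{D}(\mathcal{A})
  \xleftarrow{\;F\;}
  \frac{K(\mathcal{T})}{K(\mathcal{T})\cap K_{\mathrm{ex}}}
  \xrightarrow{\;G\;}
  \mathbf{D}(\mathcal{B}),
\qquad
F\,G^{-1}\big|_{\mathcal{B}}
  \;\cong\;
  \big(\mathcal{B}\hookrightarrow \mathbf{D}(\mathcal{A})\big).
\]
```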
First Human Use of Shockwave L6 Intravascular Lithotripsy Catheter in Severely Calcified Large Vessel Stenoses

Background Intravascular lithotripsy (IVL) modifies superficial and deep vascular calcium by delivering pulsatile sonic pressure energy that fractures calcium in situ with the consequent enhancement of transmural vessel compliance, limitation of fibroelastic recoil, and optimization of stent implantation. To date, the use of IVL as an adjunct to facilitate stent implantation has been limited by large target vessel size and eccentricity of calcium distribution. Methods The Shockwave L6 IVL balloon delivery system includes 6 sonic energy emitters mounted on the shaft of a 30.0-mm long balloon with diameters ranging from 8.0 to 12.0 mm. The balloon nominal pressure is 4 atm. We describe the first human use of this novel IVL delivery system to facilitate covered stent implantation in severely calcified stenoses involving the distal abdominal aorta and bilateral iliac arteries. Results Full IVL balloon expansion was achieved at low pressure (3 atm), despite the severity of calcification, with subsequent safe and effective covered stent implantation. Conclusions The Shockwave L6 balloon appears to expand the application of IVL to the treatment of severely calcified large vessels, such as the abdominal aorta and iliac arteries.

Introduction

Advanced age and an increasing frequency of diabetes, systemic hypertension, and chronic kidney disease contribute to an increased prevalence and severity of vascular calcification.1 Calcified plaque negatively affects procedural, early, and late clinical outcomes after percutaneous vascular intervention. Moderate to severe vascular calcium leads to stent underexpansion, asymmetry, and malapposition,2,3 which may be associated with adverse clinical events such as restenosis and thrombosis.3 Multiple technologies have been developed to modify vascular calcification with the intent to optimize stent expansion and improve subsequent clinical outcomes.4,5 These technologies have been limited by target vessel size and the depth and eccentricity of calcium distribution. Intravascular lithotripsy (IVL) modifies both superficial and deep vascular calcium by delivering pulsatile sonic pressure energy that fractures calcium in situ, thus enhancing transmural vessel compliance and limiting fibroelastic recoil with a subsequent optimization of stent expansion.6-9 We report the first human use of a novel IVL delivery system designed to treat severely calcified aortic and large peripheral artery stenoses.

Device description

The Shockwave L6 IVL catheter is purpose-built to address severe calcification in large peripheral vessels (Central Illustration A). The balloon catheter is offered in 4 inflated diameters (8.0, 9.0, 10.0, and 12.0 mm), all of which are 30.0 mm in length and feature 6 sonic energy emitters incorporated into the shaft of the balloon. With its compact emitter design, the L6 offers a consistent, high sonic energy output across the entire length of the balloon (Central Illustration B), which differs from the energy profile of the M5/M5+ IVL balloon (Central Illustration C). The L6 is compatible with a 0.018-inch guide wire to provide the support needed in large vessel interventions. The L6 provides low-pressure lesion preparation to minimize complications related to barotrauma, and IVL therapy can be delivered at 2-4 atm with a nominal pressure of 4 atm and a rated burst pressure of 6 atm. The L6 balloon provides a maximum of 300 sonic pressure pulses.
Aortic case

A 65-year-old woman with a medical history of coronary artery disease, hyperlipidemia, hypertension, and diabetes mellitus was referred for evaluation and treatment of severe, bilateral, lifestyle-limiting claudication in August 2022. An aortoiliac duplex study showed a marked velocity elevation (~400 cm/s) in the distal abdominal aorta consistent with a severe stenosis. Monophasic flow was noted in the bilateral iliac systems.

Diagnostic angiography through the right radial artery showed a severely calcified 90% stenosis of the distal abdominal aorta, a 75% stenosis in the distal left superficial femoral artery, and occluded anterior tibial arteries bilaterally. Abdominal computed tomography angiography showed a heavily calcified, greater than 90% stenosis of the infrarenal abdominal aorta with a reference vessel diameter of 12.0-14.0 mm (Figure 1A, B). Revascularization options, both endovascular and surgical, were discussed, and the patient was referred for surgical consultation owing to the severity of vascular calcification. She elected to proceed with an endovascular approach because of the risks associated with surgery.

The patient returned to the catheterization laboratory for the interventional procedure in November 2022. A 25-cm 8F Brite Tip sheath (Cordis) was placed through the right common femoral artery and a 5F sheath was placed through the right radial artery. A 150.0-cm, 0.035-inch NaviCross support catheter (Terumo) was advanced to the abdominal aorta through the right radial sheath for angiographic imaging during the intervention. An abdominal angiogram was performed (Figure 1C), and a 0.018-inch guide wire was advanced across the abdominal aortic stenosis through the femoral sheath. IVL was performed (180 pulses) in the abdominal aorta using a 12.0 × 30.0-mm Shockwave L6 balloon inflated to 3 atm (Figure 1D). An 11.0 × 39.0-mm Viabahn VBX balloon-expandable covered stent (W.L. Gore) was deployed in the abdominal aorta (Figure 1E) and postdilation was performed with a 14.0 × 20.0-mm balloon. The final abdominal aortogram showed an excellent angiographic result (Figure 1F).

Common iliac artery case

A 74-year-old man with a medical history of coronary artery disease, hypertension, diabetes, tobacco use, hyperlipidemia, obstructive sleep apnea, and peripheral artery disease presented with severe bilateral claudication symptoms. A noninvasive vascular study revealed obstructive peripheral artery disease involving the bilateral common iliac arteries (CIAs).

Peripheral angiography from the right radial approach revealed severely calcified stenoses involving the bilateral CIAs (Figure 2A, B), and the decision was made to proceed with percutaneous revascularization. Bilateral common femoral 8F sheaths were placed using ultrasound guidance. Intravascular ultrasound (IVUS) of the right CIA revealed severe calcification with lumen narrowing (Figure 2C) and a reference vessel diameter of 11.7 mm (Figure 2D). The IVUS images of the left CIA also showed a severely calcified stenosis (Figure 2E) and a reference vessel diameter of 10.98 mm (Figure 2F). IVL was performed (150 pulses bilaterally) using a 12.0 × 30.0-mm Shockwave L6 balloon inflated to 3 atm. Viabahn VBX balloon-expandable covered stents were deployed bilaterally, followed by postdilation with a 12.0-mm diameter balloon in both covered stents. The postinterventional aortogram showed an excellent angiographic result (Figure 3A, B).
Discussion

Both a severely calcified abdominal aortic stenosis and bilateral CIA stenoses were safely and effectively treated with the novel Shockwave L6 balloon and Viabahn covered stents. Alternative treatment options would have included either surgical grafting/endarterectomy or standard balloon angioplasty followed by endovascular covered stent deployment. Owing to the density of the calcified plaque in these cases, high-pressure balloon angioplasty would likely have been necessary to achieve adequate predilation before stent deployment. Balloon angioplasty in large, heavily calcified vessels carries significant risks, including atheroembolization, arterial dissection, and perforation or vessel rupture. These risks are further increased when high pressures are required. Perforation or rupture, specifically involving the abdominal aorta and iliac arteries, may be particularly catastrophic because direct manual compression cannot be applied.8,9 Indeed, low-pressure (3 atm) L6 balloon inflations achieved full balloon expansion in both cases, and higher balloon pressures were not used until after covered stent implantation to optimize stent expansion.

In this context, it is important to note that the sonic pressure wave profile of the L6 differs from that of the currently available M5/M5+, S4 or C2 IVL catheters.6 In the other catheters, the peaks of the sonic pressure waves geographically correlate with the location of the emitters on the shaft of the balloon catheters. The M5/M5+ has the highest sonic pressure wave peak, which corresponds to the middle emitter on the balloon shaft, which has its own individual energy source (Central Illustration C).6 All other emitters are coupled and share a single energy source between them. Because of the position of the emitters on the shaft of the L6 balloon catheter, the sonic pressure wave peak is higher and more uniform across the surface of the balloon delivery system (Central Illustration B). These initial cases suggest that large vessel (abdominal aorta and iliac artery) calcium modification by the Shockwave L6 IVL balloon is feasible to facilitate stent deployment during percutaneous vascular intervention. Although more clinical experience is required, the Shockwave L6 balloon appears to expand the application of IVL to facilitate endovascular intervention in large vessels such as the abdominal aorta and CIAs.

Peer review statement

Deputy Editor Dean J. Kereiakes had no involvement in the peer review of this article and has no access to information regarding its peer review. Full responsibility for the editorial process for this article was delegated to Associate Editor Sahil A. Parikh.

Declaration of competing interest

J.D. Corl is a clinical investigator, speaker, and consultant for Shockwave Medical and has equity holdings in Shockwave Medical. Douglas Flynn and Timothy Henry have no relevant disclosures. Dean Kereiakes is a consultant for Shockwave Medical.

Funding sources

There was no financial support required or provided for this research.

Ethics statement and patient consent

Both patients signed informed consent forms before the procedure for transcatheter intervention including Shockwave IVL. These procedures were performed as part of a limited product release of an approved device, so institutional review board/ethics committee approval was not required.
Central Illustration. Shockwave L6 peripheral IVL balloon catheter. (A) Three channels with 6 emitters along the shaft of the 30-mm balloon. (B) Shockwave L6 sonic energy profile, which is more uniformly intense along the length of the balloon than observed with (C) the Shockwave M5/M5+ peripheral IVL balloon sonic energy profile. The L6 IVL balloon has a unique sonic energy profile. Panel C reproduced with permission from Kereiakes et al. [6]
Figure 2. Diagnostic imaging of bilateral iliac arteries. (A) Diagnostic angiogram of the right common iliac artery (CIA). (B) Diagnostic angiogram of the left CIA. (C) Intravascular ultrasound (IVUS) of severe, heavily calcified stenosis in the right CIA. (D) IVUS of the right CIA reference vessel. (E) IVUS of severe, heavily calcified stenosis in the left CIA. (F) IVUS of the left CIA reference vessel.
2,224.6
2023-05-01T00:00:00.000
[ "Medicine", "Engineering" ]
Identification of the newly observed $\Sigma_b(6097)^\pm$ baryons from their strong decays Two bottom $\Sigma_b(6097)^\pm$ baryons were observed in the final states $\Lambda_b^0\pi^-$ and $\Lambda_b^0\pi^+$ in $pp$ collisions by the LHCb collaboration, and their masses and widths were measured. In a $^{3}P_{0}$ model, the strong decay widths of the two ground $S$-wave and seven excited $P$-wave $\Sigma_b$ baryons have been systematically computed. Numerical results indicate that the newly observed $\Sigma_b(6097)^\pm$ are very possibly $\Sigma_{b2}^1({3\over 2}^-)$ with $J^P={3\over 2}^-$ or $\Sigma_{b2}^1({5\over 2}^-)$ with $J^P={5\over 2}^-$. The predicted decay widths of $\Sigma_b(6097)^\pm$ are consistent with the experimental measurement from LHCb. In particular, it may be possible to distinguish these two assignments through the ratios $\Gamma({\Sigma_b(6097)^\pm\to \Sigma_b^\pm\pi^0})/\Gamma({\Sigma_b(6097)^\pm\to \Sigma_b^{*\pm}\pi^0})$, which can be measured by experiments in the future. In the meantime, our results support the assignments of $\Sigma_b^\pm$ and $\Sigma_b^{*\pm}$ as the ground $S$-wave $\Sigma_b$ baryons with $J^P={1\over 2}^+$ and $J^P={3\over 2}^+$, respectively.

I. INTRODUCTION

There are two light $u$, $d$ quarks and one heavy $b$ quark in $\Sigma_b$ baryons, and the two light quarks couple to isospin 1. Four $\Sigma_b^\pm$ and $\Sigma_b^{*\pm}$ have been observed by the CDF collaboration [1,2]. Their spins and parities have not been measured by experiment; in quark models, they are assigned as the ground $S$-wave $\Sigma_b$ with $J^P={1\over2}^+$ and $J^P={3\over2}^+$, respectively. These assignments need confirmation in more ways. The masses and widths of these baryons from the Particle Data Group [3] are given in Table I. Recently, the data were improved in precision by the LHCb experiment [4]. In the same LHCb experiment, two bottom $\Sigma_b(6097)^\pm$ baryons were first observed in the final states $\Lambda_b^0\pi^-$ and $\Lambda_b^0\pi^+$ in $pp$ collisions, and their masses and widths were measured.

The identification of heavy baryons provides an excellent way to explore the structure and dynamics of baryons [5-9]. Therefore, the identification of $\Sigma_b(6097)^\pm$ is an important topic in the quark model. In Ref. [10], $\Sigma_b(6097)^\pm$ were explained as $P$-wave baryons with $J^P={3\over2}^-$ or $J^P={5\over2}^-$ based on a mass spectrum analysis and a strong decay calculation in a diquark picture. In Ref. [11], $\Sigma_b(6097)^\pm$ were also explained as $P$-wave baryons with $J^P={3\over2}^-$ or $J^P={5\over2}^-$ based on a strong decay analysis in a chiral quark model.

As a phenomenological method, the $^3P_0$ model has been employed to compute the OZI-allowed hadronic decay widths of hadrons ever since its appearance [12-15]. Though the bridge between the phenomenological $^3P_0$ model and QCD has not been established, some attempts have been made [16-18]. The $^3P_0$ model is also capable of exploring the dynamics and structure of baryons or multi-quark systems. Recently, the approach has been employed to study the structure of charmed baryons through their strong decays [19-23].

In this work, we study the $P$-wave possibility of $\Sigma_b(6097)^\pm$ in detail. Along the way, the ground $S$-wave assignments of $\Sigma_b$ and $\Sigma_b^*$ are examined. The work is organized as follows. In Sec. II, the $^3P_0$ model is briefly introduced, and some notations of heavy baryons and related parameters are indicated. We present our numerical results and analyses in Sec. III. In the last section, we give our conclusions and discussions. II.
$^3P_0$ MODEL, SOME NOTATIONS AND PARAMETERS

The $^3P_0$ model is also called the quark pair creation (QPC) model. It was first proposed by Micu [12] and further developed by Le Yaouanc et al. [13-15]. The basic idea of this model is that a quark pair $q\bar{q}$ is created from the QCD vacuum with the vacuum quantum numbers $J^{PC}=0^{++}$ and then regroups with the quarks from the initial hadron $A$ to form two daughter hadrons $B$ and $C$ [12]. For a bottom baryon, there are three ways of regrouping [19,23], where $uub$ (the constituent quarks of the initial baryon $A$) could be replaced by $ddb$, and the created quark pair $u\bar{u}$ could be replaced by $d\bar{d}$.

In the $^3P_0$ model, the hadronic decay width $\Gamma$ of a process $A \to B + C$ is

$$\Gamma = \pi^2\,\frac{p}{m_A^2}\,\frac{1}{2J_A+1}\sum_{M_{J_A},M_{J_B},M_{J_C}}\left|\mathcal{M}^{M_{J_A}M_{J_B}M_{J_C}}\right|^2 .$$

In the equation, $p$ is the momentum of the daughter baryon in $A$'s center-of-mass frame, and $m_A$ and $J_A$ are the mass and the total angular momentum of the initial baryon $A$, respectively. $m_B$ and $m_C$ are the masses of the final hadrons. $\mathcal{M}^{M_{J_A}M_{J_B}M_{J_C}}$ is the helicity amplitude, whose explicit form can be found in Refs. [19-21,23]. A factor 2 multiplies the pair-creation strength $\gamma$ in the amplitude because two of the regrouping ways give the same final states.

The matrix element of the flavor wave functions $\phi_i$ ($i = A, B, C, 0$) can be computed in terms of the isospins [15,20], where the factor $f$ takes the value $({2\over3})^{1/2}$ or $-({1\over3})^{1/2}$ for isospin ${1\over2}$ or $0$ of the created quark pair, respectively. $I_A$, $I_B$ and $I_M$ represent the isospins of the initial baryon, the final baryon and the final meson, while $I_{12}$, $I_3$ and $I_4$ denote the isospins of the relevant quarks.

The space integral is evaluated with simple harmonic oscillator (SHO) wave functions for the baryons [20,24,25], where $N = 3^{3/4}$ represents a normalization coefficient of the total wave function. Explicitly, the SHO wave function involves the Laguerre polynomial $L_n^{L+1/2}(p^2/\beta^2)$ and the spherical harmonic function $Y_{LM_L}(\Omega_p)$; the relation between the solid harmonic polynomial and the spherical harmonic function is also used. In the above expressions, the Jacobi coordinates $\rho$ and $\lambda$ [26] were employed.

Notations and internal structures of the heavy baryons in the quark model are explained in Refs. [19,22,23,27]. In this model, there are two $S$-wave and seven $P$-wave $\Sigma_b$. In other quark models, the structure and dynamics in baryons may be different; in fact, this difference is an indication of the complexity of baryons. Accordingly, the number of $P$-wave $\Sigma_b$ states may differ between models. For example, there are five $P$-wave $\Sigma_b$ in Refs. [10,11,29-31]. For a practical calculation, the quantum numbers of the two $1S$-wave and seven $1P$-wave $\Sigma_b$ baryons are presented in Table II. In this table, $L_\rho$ denotes the orbital angular momentum between the two light quarks, $L_\lambda$ denotes the orbital angular momentum between the bottom quark and the two-light-quark system, and $L$ is the total orbital angular momentum ($\vec{L} = \vec{L}_\rho + \vec{L}_\lambda$). $S_\rho$ denotes the total spin of the two light quarks, $J_l$ is the total angular momentum of $L$ and $S_\rho$ ($\vec{J}_l = \vec{L} + \vec{S}_\rho$), and $J$ is the total angular momentum of the baryon ($\vec{J} = \vec{J}_l + \vec{s}_b$, with $s_b = {1\over2}$). For $\Sigma^L_{bJ_l}$, the superscript $L$ distinguishes different total orbital angular momenta. A tilde indicates $L_\rho = 1$; its absence indicates $L_\rho = 0$.
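For reference, the daughter momentum $p$ entering the width formula above follows from standard two-body kinematics (a textbook relation rather than anything specific to this paper), evaluated in the rest frame of $A$:

$$p = \frac{\sqrt{\bigl[m_A^2-(m_B+m_C)^2\bigr]\bigl[m_A^2-(m_B-m_C)^2\bigr]}}{2\,m_A}\,.$$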
III. NUMERICAL RESULTS

A. Decays of $\Sigma_b$ and $\Sigma_b^*$

$\Sigma_b^\pm$ and $\Sigma_b^{*\pm}$ were first observed in the final states $\Lambda_b^0\pi^\pm$ in $pp$ collisions by the CDF collaboration [1] and were interpreted as the lowest-lying $\Sigma_b^\pm$ and $\Sigma_b^{*\pm}$ baryons. We compute their decays into $\Lambda_b^0\pi^\pm$ both as $1S$-wave states and as $1P$-wave excitations. $\Lambda_b^0\pi^+$ is the only decay mode of $\Sigma_b^+$ and $\Sigma_b^{*+}$, and $\Lambda_b^0\pi^-$ is the only decay mode of $\Sigma_b^-$ and $\Sigma_b^{*-}$. In the $^3P_0$ model, the hadronic decay widths of these four observed $\Sigma_b^\pm$ into $\Lambda_b^0\pi^\pm$ in the two $S$-wave and seven $P$-wave assignments are computed and presented in Table IV, where a '0' indicates a vanishing decay channel. In comparison with the experimental results (see Table I), the ground $S$-wave assignments of $\Sigma_b^\pm$ and $\Sigma_b^{*\pm}$ are supported.

As pointed out in the second section, there are seven $P$-wave $\Sigma_b$ baryons. The masses of low-lying bottom baryons have been systematically predicted in many references, such as [28-31]. If $\Sigma_b(6097)^\pm$ are $P$-wave $\Sigma_b$ baryons, there exist five possible OZI-allowed hadronic decay modes: $\Lambda_b^0\pi^\pm$, $\Sigma_b^\pm\pi^0$, $\Sigma_b^{*\pm}\pi^0$, $\Lambda_b^0(5912)\pi^\pm$ and $\Lambda_b^0(5920)\pi^\pm$. The strong decay widths of $\Sigma_b(6097)^-$ into these five channels are calculated in the seven different $P$-wave assignments and presented in Table V. The results for $\Sigma_b(6097)^+$ are presented in Table VI. In the calculation, $\Sigma_b$ and $\Sigma_b^*$ are set to the ground $S$-wave $\Sigma^0_{b1}({1\over2}^+)$ and $\Sigma^0_{b1}({3\over2}^+)$, as indicated in the previous subsection. $\Lambda_b^0(5912)$ and $\Lambda_b^0(5920)$ were observed in $\Lambda_b^0\pi^+\pi^-$ in $pp$ collisions by LHCb [35] and interpreted as the orbitally excited $\Lambda_b^{*0}(5912)$ and $\Lambda_b^{*0}(5920)$, though their exact assignment as $P$-wave $\Lambda_b$ has not been made. For simplicity, $\Lambda_b^0(5912)$ and $\Lambda_b^0(5920)$ are set to $\Lambda_b({1\over2}^-)$ and $\Lambda_b({3\over2}^-)$, respectively.

Based on the numerical results, $\Sigma_b(6097)^\pm$ are very possibly $\Sigma_{b2}^1({3\over2}^-)$ or $\Sigma_{b2}^1({5\over2}^-)$, where there is no $\rho$-mode excitation inside. Within theoretical uncertainties, the total decay widths ($\Gamma \approx 19-20$ MeV) are consistent with those measured experimentally by LHCb. In both assignments, $\Lambda_b^0\pi^\pm$ is the dominant decay channel, with branching fractions of $\approx 72-74\%$. The branching fraction ratios $\Gamma(\Sigma_b(6097)^\pm\to\Sigma_b^\pm\pi^0)/\Gamma(\Sigma_b(6097)^\pm\to\Sigma_b^{*\pm}\pi^0)$ differ between the two assignments and can be employed by experiment to distinguish $\Sigma_{b2}^1({3\over2}^-)$ from $\Sigma_{b2}^1({5\over2}^-)$. They depend weakly on the parameters in the $^3P_0$ model.

There are some uncertainties in the $^3P_0$ model. In addition to the masses of the hadrons involved in the decay, the strong decay widths depend on parameters such as $\gamma$ and $\beta$. In principle, $\beta$ can be derived directly in the quark model. Unfortunately, the $\beta$ values for baryons have not yet been determined, owing to the complexity of the quark dynamics in baryons; $\beta$ was therefore set to the meson values, as in existing references. These uncertainties may change some of the predicted decay widths.
2,507.4
2018-10-16T00:00:00.000
[ "Physics" ]
Evaluating the Expressive Range of Super Mario Bros Level Generators : Procedural Content Generation for video games (PCG) is widely used by today's video game industry to create huge open worlds or enhance replayability. However, there is little scientific evidence that these systems produce high-quality content. In this document, we evaluate three open-source automated level generators for Super Mario Bros, in addition to the original levels used for training. These are based on Genetic Algorithms, Generative Adversarial Networks, and Markov Chains. The evaluation was performed through an Expressive Range Analysis (ERA) on 200 levels with nine metrics. The results show how analyzing the algorithms' expressive range can help us evaluate the generators as a preliminary measure to study whether they respond to users' needs. This method allows us to recognize potential problems early in the content generation process, in addition to taking action to guarantee quality content when a generator is used.

Introduction

Research on Procedural Content Generation for video games studies methods that create game levels automatically through computational algorithms [1]. Currently, these computational algorithms can save money [2], time, and effort in various areas, such as engineering [1], music [3], and art [4]. The total person-hours needed to complete certain activities can be reduced because these AI-driven systems imitate human action to some degree and deliver results as good as those that a game designer could create [5].

Thanks to PCG, companies have adapted their workflows to be more competitive and achieve better results. There are even situations where artists have begun to be replaced by these intelligent systems to create games more quickly and economically while maintaining quality [6]. Nowadays, companies not only settle for the initial release but also add new content to keep their audience captive. We know this strategy as "Downloadable Content" (DLC) [7], where companies offer additional content that is sold separately, allowing them to generate greater profits. PCG can be a powerful tool for creating DLCs and, thus, offering better services to users.

In this article, we carry out a study of the expressiveness of open-source automatic level generators for Super Mario Bros (SMB). These are based on (1) Genetic Algorithms (GAs) [8], (2) Generative Adversarial Networks (GANs) [9], and (3) our implementation of Markov Chains (MCs) (https://github.com/hansschaa/MarkovChains_SMB (accessed on 13 June 2024)). This article's main contribution is the comparison of three generative spaces using Expressive Range Analysis (ERA) [10]. Two of the three evaluated implementations are peer-reviewed [8,9], the third is our implementation of Markov Chains, and all of them are contrasted with the SMB levels used as training data. The levels were analyzed with nine metrics through heatmaps and box/whisker graphs. As seen in Figure 1, we used the tiles from the video game Super Tux [11] throughout this article because of its GPL license. To carry out this study, 200 boards were generated with each generator, and then, for each level, the nine metrics were calculated. Finally, these values were graphed to perform a comparative analysis between generators. The results show that the GA and MC have a noticeably wider expressive range than the GAN. The GA benefits from its exploitation and exploration capacity to find diverse levels, and the MC, through the training data, creates levels similar to those that a human can design.
This document is structured as follows: Section 2 briefly presents the state of the art; Section 3 reports the implemented algorithms; Section 4 presents the experiments carried out and the results obtained; and, finally, Sections 5 and 6 present the discussion and conclusions, respectively.

Background

Creating PCG systems can be an arduous and challenging task; the literature specifies five characteristics that these should exhibit [1]. These are the following:

• Speed: How fast can the generator deliver the content [12]? This metric measures the time that PCG systems take to generate content. We can categorize these methods as online (the video game has a game loop that allows the generator to create the content at runtime) or offline (the video game does not allow the generator to create content at runtime, so it must be executed outside of the user experience).

• Reliability: How faithful is the generator to the configuration imposed on it [13]? Sometimes, we need some features to be strictly adhered to. The generator should only produce content that satisfies the previously configured constraints, e.g., so that games are solvable.

• Controllability: Does the generator allow designers to customize the required content [13]? A highly controllable system will allow greater flexibility and freedom for the designers or engineers using the generator.

• Expressiveness and diversity: Does the expressiveness of the generator allow for the generation of diverse and interesting content [14]? PCG systems are required to give rise to content valued by the audience. For example, one could have an Age of Empires map generator, but if it presents the same biomes with different dimensions, it could bore the player.

• Creativity and credibility: In some cases, it is useful to know that the generator produces content similar to that of humans [15].

Creating a quality generator is not easy, and evaluating it is even harder. People differ in psychology, motor skills, ability, and what amuses them. Many of these metrics involve a subjective factor: the audience is widely diverse, and we, as researchers and developers, must learn to read that in order to create ad hoc content for each of them. We can broadly divide evaluation methods into the following four groups:

1. Static functions: Static functions are widely used in search-based PCG to guide the search toward quality content. Three types of functions can be observed: direct, simulation-based, and interactive functions [1]. Some examples are the number of pushes needed to solve a Sokoban board [5] or the location of resources on a map for strategy video games [16].

2. Expressive Range Analysis: Analyzing the algorithms that generate content is quite useful, since it allows us to have an early view of the behavior of a generator [10]. However, these methods should never replace the evaluations the target audience can make. Researchers often use heatmaps to position the generated content based on two variables: linearity and lenience.

3. User testing: These methods are often expensive and time-consuming. Although they provide first-hand information, they require many resources to carry out. Among these, we can find playtesting, Turing tests, Likert surveys, and post-interviews, among others [2].

4.
Bots: With advances in machine learning and reinforcement learning, it has become possible to create bots that evaluate levels automatically. This allows evaluation of the content as if a person were playing the experience [17]. For example, bots have been trained with Reinforcement Learning (RL) to play PCG levels of Super Mario Bros while simulating the actions of a human and, thus, evaluating their playability [18].

PCG for Super Mario Bros

Super Mario Bros (SMB) is a widely known platformer video game. Its origin dates back to 1985 in Japan, when it was distributed for the Famicom [19]. Its popularity, simplicity, and documentation, among other factors, make it an attractive study subject. Below are some key events in the study of SMB.

The generation of SMB levels began with the general study of platformer games [20]. The authors created categories of tile patterns: basic patterns (patterns without repetition), complex patterns (repetition of the same component but with certain changed settings, such as a sequence of platforms with holes of increasing length), compound patterns (alternating between two types of basic patterns), and composite patterns (two components placed close together in such a way that they require a different type of action or a coordinated action, which would not be necessary for each one individually). Then, they established a link between the desired game rhythm and music, inspired by previous research [21]. They report that although relating music to the design of platformer levels seems somewhat discordant, this depends greatly on the rhythm. When the user must jump over obstacles, they must follow a game rhythm. The design applied to video games of this genre creates a rhythmic sequence based on the placement of enemies and obstacles. These were the bases for several later studies on how to evaluate platformer levels. For example, regarding difficulty, the authors of [22] proposed a metric based on the player's probability of loss. For this, they created five types of scenarios where each event (jumping, climbing stairs, dodging bullets) had an associated probability of loss.

In the same line of research on measuring difficulty, evolutionary preference learning via a simple neural network was also used to assess fun, frustration, and challenge levels [23]. In 2009, the Mario AI Competition began, aiming to create bots to play SMB levels. These have allowed the levels to be evaluated according to their playability, expanding the possible analyses of SMB levels.
PCG Algorithms

Various algorithms have been used to generate SMB levels. With the rise of long short-term memory (LSTM) networks, such algorithms have created playable SMB levels similar to those that a human would build by introducing information about the agent's routes to solve them [24]. Large Language Models (LLMs) have also been used to create levels through different prompts, achieving excellent results [25]; the authors implemented a fine-tuned GPT-2 large language model, and 88% of the levels were playable. It has also been proven that these architectures can give rise to highly structured content, such as Sokoban levels; the results improve considerably with the amount of training data provided [26]. The popularity of LLMs is such that a large number of studies showing their potential in video games have been published [27-29]. In the same line as the use of ML, through reinforcement learning, agents capable of designing SMB levels have been created, after which a neural-network-assisted evolutionary algorithm repairs them. The authors assert that their proposed framework can generate infinite playable SMB levels with different degrees of fun and playability [30]. Unlike these black-box models, other level generation systems have also been proposed, such as constructive algorithms [31-33] and search-based algorithms [34-36].

In addition to the aforementioned methods, Markov Chains have been a popular approach for content generation [37]. These are known as a particular example of a Dynamic Bayesian Network (DBN) [38]; they map states through probabilities of transitioning between them. Related to the procedural generation of SMB levels, several works can be found that generally use human-created levels to sample new columns of tiles based on the frequency at which they appear [39-41]. Given the stochastic nature of the Markov Chain, some of the created levels may not be playable, which is why hybrid strategies that incorporate search algorithms to join segments of levels have been studied [42].

Expressive Range Analysis

Analyzing the expressive range of algorithms as an early quality assessment measure has been one of the most popular strategies within the scientific community for PCG. The steps for performing an ERA are the following [10] (a minimal end-to-end sketch follows this list):

1. Determining the metrics: The set of metrics to be evaluated must be chosen; they ideally emerge from the generator's point of view, since we can control these variables.

2. Generating the content: A representative sample of the generator's output is created, and the previously defined metrics are calculated for it.

3. Visualizing the generative space: The scores reflect the expressive range of the generator. This can be displayed through heatmaps or histograms to find patterns or gaps.

4. Analyzing the impacts of the parameters: Comparisons can now be made by modifying the generator variables and determining their expressiveness.
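As a minimal end-to-end illustration of these four steps, the following Python sketch (ours; the toy generator and the two metrics are stand-ins rather than the generators evaluated in this article, and only numpy and matplotlib are assumed) generates a sample, computes two metrics per level, and plots the resulting heatmap.

import numpy as np
import matplotlib.pyplot as plt

WIDTH, HEIGHT = 100, 14  # level size in tiles

def toy_generator(rng):
    # Stand-in generator: a random walk over floor heights plus sparse enemies.
    heights = np.clip(np.cumsum(rng.integers(-1, 2, WIDTH)) + 4, 1, HEIGHT - 2)
    enemies = rng.random(WIDTH) < 0.05  # 5% chance of an enemy per column
    return heights, enemies

def linearity(heights):
    # 1 = perfectly flat; based on height differences between adjacent columns.
    return 1.0 - np.abs(np.diff(heights)).sum() / (len(heights) - 1)

def lenience(enemies):
    # Crude difficulty proxy: fraction of columns holding an enemy.
    return enemies.mean()

rng = np.random.default_rng(0)
samples = [toy_generator(rng) for _ in range(200)]          # step 2: generate content
points = [(linearity(h), lenience(e)) for h, e in samples]  # step 1: compute metrics

xs, ys = zip(*points)                                       # step 3: visualize
plt.hist2d(xs, ys, bins=20, cmap="viridis")
plt.xlabel("linearity")
plt.ylabel("lenience")
plt.title("Expressive range (toy generator)")
plt.savefig("era_heatmap.png")

Step 4 then amounts to re-running this loop with different generator parameters and comparing the heatmaps.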
To carry out an Expressive Range Analysis, most studies select the variables by intuition or simply try free-for-all heat graphics. To achieve greater knowledge of the implemented PCG system, methods have been created to study the characteristics that have the greatest impact on the video game; the variables are thus selected so that the analysis uses a much more representative set of the level qualities to be evaluated [43]. Graphically, heatmaps and box/whisker graphs have been used to statistically study the generative power of SMB level generators [44]. In another case, categorizations have been proposed for metrics, and neural networks have been used to estimate how good the aesthetics are or how complicated a game level is [45].

Multi-Population Genetic Algorithm

This is a multi-population genetic algorithm for the procedural generation of SMB levels. The central idea of this algorithm is to evolve terrain, enemies, coins, and blocks independently. Each of these has its own encoding and fitness function. When the evolutionary algorithm finishes the specified generations, the best individuals from each population are chosen to build a level. When combining the populations to create the level, the algorithm makes sure to position each element in the correct place. For example, enemies are placed on the highest floor tile in each column, and coins are placed at a height defined by the genotype, as are blocks.

Representation

Each individual is encoded as a vector of integers; thus, the level is represented by the union of these four vectors. Each one follows the following logic:

• Floor: The floor vector has the length of the level, where each element takes values between 0 and 15 and indicates the height at which the floor tile is placed in that column.

• Blocks: The blocks follow a structure similar to that of the floor vector. The difference is that each element takes values between 0 and 4 to indicate the type of block (improvement, coin, solid, destructible). These are placed four spaces above the highest floor tile, so only one block can be placed per column.

• Enemies: The vector of enemies has the same definition as the vector of blocks, except that enemies are located immediately above the ground. Each of its elements can take values between 0 and 3 because there are three types of enemies.

• Coins: The vector of coins works like the floor vector, where each value indicates the height at which the coin is located.

Fitness Function

The fitness function is the same for everything except the floor, which is evaluated under the concept of entropy [46]. Entropy measures the unpredictability of an event and is used here to quantify the unpredictability of the ground. The entropy function is applied to segments of the floor; this decision was made to avoid a completely straight floor (minimum entropy) or a very stepped one (maximum entropy). A minimal sketch of this idea is given at the end of this subsection.

The other level elements use the concept of "dispersion" [47]. By definition, sets of elements with a high average pairwise distance have high dispersion. The goal of the algorithm is to minimize dispersion.
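The following sketch is our own reading of the entropy-based floor fitness: Shannon entropy is computed per fixed-size floor segment, and both the segment size and the target entropy are illustrative choices, not values from the paper.

import numpy as np

def segment_entropy(heights, segment=10):
    # Mean Shannon entropy of floor heights, computed per segment.
    # Low entropy ~ flat floor; high entropy ~ very stepped floor.
    entropies = []
    for start in range(0, len(heights), segment):
        chunk = heights[start:start + segment]
        _, counts = np.unique(chunk, return_counts=True)
        p = counts / counts.sum()
        entropies.append(-(p * np.log2(p)).sum())
    return float(np.mean(entropies))

def floor_fitness(heights, target=1.5):
    # Fitness peaks when segment entropy is near an intermediate target,
    # penalizing both extremes described in the text.
    return -abs(segment_entropy(heights) - target)

# Example: a flat floor scores worse than a mildly varied one.
flat = np.full(100, 4)
varied = np.tile([3, 4, 4, 5, 4], 20)
print(floor_fitness(flat), floor_fitness(varied))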
Deep Convolutional Generative Adversarial Network

GANs are neural models capable of delivering interesting content; here, the corpus of levels stored in the Video Game Level Corpus (VGLC) (https://github.com/TheVGLC/TheVGLC (accessed on 13 June 2024)) is used to create SMB levels. Although the created GAN produces good content, it can be improved through a Covariance Matrix Adaptation Evolution Strategy (CMA-ES) so that, through different fitness functions, it is possible to discover levels in the latent space that maximize the desired properties.

Process

The applied approach is divided into two phases. The first is the training of the GAN with an SMB level, encoded as a multidimensional matrix; the generator operates on a Gaussian noise vector using this same representation and is trained to create SMB levels, while the discriminator is used to discern between the existing and generated levels. When this process is completed, we can understand the GAN as a system that maps from genotype to phenotype: it takes a latent vector as an input variable and generates a tile-level description of an SMB level. The CMA-ES is then used to search through the latent space for levels with different properties [9].

Training

The algorithm that trains the GAN is called Wasserstein GAN (WGAN), and it follows the original DCGAN architecture. It also uses batch normalization in the generator and discriminator after each layer. Unlike the original architecture [48], the study's implementation uses ReLU activation functions in all generator layers, including the output, since this produces better results.

In this phase, each tile is represented by an integer extended to a one-hot encoding vector, so the inputs to the discriminator are 10 channels of 32 × 32. For example, in the first channel, floor tiles are marked with 1 and voids with 0. The dimension of the latent vector input to the generator is 32. When running the evolution, the final output of dimension 10 × 32 × 32 is cropped to 10 × 20 × 14, and the vector at each tile is converted into an integer using the ArgMax function.

Markov Chains

For this work, an algorithm implementing Markov Chains was programmed to create SMB levels. As in the previous section, we also used the VGLC. The pseudocode in Algorithm 1 shows the procedure for generating an SMB level with a length of 100. We describe it in more detail below (a minimal code sketch is given just before Table 1).

1. ExtractColumns: We extract columns from the VGLC levels and add them to a vector.

2. RemoveDuplicates: The repeated columns are removed. This is essential, since the transition matrix will then be calculated over the remaining states.

3. GetTransitionMtx: The transition matrix is a matrix or data structure that holds, for each column, its successor columns, along with the frequency at which the column in question precedes them.

4. AppendNewColumn: This function finds the next column based on the transition matrix and adds it to the level structure.

5. Level construction: Once the columns that will form the level have been specified, the level can be built and exported to the required format.

Metric Computation

Once the generators were running, software was programmed in Java 17.0.11 to extract the metrics of each level and, thus, perform the ERA. This software is public and stored on GitHub (https://github.com/hansschaa/SBM-Expressive-Range-Study (accessed on 13 June 2024)). As seen in Table 1, there are 9 metrics related to difficulty and level structure.
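Returning to the Markov Chain generator: a minimal Python sketch of the column-based sampling in Algorithm 1 is given below. The function names mirror the steps listed above, but the implementation and the toy corpus are our own (deduplication happens implicitly through the dictionary keys, and the dead-end restart is an illustrative choice).

import random
from collections import Counter, defaultdict

def get_transition_counts(levels):
    # Count how often each column is followed by each successor column.
    transitions = defaultdict(Counter)
    for level in levels:  # each level: list of column strings, left to right
        for current, nxt in zip(level, level[1:]):
            transitions[current][nxt] += 1
    return transitions

def append_new_column(level, transitions, rng):
    # Sample the next column proportionally to its observed frequency.
    successors = transitions.get(level[-1])
    if not successors:  # dead end: restart from a random known state
        level.append(rng.choice(list(transitions)))
        return
    cols, counts = zip(*successors.items())
    level.append(rng.choices(cols, weights=counts, k=1)[0])

def generate_level(levels, length=100, seed=0):
    rng = random.Random(seed)
    transitions = get_transition_counts(levels)
    level = [rng.choice(levels[0])]  # seed with a column from the corpus
    while len(level) < length:
        append_new_column(level, transitions, rng)
    return level

# Toy corpus: 'X' = floor, '-' = empty, 'E' = enemy (bottom-to-top per column).
corpus = [["X---", "X--E", "XX--", "X---", "X---", "XX--"]]
print("".join(c[0] for c in generate_level(corpus, length=20)))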
Table 1. Metrics evaluated for each SMB level.

Empty Spaces: Percentage of empty spaces.
Negative Spaces: Percentage of spaces that are reachable by Mario.
Interesting Elements: Percentage of elements that are not floor tiles or empty spaces.
Significant Jumps: Number of jumps needed to complete the level, calculated as the number of jumps over holes and enemies.
Lenience: Considers the number of enemies and power-ups in the level as a measure of the associated difficulty. Here, it is calculated as the number of enemies, multiplied by a factor related to the difficulty of killing those enemies, minus the number of power-ups.
Linearity: Linearity of the game level. A completely linear stage means a flat level. Calculated as the sum of height differences between each pair of adjacent columns divided by the number of columns.
Enemy Compression (EC): For a margin "m", we calculate how many enemies surround others within a distance "m", giving a compression measurement. High compression means that there are many groups of enemies.
Density: Quantity of floor tiles mounted on top of others of the same type.
Enemy Count: Number of enemies.

The metrics were calculated so that a high value indicates a high presence. For example, a linearity of 1 indicates that the level is very linear.

Experiments and Results

We generated 200 boards of 100 tiles with each of the generators to have a large amount of information and, thus, capture their true generative nature (Table 2 shows the symbology used). Then, the levels were imported into the software to calculate their metrics. We normalized these values, taking the maximum and minimum of all resulting values as the upper and lower thresholds, respectively. Finally, to create the charts, we divided the data into four files (one per generator plus the original levels) and imported them into a Jupyter notebook (https://github.com/hansschaa/SMB-ERA-Graphs (accessed on 13 June 2024)) to create the heatmaps and box/whisker graphs. Finally, the graphs were analyzed; we describe each one and compare them below.

Regarding the format of the levels, the generator based on GAs [8] considers a subgroup of the tiles used for the training of the GAN and MC algorithms. We were therefore forced to simplify some game tiles to just one character. For example, 'Q' and 'S' (an empty question block and blocks that Mario can break) from the study that implemented a GAN [9] are now represented only by the character 'B' (block). Likewise, the bottoms of the pipes are represented only by 'X' (floor) and not with [, ], <, >. This allows us to unify metrics and make the logical representation of each PCG algorithm comparable.

We also include the original levels used in the MC generator (https://github.com/TheVGLC/TheVGLC/tree/master/Super%20Mario%20Bros/Original (accessed on 13 June 2024)) as an additional point of comparison and analysis. Some final results can be seen in Figure 2. The hyperparameters of the algorithms were extracted from each of the articles [8,9]. Tables 3 and 4 show the hyperparameters for the GA and the GAN, respectively; the MC-based algorithm has only one hyperparameter, called n-level, which is 2.
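Before turning to the expressive-range plots, here is an illustration of how some of the Table 1 metrics might be computed for a level stored as a 2D array of tile characters. This is our own reading of the definitions above, not the Java implementation from the repository.

import numpy as np

# Rows are heights (0 = bottom), columns are x positions.
# 'X' = floor, 'E' = enemy, '-' = empty.
def column_heights(level):
    # Height of the topmost floor tile in each column (0 if none).
    floors = (level == "X")
    return np.where(floors.any(axis=0),
                    floors.shape[0] - np.argmax(floors[::-1], axis=0), 0)

def linearity(level):
    # Sum of height differences between adjacent columns / number of columns.
    h = column_heights(level)
    return np.abs(np.diff(h)).sum() / level.shape[1]

def density(level):
    # Floor tiles stacked directly on top of other floor tiles.
    floors = (level == "X")
    return int((floors[1:] & floors[:-1]).sum())

def enemy_count(level):
    return int((level == "E").sum())

# Tiny 4-row x 6-column example level (top row written first).
rows = ["------", "----E-", "--XX--", "XXXXXX"]
level = np.array([list(r) for r in reversed(rows)])
print(linearity(level), density(level), enemy_count(level))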
Expressive Range

Heatmaps and box/whisker graphs were created to perform a visual reading of the generators. One of the most commonly studied concepts in the generation of video game levels is difficulty. Figure 3b shows the corresponding heatmaps for the three generators and the SMB levels. The GAN produces mostly linear and less diverse levels, while the GA and MC produce semi-linear levels. Regarding lenience, the GAN does not create highly difficult levels in comparison with the GA, which, through the evolutionary method, can build challenging scenarios. The original SMB levels cover a larger area of the generative space with respect to these two metrics; this is very different from the behavior of the other generators, whose respective distributions have a lower variance. Visually, the GAN generator is the most dissimilar. The levels created through Genetic Algorithms and Markov Chains are those that come closest to the characteristics of the original levels; however, a more in-depth analysis must be performed to draw this conclusion with confidence.

Figure 3b,c is also intended to show the degree of difficulty of the generated content. Having enemies too close together can make it difficult to solve the level, since the player has a limited jumping height, and rows of enemies can kill them. Here, one can see that the MC generates various configurations on the y-axis, obtaining a wide expressive range regarding the compression of enemies. The GAN obtains poor performance, while the GA concentrates all of the values, giving rise to a low diversity of enemy distributions.

The heatmaps in Figure 3a,d relate to the levels' design in terms of appearance and navigation. Figure 3a shows how the GA and MC generators obtain a similar linearity. The GA and MC differ mainly in how the floor tiles are stacked, resulting in denser levels for the GA generator than for the MC. Regarding the GAN, its levels are highly linear with a low density, which results in SMB levels with a low number of columns featuring complex ground structures. Again, the SMB levels have a wide distribution, as seen on the y-axis, where the displayed density runs along the entire axis. Additionally, the heatmap in Figure 3d shows a limited number of interesting elements. The GA produces the greatest number of elements other than soil, but with a low variance in comparison with the MC generator. In this case, there is a similarity in behavior between the original SMB levels and the GAN and MC generators. Still, the MC generator exhibits greater monotony between this pair of variables, covering a larger area of the generative space where its projection is almost linear. Last, Figure 3e shows again how the SMB levels cover a much more uniform space than the other generators. This characteristic is desirable, since high diversity is needed in the expressiveness of the algorithms. The three generators distribute their data in a similar way, with the greatest variation with respect to the calculated metrics given by the MC generator. Curiously, these levels escape the expressive range of the original levels: despite the original levels having been provided as training data, the Markov Chains manage to generate content not previously seen with respect to the evaluated metrics. This may be caused by the number of columns that the MC considers when creating the transition matrix, causing the patterns to be examined locally and not globally, as in the GA and the GAN.
To analyze the generated levels further, we constructed four figures with box/whisker graphs of normalized data to observe the differences among the generators. The variables studied were the average enemy compression, enemy count, linearity, and lenience. Figure 4d shows how the median of the GA generator is very different from that of the GAN and MC, supporting the idea that this generator creates complex levels with respect to difficulty, that is, with high numbers of holes and enemies. This is also supported by Figure 4a,b, where it can be seen that the GA obtains levels with many enemies and a high average compression thereof. Figure 4a,c show that the MC generator has a high expressive range when compared to the other generators in terms of linearity and enemy compression, producing diverse levels in terms of structure and difficulty. The data shown by the MC generator are very similar to the original levels, except in Figure 4d, where the MC levels seem more challenging.

Discussion

The evaluated generators differ in their approaches, each with its own advantages and disadvantages depending on the implementation. For instance, training data can be fed to machine learning algorithms such as a GAN, and the results depend on the quality of this phase. However, these are fast methods capable of execution at runtime. As can be seen in Figure 5, the GAN sometimes produced incoherent content, which would detract from the user experience. This can be fixed through constructive algorithms or other generative approaches that consider constraints making the generated content playable [49]. As observed, the MC generator exhibited a wide expressive range in several metrics. It is the one that distributed the evaluated metrics most uniformly within the plots; the other generators showed a reduced generative space concentrated in a small range of values, which did not provide much diversity in the final content. GAs are recognized for being highly configurable, debuggable, and controllable, making them one of the most favored methods for generating content. However, while effective, GAs are slow and tend to fall into local optima easily. To address this, Quality Diversity algorithms [14] aim to deliver a diverse and high-quality set of individuals as a product.

Conducting an ERA early on can help decide whether to use one method over another depending on the practitioner's needs. It is not costly and does not require an extensive programming period for calculating metrics and constructing graphs. However, the question of whether there are heuristics that can bring us closer to human judgment remains. These metrics cannot replace user testing but serve as an initial probe in analyzing procedural content generators for video games.

Conclusions

This paper evaluates three automatic level generators for Super Mario Bros together with their training data. These are based on Genetic Algorithms, Generative Adversarial Networks, and Markov Chains. We tested 200 levels and 9 metrics per generator, performing the evaluation through an Expressive Range Analysis.
Expressive Range Analysis is useful in the early evaluation stages, as heatmaps allow us to clearly visualize whether algorithms exhibit certain desired characteristics. We observed how Genetic Algorithms show a wide expressive range despite their early convergence. The presented example uses four different populations, allowing high locality in the search space and generating diverse content. Markov Chains are efficient due to their simplicity and the speed with which they are executed. It is important to have a large corpus of levels to guarantee greater diversity in the results; however, like ML methods, they are complicated to control. GANs produced good content but were sometimes incoherent, not very diverse, and had a limited expressive range.

In future work, it is necessary to include more generators. There is a research gap regarding the evaluation of machine-learning-based generators for platformer levels. It is also necessary to include an evaluation with agents to gain more information about the levels, such as their playability and navigation. Although some levels were played, an automatic method is required to obtain metrics regarding the agent and how it overcomes the game level. It would also be interesting to investigate the correlations between the studied metrics and human perception, in order to change them or to pay attention to those relevant to the study [43]. Also, it would be very useful to carry out a study of the search space that each generator reaches to obtain better-founded conclusions about its generative power.

Figure 2. Examples of the levels generated by each generator. Most levels present similarities, for example, in the absence of structures in the GAN generator or the lack of structural verticality in the GA generator. (a) Level generated by the GA generator. (b) Level generated by the GAN generator. (c) Level generated by the MC generator.

Figure 3. Expressive range of each generator. Each pair of variables was selected to study relevant characteristics. Density, linearity, and negative space represent the complexity of the level's navigation; the lenience, average enemy compression, and enemy count variables refer to the degree of challenge; and, finally, interesting elements correspond to the number of interactive elements (power-ups, coins, enemies) in the level. (a) Density vs. linearity. Dense levels with a high linearity can be boring to play. (b) Lenience vs. linearity. Lenience and linearity can help us estimate a level's hardness. (c) Average EC vs. enemy count. Numerous enemies can lead to very challenging levels. (d) Interesting elements vs. negative space. Much negative space without interesting elements can result in repetitive structures that are far from challenging. (e) Empty spaces vs. significant jumps. A high number of free spaces can result in more complex situations than those that allow greater navigation of the stage without too many jumps.

Figure 4. Boxplots for each generator to compare a single variable of interest. Each of these allows us to observe the dispersion of the data achieved by each generator. The description of each variable is found in Table 1. (a) Average enemy compression. (b) Enemy count. (c) Linearity. (d) Lenience.

Figure 5. Incoherent results from the GAN generator.

Table 2. Symbols used for SMB level encoding.
Table 3. Hyperparameters for the Genetic Algorithm.
Table 4. Hyperparameters for the Generative Adversarial Network.
6,993
2024-07-11T00:00:00.000
[ "Computer Science" ]
Blind spots between quantum states The overlap of a large quantum state with its image under tiny translations oscillates swiftly. We show here that complete orthogonality generically occurs at isolated points. Decoherence, in the Markovian approximation, lifts the correlation minima from zero much more quickly than the Wigner function is smoothed into a positive phase space distribution. In the case of a superposition of coherent states, blind spots depend strongly on the positions and amplitudes of the components, but they are only weakly affected by relative phases and by the various degrees and directions of squeezing. The blind spots of coherent state triplets are special in that they lie close to a hexagonal lattice: further superpositions of translated triplets, specified by nodes of one of the sublattices, are quasi-orthogonal to the original triplet and to any state likewise constructed on the other sublattice.

Introduction

Interference between quantum states has no equivalent in classical mechanics. The simplest way to generate it is by superposing identical copies of the same initial state, displaced in position or momentum, in, e.g., a generalized two-slit experiment. One produces finer fringes the larger the separation between the states; that is, a large separation in momentum generates fine fringes in the position intensities, and vice versa. Both these cases give rise to oscillations of the Wigner function [1], which represents the superposed state in phase space: if both copies have individual Wigner functions that are well localized in phase space, the oscillations that lie between them are most pronounced when the individual states are sufficiently separated for the overlap between them to be very small. The fringes disappear as the copies are brought together.

The phenomena studied here belong to an opposite regime. Though we still consider the displacement of identical copies, their individual extent is assumed to be much larger than their separation, i.e., their phase space areas are much larger than $\hbar$. Rather than the intensity of probability in position or momentum, it is the overlap itself between the pair of states, such as produced in an interferometer, that oscillates as a function of the displacement. Even though one generally expects large overlaps for small translations, one may even obtain complete orthogonality with the initial state. By increasing the extent of the state, one thus maximizes the effect of a very small translation. We refer to displacements for which the overlap is zero as quantum blind spots. It is as if the state were oblivious to its translation, right beside it, 'stepping on its toes'.

A displacement may be generated continuously, by an interaction with a two-level quantum system, as in the example of Alonso et al. [2], where the blind spots are related to complete decoherence. The action of a quantum Hamiltonian, approximated as a linear function of position and momentum in the (classically small) region where the state is defined, also provides such a translation. The oscillations of the overlap of an initial state with its evolving translations can then be used to measure parameters of the driving Hamiltonian. Toscano et al. [3] have shown that the standard quantum limit (SQL) for the precision of quantum measurements [4] can then be surpassed by choosing a specific initial state: a compass state, i.e., a particular superposition of separated coherent states.
We here study the overlap structure of this and alternative extended states with all their possible translations. Is the overlap especially sensitive to decoherence? How delicate is the balance between the locations, the amplitudes and the phases of a superposition of coherent states for a blind spot to arise, or to disappear? What is the effect of squeezing and rotating the component coherent states? Do other extended states, such as excited states of anharmonic oscillators, exhibit blind spots?

It is found that blind spots resulting from the overlap of a translation are a general feature of any extended quantum state. Only if the state has a generalized parity symmetry, i.e. if it is invariant with respect to reflection through a phase space point, may overlap zeroes occur along continuous displacement lines rather than at isolated points. The ubiquitous Schrödinger cat states, as well as the compass state, are then understood to be symmetric counterexamples. Regular, or irregular, patterns of blind spots manifest delicate wavelike properties of quantum mechanics. The rich structures that emerge remind one of quantum carpets [8], though they are more akin to so-called sub-Planckian features of pure quantum states [9]. A striking consequence of purity, the invariance of the correlations with respect to Fourier transformation [10,11,12], is a prime manifestation of the conjugacy of large and small scales and turns out to be essential to the following analysis. In principle, the zeroes featured here may be directly observable through manipulation and interference of simple quantum states, such as carried out in quantum optics [13,14]. Though generically isolated, blind spots are not constrained by analyticity, as are the zeroes of the Bargmann function and, hence, the Husimi function [15]. Unlike those, which often lie in shallow evanescent regions [16], we here deal with sharp indentations on a background of maximal correlations.

In section 2 we review the general theory for the Wigner function and its Fourier transform, the characteristic function, or the chord function. These complete representations of quantum mechanics are fundamentally based on reflection and translation operators, as discovered by Grossmann [5] and Royer [6]. They thus provide the natural setting for the present theory. The overlap between translated pure states is then related to a phase space correlation function, which is also meaningful for the mixed states of open systems. In section 3, the nodal lines of the real and imaginary parts of the chord function are analyzed: their intersection determines isolated blind spots, in the absence of a reflection symmetry. A theory for the blind spots of superpositions of coherent states or squeezed states, i.e. Schrödinger cat states and their generalizations, is developed in section 4. Such superpositions of several coherent states have recently become experimentally accessible [7]. It turns out that the case of a triplet of coherent or squeezed states has the special property that its blind spots form a regular pattern of displacements on a hexagonal lattice, whatever the locations, amplitudes or phases of the three component states. This structure is described in section 5. In contrast to the robustness of the blind spots of pure states with respect to any variation of parameters, they are remarkably sensitive to decoherence.
Indeed, they will be shown in section 6 to be much more sensitive than the oscillations of the Wigner function, whose disappearance is often taken as a sure indication of the loss of quantum coherence. One should note that the washing out of fine oscillations of the Wigner function corresponds to the decay of the characteristic function at large arguments, whereas we here analyze phenomena at small arguments. Thus, a density operator may still have negative regions in its Wigner function even though any trace of its blind spots has been wiped out.

Phase space representations

Recall that, for systems with a single degree of freedom, the corresponding classical phase space is a plane, with coordinates $x = (p, q)$, momenta and positions. The uniform translations of this space, $x \to x + \xi$, define the space of chords, $\{\xi\}$. The corresponding quantum translation (or displacement) operators, $\hat{T}_\xi$, transform position states, $|q\rangle \to |q + \xi_q\rangle$ (within an overall phase), and momentum states, $|p\rangle \to |p + \xi_p\rangle$. For a general state, $|\Psi\rangle$, we have $|\Psi\rangle \to |\Psi_\xi\rangle$ and, in the special case of the ground state of the harmonic oscillator, this is transported into a coherent state: $\hat{T}_\xi|0\rangle = |\xi\rangle$.

Given the corresponding pure state density operators, $\hat{\rho} = |\Psi\rangle\langle\Psi|$ and $\hat{\rho}_\xi = |\Psi_\xi\rangle\langle\Psi_\xi| = \hat{T}_\xi\hat{\rho}\hat{T}_{-\xi}$, the quantum phase space correlations, whose zeroes we seek, are defined as the expected value of the translated state. In terms of the operator trace this is

$$C(\xi) = \mathrm{tr}\,[\hat{\rho}_\xi\,\hat{\rho}]. \qquad (1)$$

The key point is that, for a pure quantum state, $C(\xi) = |\langle\Psi|\Psi_\xi\rangle|^2$, so the blind spots lie at intersections of nodal lines of both the real and the imaginary parts of $\langle\Psi|\Psi_\xi\rangle$, which is generally a complex function of the chords.

In the present context, it is best to let the translation operators themselves represent quantum operators. In the case of density operators, $\chi(\xi) = \mathrm{tr}\,[\hat{T}_{-\xi}\,\hat{\rho}]$ defines the (complex) chord function, also known as the Weyl function, or as one of several quantum characteristic functions. Note that the chord function has here been normalized so that $\chi(0) = \mathrm{tr}\,\hat{\rho} = 1$. Sometimes used as a theoretical tool, it is a full representation of the density operator $\hat{\rho}$ and, in the case of a pure state, we have $\chi(\xi) = \langle\Psi|\Psi_\xi\rangle$, i.e. the translation overlap itself [10,12,17]. In terms of the position representation [10,11],

$$\chi(\xi_q, \xi_p) = \int \mathrm{d}q\,\left\langle q + \frac{\xi_q}{2}\right|\hat{\rho}\left|q - \frac{\xi_q}{2}\right\rangle e^{-i\xi_p q/\hbar}. \qquad (2)$$

A double Fourier transform then defines the celebrated Wigner function [1], $W(x) = \mathrm{FT}\{\chi(\xi)\}$, which is real, though it can have negative values. The fact that the Wigner function can also be written as $W(x) = \mathrm{tr}\,[\hat{R}_x\,\hat{\rho}]$ (within a constant) reflects the Fourier relation of the translation operator itself to the reflection operator, $\hat{R}_x$ [5,6,17]; an analogous integral expression in the position representation holds for the Wigner function.

The phase space correlation coincides with the correlation of the Wigner function [11,12],

$$C(\xi) = 2\pi\hbar \int \mathrm{d}x\; W(x)\,W(x+\xi), \qquad (4)$$

that is, just as if the Wigner function were an ordinary (positive) probability distribution and $\chi(\xi)$ were its characteristic function. (Note the two-dimensional vector product, or skew product, in the exponent; it is present in all double Fourier transforms in this article.)

The structure of zeroes of $C(\xi)$ is not obtained directly from those of the Wigner function. Generically, the zeroes of $W(x)$ lie along nodal lines, because the Wigner function is real. (In the case of more than one degree of freedom, these become codimension-1 nodal surfaces in the higher dimensional phase space.)
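Purity is all that is needed for the identity quoted above; unpacking the trace in (1) gives it in one line:

$$C(\xi) = \mathrm{tr}\,[\hat{\rho}_\xi\,\hat{\rho}] = \mathrm{tr}\,\bigl[\,|\Psi_\xi\rangle\langle\Psi_\xi|\Psi\rangle\langle\Psi|\,\bigr] = |\langle\Psi|\Psi_\xi\rangle|^2 = |\chi(\xi)|^2 .$$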
For the chord function this is not the general case: no constraint obliges the nodal lines of the real and the imaginary parts of $\chi(\xi)$ to coincide, so, typically, they intersect at isolated zeroes. (In the general case, these become codimension-2 surfaces.) We show below how the real and the imaginary parts of the chord function are related, respectively, to the diagonal and the off-diagonal elements in a decomposition of the density operator with respect to the eigen-subspaces of the parity operator, $\hat{R}_0$, or of any other reflection operator, $\hat{R}_x$.

Reflection symmetry is not a precondition for the surprising property of Fourier invariance of the phase space correlations for all pure states [10,11,12]: $\mathrm{FT}\{C(\xi)\} = C(\xi)$. This is simply a consequence of the fact that, for pure states, we may combine $C(\xi) = |\chi(\xi)|^2$ with (4). Thus a conjugacy between large and small scales is generated, leading to tiny sub-Planck structures [9] for quantum pure states with sizeable correlations over an area that is appreciably larger than Planck's constant, $\hbar$. In short, large structures imply large wave vectors for the oscillations of $W(x)$ and for the oscillations of the real and the imaginary parts of $\chi(\xi)$. These real functions have nodal lines which must approach the origin as the overall extent of the state is increased. It remains to show that, generally, these nodal lines intersect transversally.

These considerations are easily adapted to systems with several degrees of freedom: the origin of the higher dimensional phase space lies in the codimension-1 nodal surface of the imaginary part of the chord function, and this generically intersects the nodal surfaces of the real part of the chord function along codimension-2 surfaces. Therefore, the overlap zeroes for translated copies of the state will generically give rise to isolated blind spots in arbitrary two-dimensional sections of the higher dimensional phase space. Thus, the likelihood that a continuous group of translations, evolving from the origin along a straight line in the space of chords, comes close to a point of zero overlap does not depend on the number of degrees of freedom, and it is certain to occur for centrosymmetric systems. Generically, blind spots lie close to the origin for all extended systems, because the invariance of the phase space correlations with respect to Fourier transformation holds regardless of the number of degrees of freedom.

Real and imaginary nodal lines

The reflection operators, $\hat{R}_x$, may be considered 'generalizations' of the parity operator, $\hat{R}_0$, which corresponds classically to the phase space map $x \to -x$. Because $\hat{R}_x$ is Hermitian (as well as unitary) and $(\hat{R}_x)^2 = 1$, there are only two eigenvalues, $\pm 1$, which define a pair of orthogonal even and odd subspaces of the Hilbert space for each choice of the reflection centre, $x$. Hence, any pure state $|\psi\rangle$ may be decomposed into a pair of parity-defined components, an even $|\psi_e\rangle$ and an odd $|\psi_o\rangle$, i.e. eigenstates of $\hat{R}_0$ with eigenvalues $+1$ and $-1$, respectively. The corresponding pure state density operator can then be decomposed into a pair of components, $\hat{\rho}_D$ and $\hat{\rho}_N$, defined by

$$\hat{\rho}_D = |\psi_e\rangle\langle\psi_e| + |\psi_o\rangle\langle\psi_o|, \qquad \hat{\rho}_N = |\psi_e\rangle\langle\psi_o| + |\psi_o\rangle\langle\psi_e|.$$

This decomposition is generalized to mixed states by just adding the diagonal, or nondiagonal, contributions from each pure state in the mixture.
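Writing $\chi_D$ and $\chi_N$ for the chord functions of $\hat{\rho}_D$ and $\hat{\rho}_N$, the reality properties asserted in the next paragraphs can also be checked directly by conjugating with the parity operator; this short derivation of ours only uses $\hat{R}_0^2 = 1$, $\hat{R}_0\hat{T}_{-\xi}\hat{R}_0 = \hat{T}_{\xi}$, and the Hermiticity of the components:

$$\chi_D(\xi) = \mathrm{tr}\,[\hat{T}_{-\xi}\,\hat{\rho}_D] = \mathrm{tr}\,[\hat{R}_0\hat{T}_{-\xi}\hat{R}_0\;\hat{R}_0\hat{\rho}_D\hat{R}_0] = \mathrm{tr}\,[\hat{T}_{\xi}\,\hat{\rho}_D] = \chi_D(-\xi) = \chi_D^*(\xi),$$

so that $\chi_D$ is real, while the same steps with $\hat{R}_0\hat{\rho}_N\hat{R}_0 = -\hat{\rho}_N$ give $\chi_N(\xi) = -\chi_N(-\xi) = -\chi_N^*(\xi)$, i.e. a purely imaginary $\chi_N$.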
It is well known that the value of the Wigner function at each point, $x$, is defined in terms of the diagonal component [6,12]. Furthermore, for $x = 0$, the 'diagonal' terms making up $\hat{\rho}_D$ have a definite parity. Since the chord representation of such a centrosymmetric density operator is a mere rescaling of its Wigner function, it follows that the diagonal component of the chord function is entirely real. This does not necessarily imply that the chord representation of $\hat{\rho}_N$ is purely imaginary, but this is indeed the case. To see this, recall that [17]

$$\chi(-\xi) = \mathrm{tr}\,[\hat{T}_{\xi}\,\hat{\rho}] = \left(\mathrm{tr}\,[\hat{T}_{-\xi}\,\hat{\rho}]\right)^* = \chi^*(\xi), \qquad (7)$$

where the asterisk indicates the complex conjugate and the Hermiticity of $\hat{\rho}$ was used. Since $\chi(-\xi) = \chi^*(\xi)$, it follows that the real and the imaginary parts of the chord function are even and odd functions, respectively.

Unlike the value of the Wigner function, which depends exclusively on the diagonal component, $\hat{\rho}_D$, the chord function depends also on the nondiagonal component, $\hat{\rho}_N$. Indeed, the latter determines exclusively the imaginary part of the chord function, whereas $\hat{\rho}_D$ accounts entirely for the real part. These are shown in figure 1 for the superposition of three coherent states. The imaginary part is zero at the origin, because $\mathrm{tr}\,\hat{\rho}_N = 0$, so the origin always lies on an imaginary nodal line. In contrast, the real part of the chord function attains its maximum value there, since $\mathrm{tr}\,\hat{\rho}_D = 1$. The generic scenario is then that an imaginary nodal line across the origin intersects transversally the real nodal lines, which avoid the origin. For a centrosymmetric state, the imaginary part is identically zero, leading to continuous lines of zero overlap, but any slight symmetry breaking isolates the zeroes into blind spots. Nonetheless, for nearly centrosymmetric states, $|\chi(\xi)|^2$ will remain very small along its real nodal lines. The case of the Schrödinger cat state is specially pathological: breaking the symmetry of the coefficients of the pair of coherent states generates straight nodal lines of the imaginary part of the chord function that are parallel to the real nodal lines; hence, the blind spots vanish entirely.

It may be pondered that the parity decomposition into even and odd subspaces depends entirely on the choice of origin as a reflection centre. Therefore, the real and imaginary parts of the chord function are not invariant with respect to a shift of origin. Indeed, the effect of such a translation by $\eta$ is merely multiplication by a phase factor,

$$\chi(\xi) \;\longrightarrow\; e^{\,i\,\eta\wedge\xi/\hbar}\,\chi(\xi).$$

Nonetheless, this phase factor does not alter the blind spots, because we can still decompose the chord function into the same pair of (phase-shifted) components, and their nodal lines still intersect at the same blind spots, resulting from the intersection of the nodal lines in figure 1.

Centrosymmetric states, i.e. those for which the commutator $[\hat{\rho}, \hat{R}_x] = 0$ for some reflection centre $x$, are thus the important exceptions to nodal lines that intersect transversally. In these cases, the chord function is essentially a real rescaling of the Wigner function [12], so that its zeroes also lie along nodal lines rather than at isolated points.

Superpositions of coherent states

General coherent states allow for further quantum unitary transformations, corresponding to linear classical transformations, beyond the translations which define them. Leaving this possibility implicit, we merely denote the superposition as

$$|\Psi_N\rangle = \sum_{n=0}^{N} a_n\,|\eta_n\rangle.$$

Superconducting resonators and nonlinear optical processes, among other techniques [7,18,19,20], can generate these kinds of states. A translation of this state is then, within an overall phase,

$$\hat{T}_\xi\,|\Psi_N\rangle = \sum_{n=0}^{N} a_n\, e^{\,i\,\xi\wedge\eta_n/2\hbar}\,|\eta_n + \xi\rangle.$$
So, in the correlation, C(ξ), we deal with (N + 1)² terms of the form ⟨η_n | η_m + ξ⟩. One should note that each diagonal term, ⟨η_n | η_n + ξ⟩, determines the internal correlations for the single coherent state |η_n⟩. The wave function for coherent states of the quantum harmonic oscillator of frequency ω is

⟨q | η⟩ = (ω/πℏ)^{1/4} exp[ −(ω/2ℏ)(q − q_η)² + (i/ℏ) p_η (q − q_η/2) ] ,

so that the chord function for a coherent state is

χ_η(ξ) = exp[ −(ω ξ_q² + ξ_p²/ω)/4ℏ + (i/ℏ) η ∧ ξ ]   (with η ∧ ξ = p_η ξ_q − q_η ξ_p) ,

whereas its Wigner function is

W_η(x) = (1/πℏ) exp[ −(ω (q − q_η)² + (p − p_η)²/ω)/ℏ ] .

Both the Wigner function and the chord function are transformed classically by linear canonical transformations, so generalized coherent states are also two-dimensional Gaussians, and the elliptic level curves for W_η(x) and for |χ_η(ξ)|² can be rotated as well as elongated. In the case of a multiplet of coherent states, both the Wigner function and the chord function split up into diagonal and nondiagonal terms:

W_N(x) = Σ_n |a_n|² W_n(x) + Σ_{n≠m} a_n a_m* W_{nm}(x) ,

and likewise for χ_N(ξ). The former terms, W_n(x), are well known to have Gaussian peaks at each coherent state centre η_n, while W_{nm}(x) is represented by interference fringes, localized halfway between η_n and η_m. This pattern is structurally stable with respect to variations of the coefficients of |Ψ_N⟩. Note that this decomposition for the Wigner and the chord function gives the false impression that individual pairs of coherent states are the fundamental building blocks of their full phase space picture, and so of the blind spots. However, we shall show that the distribution of the blind spots changes drastically on adding a single coherent state to the initial multiplet.

By focusing on very short ranges, one can dissect the simple skeleton that supports the pattern of blind spots. First, consider the diagonal terms of the full Wigner function, i.e. the Gaussian peaks W_n(x). Even if they are squeezed in different ways, all their widths will be O(ℏ^{1/2}), where we consider ℏ as a small parameter; that is, the widths are much smaller than the separations, |η_n − η_m|, which are O(ℏ⁰). Altogether, this positive part of the full W_N(x) can be identified with the Wigner function for the mixed quantum state obtained from the same set of generalized coherent states:

ρ̂_mix = Σ_n |a_n|² |η_n⟩⟨η_n| .

The corresponding mixed state chord function, resulting from a Fourier transform, is then

χ_mix(ξ) = Σ_n |a_n|² χ_n(ξ) .

Just as W_n(x), each of the individual coherent state chord functions, χ_n(ξ), has a width O(ℏ^{1/2}), though it is centred on the origin. Therefore, on a tiny scale O(ℏ), we may approximate these by infinite widths, leading to the approximate chord function

χ_δ(ξ) = Σ_n |a_n|² exp[(i/ℏ) η_n ∧ ξ] ,   (15)

which is independent of the relative phases in the superposition |Ψ_N⟩. This expression is analogous to the amplitude of the (far field) diffraction pattern from point scatterers with weights |a_n|², placed at η_n (except for a π/2 rotation, because of the vector product). Squaring for the diffraction intensities, we obtain the small chord approximation, C(ξ) → C_δ(ξ) = |χ_δ(ξ)|², so that the approximate zeroes can be read off directly from χ_δ(ξ). We remark that a moment expansion, such as that adopted in [2], cannot be extended to the calculation of blind spots, such as obtained here. Note that each zero corresponds to a singularity of the phase of the chord function and, hence, a wave dislocation [21].
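The zeroes of the small chord approximation can be located directly in practice. Here is a minimal numerical sketch (the weights, the triplet geometry and the value ℏ = 0.075 are illustrative assumptions): it evaluates the sum (15) on a grid of chords and reads off the deepest minimum of C_δ = |χ_δ|²:

```python
import numpy as np

# Small-chord approximation: chi_delta(xi) = sum_n w_n exp[(i/hbar) wedge(eta_n, xi)],
# with wedge((p1,q1),(p2,q2)) = p1*q2 - q1*p2.
hbar = 0.075
etas = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0]])   # (p, q) centres
w = np.ones(3) / 3.0                                    # weights |a_n|^2

def chi_delta(xp, xq):
    acc = np.zeros_like(xp, dtype=complex)
    for (p, q), wn in zip(etas, w):
        acc += wn * np.exp(1j * (p * xq - q * xp) / hbar)
    return acc

# Correlation C_delta = |chi_delta|^2 on a small grid of chords.
g = np.linspace(-1.5, 1.5, 601)
XP, XQ = np.meshgrid(g, g, indexing="ij")
C = np.abs(chi_delta(XP, XQ))**2

# Blind spots appear as deep minima of C (exact zeroes for equal weights).
i, j = np.unravel_index(np.argmin(C), C.shape)
print("deepest minimum:", C[i, j], "at xi =", (g[i], g[j]))
```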
(Figure 2: the overall phase pattern for the chord function of a coherent state triplet (ℏ = 0.075) with equal weights and vertices located at (0, 0), … .)

There would be nothing to wonder at if (15) really represented a diffraction pattern. It is the alternative interpretation as a field of correlations, for the same source, that assumes a paradoxical flavour: the zeroes of this simple pattern signal the neighbouring presence of blind spots in the overlap of the original multiplet of coherent states with its translations. The relevance of the 'diffraction pattern' for the correlation lies in the latter's Fourier invariance. For a true mixture of coherent states, the correlations would be exclusively the Fourier transform of the chord intensity (not equal to |χ_δ(ξ)|²), with the result that the zeroes would be washed away. It is the outer part of the full pure state chord function, shown in figure 1, which was neglected, that guarantees the Fourier invariance of C(ξ), notwithstanding its negligible direct effect on the diffraction pattern itself. Indeed, each nondiagonal term, χ_{nm}(ξ), contributing to the chord function is a (complex) Gaussian, also of width O(ℏ^{1/2}), centred on η_n − η_m. Together they form the outer hexagon in figure 2b, corresponding to the simple external oscillations in figure 2a. The condition that the separation of all pairs of coherent states is O(ℏ⁰) implies that these outer maxima account for only an exponentially small perturbation of the central diffraction pattern.

Hexagonal lattice of blind spots for a coherent state triplet

We have seen that the commonly chosen case of N = 1 is nongeneric, i.e. Schrödinger cat states have either a continuum of blind spots, or none at all. Generically, the correlation zeroes are isolated, but they are only disposed in a periodic pattern for N = 2. To see this, consider each term in equation (15) as a vector of length |a_n|² in the complex plane. Then the condition for a given chord, ξ_0, to specify a blind spot is that the N + 1 vectors form a closed polygon. However, a triangle is the only polygon that is completely determined (within discrete symmetries) by the lengths of its sides. For large N, the statistical properties of the distribution of zeroes will be approximately those of random waves, but these are complex, so that the blind spots are the intersections of real and imaginary random nodal lines [22].

For N = 2, a triplet of coherent states, the three intensities, |a_n|², determine a unique triangle in the complex plane. Since the first vector has been chosen with the direction θ_0 = 0, the other angles are given by the law of cosines, i.e.

θ_1 = ± arccos[ (|a_2|⁴ − |a_0|⁴ − |a_1|⁴) / (2 |a_0|² |a_1|²) ]   (16)

for allowed values of the arccos (and likewise θ_2 by an index permutation). For each choice of sign in equation (16), the solution of the pair of linear equations

(η_1 − η_0) ∧ ξ_0 = ℏ (θ_1 + 2π k_1) ,   (η_2 − η_0) ∧ ξ_0 = ℏ (θ_2 + 2π k_2) ,   (17)

determines two oblique sublattices of blind spots, ξ_0(k_1, k_2): the green and the yellow points in figure 3b. Together these form the full hexagonal lattice shown in figure 3. Since the vectors between the sublattices equal the vectors from the chord origin to either sublattice, any superposition of triplets located at the "green spots" in figure 3b is quasi-orthogonal to superpositions of triplets located at the "yellow spots". The star of David inscribed in the hexagon of large scale peaks in figure 3a defines, in its turn, an interior hexagon, which is just a rescaling of the blind spot hexagons of the lattice in figure 3b. This is a prime example of the conjugacy of large and small scales generated by the Fourier invariance of the correlations.
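The construction can be followed step by step in code. The sketch below (weights and centres are illustrative assumptions, and the equation labels refer to the reconstructed forms of (16) and (17) above) closes the triangle of intensity vectors via the law of cosines and solves the linear system for a patch of one sublattice of blind spots:

```python
import numpy as np

hbar = 0.075
L = np.array([0.4, 0.35, 0.25])        # side lengths |a_n|^2 (a valid triangle)
eta0, eta1, eta2 = np.array([0., 0.]), np.array([0., 1.]), np.array([1., 0.])

# Law of cosines, equation (16): theta_0 = 0 by choice.
th1 = np.arccos((L[2]**2 - L[0]**2 - L[1]**2) / (2 * L[0] * L[1]))
# Closure fixes theta_2 once the sign of theta_1 is chosen.
th2 = np.angle(-(L[0] + L[1] * np.exp(1j * th1)))
assert np.isclose(abs(L[0] + L[1]*np.exp(1j*th1) + L[2]*np.exp(1j*th2)), 0, atol=1e-12)

# Equations (17): wedge(eta_n - eta_0, xi0) = hbar * (theta_n + 2 pi k_n).
def wedge_row(v):                      # wedge(v, xi) as a linear form in xi = (p, q)
    return np.array([-v[1], v[0]])     # v /\ xi = v_p*xi_q - v_q*xi_p

A = np.array([wedge_row(eta1 - eta0), wedge_row(eta2 - eta0)])
for k1 in range(-1, 2):
    for k2 in range(-1, 2):
        rhs = hbar * np.array([th1 + 2*np.pi*k1, th2 + 2*np.pi*k2])
        xi0 = np.linalg.solve(A, rhs)  # one sublattice point per (k1, k2)
        print(k1, k2, xi0)
```

Choosing the opposite sign of θ_1 (and hence of θ_2) in the same loop produces the second sublattice.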
For well separated coherent states, such that the components of η_n − η_m are O(ℏ⁰), the deviations of the zeroes of C(ξ) from the lattice are minute, as shown in figure 3b; that is, each small green or yellow 'spot' contains a true zero. The small deviations can be easily calculated by Newton's method. It is important to point out that the coefficients of a superposition of states remain invariant during a unitary evolution, while, for instance, the positions of the generalized coherent states keep moving. Thus, an experimental measurement of a pair of correlation zeroes, ξ_0(t), at any time can, in principle, be fed into the pair of equations (17) to solve the inverse problem, that is, to obtain the coherent state positions η_n(t).
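For the refinement step, a two-dimensional Newton iteration on F(ξ) = (Re χ_δ, Im χ_δ) converges rapidly from a lattice point. The following sketch (assumed weights and centres) uses the analytic Jacobian of the sum (15):

```python
import numpy as np

hbar = 0.075
etas = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0]])
w = np.array([0.4, 0.35, 0.25])

def F_and_J(xi):
    # F = (Re chi_delta, Im chi_delta); J is its 2x2 Jacobian in xi = (xi_p, xi_q).
    F = np.zeros(2)
    J = np.zeros((2, 2))
    for (p, q), wn in zip(etas, w):
        phase = (p * xi[1] - q * xi[0]) / hbar      # wedge(eta, xi) / hbar
        z = wn * np.exp(1j * phase)
        F += [z.real, z.imag]
        dphase = np.array([-q, p]) / hbar           # gradient of the phase
        dz = 1j * z * dphase                        # d z / d xi
        J += np.array([dz.real, dz.imag])
    return F, J

xi = np.array([0.3, 0.2])                           # rough starting guess
for _ in range(20):
    F, J = F_and_J(xi)
    step = np.linalg.solve(J, -F)
    xi = xi + step
    if np.linalg.norm(step) < 1e-12:
        break
F, _ = F_and_J(xi)
print("blind spot near", xi, "|chi_delta|^2 =", np.dot(F, F))
```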
Decoherence

Interaction with an uncontrolled external environment evolves a pure state into a mixed state. The Wigner function of an extended state gradually loses its interference fringes and becomes everywhere positive in all cases [28,29], be it a multiplet of coherent states, or an excited eigenstate of an anharmonic oscillator. Viewed in the chord representation, this effect is translated into a loss of amplitude of χ(ξ) for all chords outside a neighbourhood of the origin of area ℏ. However, we are now focusing on structures that lie within this supposedly classical region. So, even though blind spots and the oscillations that give rise to them are a delicate quantum effect, it is not a priori clear how they are affected by decoherence. The crucial point is that it is no longer possible to identify the square of the chord function with the phase space correlations; that is, we must use C(ξ) = FT{|χ(ξ)|²} for mixed states. Even though the theory of open quantum systems is in no way as complete as the quantum description of unitary evolution, appropriate for closed systems, it will be established here that the observable correlations, C(ξ), are exceptionally sensitive to decoherence, even though χ(ξ) preserves its oscillatory structure and its multiple zeroes.

The full evolution of any quantum system coupled to a larger system, in the role of environment, is still unitary, and it can be resolved into the completely positive evolution of the reduced system by tracing away the environment [24]. In the limit of weak coupling, the reduced system evolution becomes memory independent (Markovian) and is governed by the Lindblad master equation [23],

∂ρ̂/∂t = −(i/ℏ) [Ĥ, ρ̂] + (1/2ℏ) Σ_j ( 2 L̂_j ρ̂ L̂_j† − L̂_j† L̂_j ρ̂ − ρ̂ L̂_j† L̂_j ) ,

where L̂_j is a Lindblad operator, which models the interaction between the system and the environment. Assuming that each operator L̂_j is a linear function of the momentum p̂ and position q̂ operators, and that the Hamiltonian, Ĥ, is quadratic, leads to the chord phase space equation [28],

∂χ/∂t (ξ, t) = {H(ξ), χ} + α ξ · ∂χ/∂ξ − (1/2ℏ) Σ_j |L_j(ξ)|² χ .

Here {·, ·} is the classical Poisson bracket, L_j(x) = (l′_j + i l″_j) · x and H(x) = x · H · x are the Weyl representations of L̂_j and Ĥ, and α ≡ Σ_j l″_j ∧ l′_j is the dissipation coefficient. The solution of this equation is given by a product of two factors. One is the unitarily evolved chord function, χ_u(ξ, t) = χ(ξ(t)), which propagates classically as a Liouville distribution, just as the Wigner function. This is multiplied by a Gaussian function, exp(−ξ · M_t · ξ/ℏ), that depends only on the Hamiltonian and the Lindblad coefficients, but not on the initial chord function. The explicit general form of M_t, in which R_t = exp(2JHt), is given in [28], but various special cases have been previously obtained [25,26,27,29].

The possibility of deducing the complete local structure of the correlations, C(ξ, t), directly from that of the chord function still exists for this special but important class of quantum Markovian evolutions. The reason is that |χ(ξ, t)|² also factors, such that |χ_u(ξ, t)|² is multiplied by the squared Gaussian. A Fourier transform then supplies the evolved correlation as merely the convolution of |χ_u(ξ, t)|² with the Fourier transform of the squared Gaussian, because χ_u(ξ) still represents a pure state and, hence, FT{|χ_u(ξ, t)|²} = |χ_u(ξ, t)|². As a result of this coarse-graining of the positive function |χ_u(ξ, t)|², the correlation zeroes are lifted in time, as illustrated in figure 4. Nonetheless, they can still be detected as local minima, remaining up to a lifting time, τ_l, which depends on the disposition of the coherent states, as shown in figure 4b. This time is reached when the correlation, C(ξ, t), approaches its Gaussian envelope. Simple dimensional considerations lead to the definition

τ_l ≡ (ℏ/A) t_p ,   (23)

where A is the area of the triplet and t_p is the positivity time of the Wigner function [29,28]. One should note that, though t_p depends on the Lindblad coefficients, the ratio τ_l/t_p is hardly affected by them. We have here chosen a very special example where χ_u(ξ, t) does not evolve, so as to keep the correlation minima over the blind spots. Nonetheless, the relation (23) between the different decoherence times holds much more generally, even if the area, A(t), of the evolving extended state is not constant.

(Figure 4. Effect of decoherence on correlation zeroes: Markovian evolution of the phase space correlation for a triplet located at (0, 0), (0, d) and (d, 0), along the line ξ_p = −ξ_q/2. In this case χ_u(ξ, t) = χ(ξ, 0) and the Lindblad operators are p̂ and q̂. In a) the zeroes are seen to be lifted with increasing time, and even the local minima are distinguishable only until the lifting time τ_l. Here, d = 5, so the area of the triplet is A = 12.5 and ℏ = 0.075. b) shows the difference, ∆, between the phase space correlation and its Gaussian envelope as a function of time, at the first zero, for the same configuration as in a), but varying the area, A = d²/2, of the triplet.)

Recalling that t_p measures the survival time of the interference fringes in the Wigner function, we find that the survival time for the correlation minima that would indicate 'nearly' blind spots in open systems is much smaller if A ≫ ℏ. Thus, the oscillatory structure that gives rise to blind spots for small displacements is much more sensitive to decoherence than the fringes of the Wigner function.
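The lifting mechanism can be mimicked in one dimension: coarse-graining the pure-state correlation with a Gaussian window of growing width fills in the zeroes, as in the following toy sketch (the section line, the window widths, and the identification of time with window width are illustrative assumptions, not the solution of the master equation):

```python
import numpy as np

hbar = 0.075
d = 5.0
etas = np.array([[0.0, 0.0], [0.0, d], [d, 0.0]])
w = np.ones(3) / 3.0

s = np.linspace(-0.6, 0.6, 4001)           # parametrize the line xi_p = -xi_q/2
xi_q = s
xi_p = -s / 2.0
chi = sum(wn * np.exp(1j * (p * xi_q - q * xi_p) / hbar)
          for (p, q), wn in zip(etas, w))
C0 = np.abs(chi)**2                        # pure-state correlation on the section

ds = s[1] - s[0]
for sigma in (0.0, 0.01, 0.03, 0.06):      # growing coarse-graining width
    if sigma == 0.0:
        Ct = C0
    else:
        half = np.arange(-600, 601) * ds
        kern = np.exp(-0.5 * (half / sigma)**2)
        kern /= kern.sum()
        Ct = np.convolve(C0, kern, mode="same")
    print(f"sigma = {sigma:.2f}: min C = {Ct.min():.4f}, max C = {Ct.max():.4f}")
```

The exact zeroes at σ = 0 are lifted into shallow minima as σ grows, and the minima merge into the envelope once the window width is comparable to the oscillation scale ℏ/d.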
Discussion and outlook

Quasi-orthogonality between a state and its slight evolution is a remarkable quantum property with applications in the theory of decoherence [2] and quantum metrology [3]. However, the methods in previous studies did not settle the question of whether complete orthogonality is also accessible. We have shown how the interplay between the chord function, reflection symmetries and the remarkable Fourier invariance of the phase space correlations determines general structures that are valid for any particular example. We have here given special attention to coherent state triplets. They lie in the generic class where the blind spots are isolated, but their disposition in the neighbourhood of an ordered lattice is unique.

The decay of overlap for a translation is the simplest instance of the general loss of fidelity for a pair of quantum states, |Ψ(t)⟩ and |Ψ′(t)⟩, that evolve under the action of slightly different Hamiltonians. Most treatments of this process have been based on the unitary evolution of initial single coherent states [30,31]. We have here supplied an example where the addition of further structure to the initial state modulates the average overlap decay in an unexpectedly rich manner. For a short time, we may even approximate such a more general evolution of a multiplet, through a local quadratic semiclassical theory [32], so that it continues to be a superposition of generalized coherent states. Generally, the overlap ⟨Ψ(t)|Ψ′(t)⟩ will be a complex function for all time, so that its zeroes (for a pair of parameters) will be isolated, and they can only be narrowly avoided as the pair of states evolves in time. Some decoherence is needed for the fidelity of structured states to decay smoothly. The indication of this from Markovian systems with quadratic Hamiltonians and linear coupling operators can be somewhat generalized in a WKB-type approximation, which still produces evolution of the correlations in the form of a convolution, though the coarse-graining window is no longer Gaussian [33]. In any case, the present example has brought to light the contrast between the remarkable robustness of blind spots under unitary evolution and their extreme sensitivity to decoherence.
7,585.2
2009-05-12T00:00:00.000
[ "Physics" ]
Relationship between acaricide resistance and acetylcholinesterase gene polymorphisms in the cattle tick Rhipicephalus microplus

In this study, we aimed to develop a comprehensive methodology for identifying amino acid polymorphisms in acetylcholinesterase transcript 2 (AChE2) in acaricide-resistant Rhipicephalus microplus ticks. This included assessing AChE2 expression levels through qPCR and conducting 3D modeling to evaluate the interaction between acaricides and AChE2 using docking techniques. The study produced significant results, demonstrating that acaricide-resistant R. microplus ticks exhibit significantly higher levels of AChE expression than susceptible reference ticks. At the amino acid sequence level, we identified 9 radical amino acid substitutions in AChE2 from acaricide-resistant ticks when compared to the gene sequence of the susceptible reference strain. To further understand the implications of these substitutions, we utilized 3D acaricide-AChE2 docking modeling to examine the interaction between the acaricide and the AChE2 catalytic site. Our models suggest that these amino acid polymorphisms alter the configuration of the binding pocket, thereby contributing to differences in acaricide interactions and ultimately providing insights into the acaricide-resistance phenomenon in R. microplus.

Introduction

Acaricide resistance in the cattle tick Rhipicephalus microplus (Canestrini, 1887) (Acari: Ixodidae) presents a persistent and costly challenge for cattle operations in tropical and subtropical regions [13]. Controlling cattle ticks often involves the extensive use of pesticides, leading to concerns such as environmental and food contamination, as well as the emergence of pesticide resistance [8]. The development of pesticide resistance in arthropods is a complex phenomenon influenced by various factors, including behavioral, biochemical, and metabolic defensive mechanisms aimed at mitigating the impact of pesticides on the target organisms [2][3][4][5]. Successful control of cattle ticks hinges on timely diagnosis and the selection of the most efficacious acaricide [8].

Previous research has established a connection between pesticide resistance and the activity of xenobiotic-metabolizing enzymes (XMEs) [2,5,6]. These enzymes can be found in various metazoan organisms, where they serve as an enzymatic defense mechanism against the potentially toxic effects of natural xenobiotic compounds [26]. Arthropods, in particular, possess an efficient assortment of XMEs, including cytochrome P450 (CYP), carboxylesterases (CE) [26], and other XMEs that facilitate the conversion of exogenous chemicals into hydrophilic derivatives [26].
Acetylcholinesterase (AChE) is closely related to the carboxylesterases, which are XMEs [16]. AChE is present in a large range of organisms, including arthropods, where it regulates levels of acetylcholine in muscular tissues [6]. The active site of AChE can be phosphorylated by organophosphorus pesticides (OPs), such as the acaricide diazinon [21], leading to the inactivation of AChE as a desired toxic effect in arthropods [34]. In some cases, arthropods develop a form of pesticide resistance involving a mutated version of AChE that can resist phosphorylation at the active site by OP pesticides. This resistance mechanism has been observed in Musca domestica (Diptera: Muscidae) [20], Culex pipiens (Diptera: Culicidae) [33], Anopheles albimanus (Diptera: Culicidae) [12], and Drosophila melanogaster (Diptera: Drosophilidae) [4]. A different category of resistance-associated enzymes, the cholinesterases and carboxylesterases (CEs), is involved in sequestering OP pesticides, together with increased expression of acetylcholinesterase (AChE) [2,12,13]. These closely related enzymes likely contribute to enhanced detoxification through their ester-hydrolyzing activity, as evidenced by their increased expression [13-15, 18, 19]. Additionally, both enzymes share an affinity for various synthetic substrates [24]. In cattle ticks, at least three AChE transcripts have been identified: BmAChE1 [3], BmAChE2 [16], and BmAChE3 [31]. Notably, increased transcript expression of BmAChE2 has been observed in field-isolated OP acaricide-resistant ticks from Mexico [11] and Brazil [5], which suggests a potential role in acaricide resistance.

The objective of this study was to establish a methodology for assessing the expression levels of XMEs in acaricide-resistant R. microplus ticks, recognizing the significance of XMEs in acaricide resistance.

Ethics

The study was conducted according to the guidelines of the Declaration of Helsinki and supervised by the Experimental Animals Handling Ethics Committee of the Instituto Nacional de Investigaciones Forestales Agricolas y Pecuarias. The study design and experimental protocols were performed according to Mexican standard NOM-062-ZOO-1999 for animal care and use, and the technical specifications for the production, care, and use of laboratory animals can be found at https://fmvz.unam.mx/fmvz/principal/archivos/062ZOO.PDF.
Ticks

For baseline levels of cholinesterase and carboxylesterase gene expression, an acaricide-susceptible reference strain (SUS) that has been maintained without exposure to any acaricides since 2008 was utilized. In addition, a multiple-resistance reference strain that has been cultured for multiple generations has served as the resistance reference in bioassays conducted as part of the Mexican Federal Government's acaricide resistance monitoring program [25]. The field isolates of cattle ticks were collected from cattle ranches located in the southeastern Mexican state of Tabasco as part of the compulsory inspection of cattle for the acaricide resistance monitoring program. All tick specimens were subsequently maintained at the Department of Ectoparasites and Diptera of the National Service for Agro-Alimentary Public Health, Safety and Quality (SENASICA) under the supervision of the Secretariat of Agriculture and Rural Development (SADER) in Mexico. The susceptible and acaricide-resistant reference strains, as well as the tick field isolates utilized in this study, were obtained by infesting cattle with 2 × 10⁴ 10-15-day-old larvae. Engorged females were collected 21 days after infestation and placed in Petri dishes, with each strain represented by groups of ten ticks, for subsequent oviposition. The Petri dishes were then incubated at a temperature of 28 °C and 80% relative humidity until complete oviposition, following established protocols [7]. The tick egg masses were subsequently collected, weighed, and divided into 200 mg vials. These vials were kept at a temperature of 28 °C and 80% relative humidity until eclosion. Meanwhile, the 10-day-old larvae were frozen at −80 °C and stored for future use.

Larval package test bioassay

The toxicological profiles of the reference strains and isolates were assessed for their resistance to organophosphate acaricides using bioassays at the Laboratory of the Ectoparasites and Dipteran Department of the National Animal Health Verification Services Center (SENASICA-SADER) [30]. The larval test employed in this study involved exposing tick larvae to filter papers impregnated with acaricides at predetermined concentrations capable of causing 99% mortality in susceptible tick populations (LD99) after 24 hr [25]. Four replicates were used for each reference tick strain and isolate tested, with trichloroethylene-diluted acaricides administered at the following concentrations: chlorpyrifos 0.2%, coumaphos 0.2%, and diazinon 0.08%. To impregnate the filter papers, a 63 cm² piece of Whatman 1 filter paper was treated with one milliliter of each acaricide dilution. Once the trichloroethylene evaporated, the treated filter papers were sealed on three sides with clips, and one hundred 10-day-old larvae were introduced through the open side, which was then sealed with another clip. After incubating for 24 hr at 28 °C and 92% relative humidity, live and dead larvae were counted [25]. The mortality rate for each tick group under each acaricide concentration was recorded, as shown in Table 1.

Relative quantification of cholinesterase expression

Each sample of R. microplus ticks was frozen at −80 °C and subsequently finely ground using a ceramic mortar. Total RNA was isolated using an RNAqueous®-4PCR Kit (Ambion, Austin, TX, USA), following the manufacturer's instructions.
The AChE2 gene expression of each strain and field isolate was measured using 7300 SDS Software v1.2.2 (Applied Biosystems). The expression levels of CYP, CE, and AChE2 in the different samples were quantified using the ΔΔCt method, with 18S ribosomal RNA expression levels serving as the internal control for normalization. The susceptible strain was considered to have a baseline level of AChE2 expression and was assigned a relative value of 1 expression unit (1 REU), following the instructions provided in the ABI Prism 7300 Sequence Detector real-time thermal cycler manufacturer's manual (Applied Biosystems), available at https://assets.thermofisher.com/TFS-Assets/LSG/manuals/cms_042380.pdf.

Statistical analysis

The means of the relative AChE2 gene expression in REU were statistically analyzed using an unpaired Student's t-test conducted with GraphPad Software (GraphPad Software, Inc., La Jolla, CA, USA), which is available online at https://www.graphpad.com/quickcalcs/ttest1.cfm.

Rhipicephalus microplus acetylcholinesterase amino acid sequences

All sequences analyzed in this study were obtained from the NCBI and originated from diverse sources and geographical locations. The AChE2 amino acid sequences derived from the ticks used in the toxicological bioassay were previously submitted to GenBank with the following identifiers: susceptible, AAC18857.1; isolate C1, OR378375.1; and isolate C2, OR378376.1.

Acaricide bioassays

The mortality rate of tick larvae of the susceptible reference strain exposed to standard concentrations of the acaricides chlorpyrifos, coumaphos, and diazinon was determined to be 100%. As expected, the resistant reference strain displayed complete resistance, with 0% mortality when exposed to all the chemical formulations. Isolates C1 and C2 demonstrated 0% mortality from the acaricide diazinon, although isolate C1 only exhibited partial resistance against chlorpyrifos. However, both isolates exhibited 100% mortality when exposed to coumaphos. Additional details and data can be found in Table 1.

Acetylcholinesterase 2 expression

The difference in mean AChE2 expression between each isolate or resistant strain and the susceptible strain was statistically significant, reaching approximately 80 REU for isolate C2. The AChE2 expression level in the resistant reference strain was determined to be 13.07 REU (t = 10.875, df = 6). Additional details and data can be found in Table 2.
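To make the relative-quantification arithmetic concrete, here is a minimal sketch of the ΔΔCt calculation with invented Ct values (chosen only to illustrate an ~80-fold over-expression; these are not the study's data), followed by the unpaired Student's t-test:

```python
import numpy as np
from scipy import stats

# Illustrative Ct values: AChE2 expression relative to the susceptible strain,
# with 18S rRNA as the internal normalizer (four replicates each).
ct = {
    "susceptible": {"ache2": np.array([24.1, 24.3, 24.0, 24.2]),
                    "rrna18s": np.array([12.0, 12.1, 11.9, 12.0])},
    "isolate_C2":  {"ache2": np.array([17.8, 17.9, 17.7, 18.0]),
                    "rrna18s": np.array([12.1, 12.0, 12.2, 12.0])},
}

def rel_expression(sample, reference="susceptible"):
    d_ct_ref = ct[reference]["ache2"] - ct[reference]["rrna18s"]
    d_ct_s   = ct[sample]["ache2"] - ct[sample]["rrna18s"]
    dd_ct = d_ct_s - d_ct_ref.mean()          # delta-delta Ct per replicate
    return 2.0 ** (-dd_ct)                    # REU; the reference sits near 1

reu_ref = rel_expression("susceptible")
reu_c2 = rel_expression("isolate_C2")
t, p = stats.ttest_ind(reu_c2, reu_ref)       # unpaired Student's t-test
print(f"C2: {reu_c2.mean():.1f} REU, t = {t:.3f}, p = {p:.2e}")
```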
Phylogenetic analysis of acetylcholinesterase 2

The Clustal Omega multiple sequence alignment algorithm was utilized to construct a phylogenetic tree using the neighbor-joining method without distance corrections. The resulting phylogeny revealed distinct regions into which the AChE2 sequences sort; notably, the susceptible strain, as well as isolates C1 and C2 from Mexico, formed a distinct clade. Additionally, orthologous AChE2 sequences obtained from ticks in Australia and India were observed to cluster separately in their own distinctive clades. For a visual representation, refer to Figure 1.

3D modeling and AChE2-acaricide docking

The docking 3D model of the AChE2 sequences revealed significant differences at the ligand-binding site when the metabolically oxidized form of diazinon, known as diazoxon, was used to model amino acid level interactions with the acaricide via the CB-Dock algorithm (Table 3 and Fig. 2). In the case of susceptible AChE2, diazoxon interacted with the ligand-binding site through hydrogen bonds with amino acids D478 and G477, which are located near H476, an essential component of the AChE2 catalytic triad. On the other hand, isolate C1 AChE2 interacted with amino acids L480 and H476, and isolate C2 AChE2 uniquely interacted with diazoxon directly via an ionic bond with amino acid H476. Additionally, the acaricide interacted via hydrogen bonds with R110 and N359. These findings indicate that each of the three analyzed AChE2 polymorphisms involves a distinct set of amino acids that interact with the acaricide at the ligand-binding site.

Discussion

Pesticide resistance in arthropods is often determined by a genetic mechanism that may involve the collaboration of one or more genes to achieve resistance. The phenotypes resulting from these genes could include changes in the pesticide target site [6] or an increase in enzymatic detoxification [28]. To investigate this further, our study focused on analyzing the AChE2 amino acid sequences and expression levels in different tick isolates with varying resistance levels. The results of our study revealed variations in both the amino acid sequences and expression patterns of AChE2 between reference strains and isolates with different toxicology profiles. These findings suggest that these variations are responsible for the varying levels of acaricide resistance found in ticks. The sequence polymorphisms and expression profile of AChE2 in acaricide-resistant ticks are consistent with the idea of an altered active site of the target protein, as well as with the enzymatic detoxification hypothesis. It is possible that the amino acid polymorphisms in R. microplus AChE2 may allow the enzyme to bind, sequester, and enzymatically neutralize acaricides [2,5,14,15,[17][18][19]. Previous studies have shown increased expression of AChE2 in OP acaricide-resistant reference strains [11] and field isolates [5]. However, isolate C2 exhibited an unprecedented level of expression, as shown in Table 2.

(Table 2. Relative quantification of acetylcholinesterase transcript 2 (AChE2) expression levels in ticks. Tick larvae from different strains and isolates were subjected to RNA extraction and qPCR, and the results were converted to REU and compared against those of the susceptible strain, which was considered 1 REU.)

The overexpression of AChE2 aligns with the hypothesis of OP acaricide sequestration, where excessive enzyme expression ensures sufficient active enzymes for the regulation of acetylcholine nerve impulses even after exposure to OP acaricides, preventing paralysis and death in resistant ticks. This specific transcript is also expressed primarily in the synganglion, the target organ of diazinon [3]. Other studies have also reported an increase in AChE2 in a Brazilian field isolate resistant to diazinon [5]. These findings suggest that the transcript level of this gene could serve as a discriminative marker between susceptible and OP-resistant ticks. Furthermore, the observation of an 80-fold increase in AChE expression in tick isolate C2 suggests that this specific transcript may be associated with an enzyme involved in the sequestration of organophosphates rather than in metabolizing them. It has previously been reported that field isolates exhibiting resistance to pyrethroids also display susceptibility to organophosphates (OPs), such as coumaphos and chlorpyrifos, with some isolates
demonstrating moderate resistance to diazinon with AChE2 subexpression [9]. In this particular scenario, the main selection pressure was toward pyrethroid molecules, and it appears that AChE2 overexpression is not the predominant defense mechanism responsible for the observed moderate diazinon resistance. To gain a deeper understanding, further studies focusing on field isolates under different levels of acaricide selection pressure are needed. These studies should aim to determine whether there are alternative first-response mechanisms apart from AChE2 or whether a multifactorial response is involved, indicating that AChE2 overexpression is not the sole mechanism or could develop later under increased selection pressure. To further understand this phenomenon, it would be beneficial to conduct additional studies on isolates exhibiting similar expression levels. These studies could help determine whether these high levels of expression are achieved through gene duplication, similar to what has been observed in C. quinquefasciatus mosquitoes [27]. However, based on our experience, this type of acaricide resistance in R. microplus is believed to be influenced by various factors. An increase in AChE may be one contributing factor among others within the complex scenario observed under field conditions [10].

Diazoxon is a metabolically oxidized form of diazinon that is generated by biotransformation primarily at the nervous tissue level. This biotransformation enhances the toxic effect of the acaricide on AChE2, as reported by Lazarević-Pašti et al. [21]. Therefore, in our study, we used diazoxon for the ligand docking assessment via 3D modeling analysis to evaluate how AChE2 amino acid polymorphisms affect the ligand binding site. Interestingly, the set of interacting amino acids showed notable changes in all cases, except for H476, as depicted in Figure 2. This particular amino acid, along with S230 and E356, forms the catalytic triad within the catalytic site of all acetylcholinesterase enzymes. This triad is crucial for the hydrolysis of acetylcholine, which regulates the nerve impulses controlling the tick's muscles, as described by Hernandez et al. [16]. Our findings suggest a correlation between acaricide resistance and specific amino acid substitutions at the AChE2 catalytic site. Different amino acid polymorphisms result in varying levels of interaction between the acaricide and the catalytic triad. Through our analysis, we observed several amino acid substitutions (as shown in Table 3) in the AChE2 amino acid sequence of ticks with different levels of acaricide resistance. These substitutions inevitably impact the configuration of amino acids at the catalytic site of the enzyme, as depicted in Figure 2. Organophosphorus acaricides exert their toxic effects on ticks through the irreversible phosphorylation of S230 at the catalytic triad. However, prior to this phosphorylation, E356 and H476 play crucial roles by coordinating a nucleophilic attack on the phosphate within the acaricide, as described by Hernandez et al. [16]. Based on our data, we propose a scenario in which amino acid reconfiguration at the catalytic site leads to distinct ligand binding interactions, specifically around H476, when the identified polymorphisms are modeled through docking analysis. This altered amino acid configuration results in a change in the level of interaction within the catalytic triad in the presence of the acaricide, ultimately rendering the AChE2 polymorphisms resistant to phosphorylation. This phenomenon is illustrated in Figure 2.
Our study revealed a significant level of polymorphism in the AChE2 gene of R. microplus ticks, particularly in acaricide-resistant individuals. Notably, we consistently identified three amino acid substitution polymorphisms (D299, I398, and F546) that contribute to a reconfiguration of the active site. This reconfiguration alters the level of interaction between the acaricide and the catalytic triad of amino acids in ticks that exhibit high levels of acaricide resistance. These findings hold potential implications for the development of a molecular test for acaricide resistance based on AChE2 qPCR or SNP analysis.

Figure 1. Phylogenetic analysis of AChE2 amino acid sequences from R. microplus reported in GenBank. The protein amino acid sequences were submitted online for multiple sequence alignment with the Clustal Omega algorithm at https://www.ebi.ac.uk/Tools/msa/clustalo/, and the results were analyzed by phylogenetic comparison using the neighbor-joining method without distance corrections.

Figure 2. AChE2 ligand-binding site interaction with the oxidized acaricide diazoxon. Diazinon is metabolically oxidized by tick enzymes, enhancing its toxic effect on AChE2, which interacts with the acaricide at the ligand-binding site via hydrogen bonds. (A) Diazoxon, highlighted in yellow, interacts by means of hydrogen bonds with amino acids D478 and G477 in close proximity to H476, an important component of the AChE2 catalytic triad. (B) Isolate C1 AChE2 interacts with L480 and H476. (C) Isolate C2 AChE2 interacts directly with diazoxon via ionic bonds; additionally, the acaricide also interacts via hydrogen bonds with R110 and N359.

Table 1. Acaricide bioassay data of different tick field isolates. Larvae from different strains and isolates were bioassayed by a larval package test under standard concentrations of the acaricides diazinon, coumaphos, and chlorpyrifos. The data are presented as the mortality rate (%) under a standard acaricide concentration.

Table 3. Amino acids at the binding pocket and types of bonds involved. Different polymorphisms of AChE2 result in different amino acid compositions in the binding pocket, which in turn result in different types of bonds with the diazoxon ligand.
4,089.4
2024-02-04T00:00:00.000
[ "Biology", "Environmental Science", "Medicine" ]
Retraction

Retracted: Mathematical Methods for Sensitive Information Mining Method of News Communication Platform Based on Big Data IOT Analysis

It is urgent to effectively monitor the public opinion of news communication platforms. The platform designed in this paper takes microblog public opinion as the research goal, uses MongoDB to build a distributed computing platform for sensitive information of news communication platforms, establishes a corpus of sensitive event topics, introduces the PageRank algorithm to deal with microblog social relations, obtains the characteristics of sensitive information of the news communication platform, and carries out information screening, so as to accurately screen and mine the keywords in high-impact information and to ensure the practical application effect of the sensitive information mining method based on big data analysis. Finally, the experiment proves that the sensitive information mining method of the news communication platform based on big data analysis has the advantages of high timeliness and high accuracy, which fully meets the research requirements.

This article has been retracted by Hindawi following an investigation undertaken by the publisher [1]. This investigation has uncovered evidence of one or more of the following indicators of systematic manipulation of the publication process:

(1) Discrepancies in scope
(2) Discrepancies in the description of the research reported
(3) Discrepancies between the availability of data and the research described
(4) Inappropriate citations
(5) Incoherent, meaningless and/or irrelevant content included in the article
(6) Peer-review manipulation

The presence of these indicators undermines our confidence in the integrity of the article's content and we cannot, therefore, vouch for its reliability. Please note that this notice is intended solely to alert readers that the content of this article is unreliable. We have not investigated whether authors were aware of or involved in the systematic manipulation of the publication process. Wiley and Hindawi regret that the usual quality checks did not identify these issues before publication and have since put additional measures in place to safeguard research integrity. We wish to credit our own Research Integrity and Research Publishing teams and anonymous and named external researchers and research integrity experts for contributing to this investigation. The corresponding author, as the representative of all authors, has been given the opportunity to register their agreement or disagreement to this retraction. We have kept a record of any response received.

Introduction

With the popularization of the Internet and the improvement of netizens' sense of social responsibility, network public opinion has developed a huge vitality that cannot be ignored. It consists of the public's strong and tendentious views on hot issues in real life. Generally speaking, sensitive information is mainly composed of four parts: sensitive words, the words related to sensitive words, the degree of correlation between them, and the association rules between them [1]. At present, sensitive information mining technology mainly uses association analysis and cluster analysis to obtain sensitive information related to sensitive words. The application range of correlation analysis technology is relatively wide, and its development is fast.
Association analysis technology mainly includes two parts: associated words and association rules [2]. Clustering analysis technology is mainly used to find the text information of related topics, so as to realize the monitoring of topics and achieve the purpose of topic tracking [3]. For big data analysis, this paper aims to establish a big data platform in which IoT and smart devices can work together to collect the data. Therefore, the final objectives of this paper are to utilize the developed big data communication platform to enable quick information collection and real-time feedback, to aggregate and analyze the data collected through repeaters, and to utilize structured databases in the form of big data. The application process of this technology mainly includes three steps, sketched in the code below. The first step is feature extraction, which mainly refers to filtering the information after it is input, obtaining the feature vector of each sample, and finally obtaining a matrix. The second step is text clustering, which mainly refers to clustering the results of feature extraction, yielding a matrix that reflects all the features in the n-dimensional space. The final step is to select the classification threshold, which mainly refers to the determination of a threshold after the clustering spectrum is obtained, from which the classification scheme can be directly read off, in order to ensure the effectiveness of the sensitive information mining method based on big data analysis.
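As an illustration of this three-step pipeline, the following sketch (toy corpus and threshold; the platform's actual segmentation and scoring are not reproduced here) builds a TF-IDF feature matrix, computes pairwise cosine similarities, and groups documents with a simple threshold rule:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "flood warning issued for the river basin",
    "heavy rain causes flooding in the river basin",
    "city council approves new budget plan",
    "council budget vote delayed by a week",
]
X = TfidfVectorizer().fit_transform(docs)     # step 1: feature matrix
S = cosine_similarity(X)                      # step 2: similarity structure

threshold = 0.15                              # step 3: classification threshold
clusters = []
for i in range(len(docs)):
    for c in clusters:
        if max(S[i, j] for j in c) > threshold:
            c.append(i)                       # join the first similar cluster
            break
    else:
        clusters.append([i])                  # otherwise open a new cluster
print(clusters)                               # e.g. [[0, 1], [2, 3]]
```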
Information Filtering Algorithm of News Communication Platform. Network information is complex and diverse, and much of it is benign information that serves the public. At the same time, reactionary, superstitious, violent, and other sensitive information poses a serious threat to social and public security [4]. Therefore, carrying out Internet public opinion information mining requires not only identifying the hot topics concerning the public from Internet public opinion information, but also analyzing whether the public's attitude towards an event is positive or negative. In addition, deep-seated public opinion information mining requires good control of negative information dissemination, and timely discovery and disposal of sensitive information, in order to prevent it from causing serious harm to the government, enterprises, individuals, and so on [5]. In the process of traditional topic detection, to judge the similarity between a report and a topic, we need to calculate the similarity between the report and each report in the topic cluster. When the size of the topic cluster is large, the number of comparisons increases sharply, which affects the processing speed [6]. To solve this problem, a center vector is used to represent a topic. Then we only need to calculate the similarity with the center vector, which improves the effectiveness of topic discovery. By analyzing public opinion information such as network news and forum posts, together with the existing forms and structural characteristics of network text information, this paper puts forward the idea of "text reconstruction", that is to say, the representative information of a topic is gathered together to form a "theme block", and the remaining part forms a "content block". News headlines contain a lot of classification information, which can let the public know the basic situation of an event through the briefest prompt and evaluation, and guide the public to further reading [7]. The headline is a high-level generalization of the content of the web page and has high accuracy when used to classify news web pages [8]. The topic is the supporting point of title construction, and the title of the core event under a given topic is the same as, or similar to, that of its related follow-up reports. Title information has a significant ability to distinguish topics in topic detection, but with the continuous development and change of events, the topic center drifts, and the titles of subsequent reports also change. The first paragraph of a news web page is a supplement to the title: it is a general description of the event, including the time, place, events, and which people or units are involved, and it makes a great contribution to the classification [9]. "Universal" emphasizes the statistical characteristics of public opinion information: a single web page cannot be regarded as public opinion; only many web pages about a topic and the participation of many Internet users can become network public opinion information [10,11]. In this sense, network public opinion is accompanied by many news pages, BBS/forum posts, and blogs, with many netizens browsing or commenting on a certain topic. We can say that a topic spread by multiple media and attended to by multiple netizens is a hot topic.

Construction of Sensitive Information Database of News Communication Platform. From the header of a file record, one can find the address of the first attribute in the file record body, followed by two flag words (flags), where the first bit represents the file deletion flag, and the second bit represents the directory flag (normal directory). When reading the disk information, a suitable way of reading is chosen according to these two flag bits. If it is a normal file or directory, the following reading operation continues [12]. Otherwise, the file needs to be recovered first; the recovered file is then filtered to obtain the required file information, which is handed to the subsequent text information extraction module for processing. As can be seen from the figure, a file may contain multiple attributes. A complete file needs all the attributes in the file record to be combined according to certain rules. Therefore, when reading the disk file information, all the attributes in the file record need to be read into memory in the order in which they appear in the file record [13]. Each attribute has its specific content, that is, the operation on each attribute is different. The serial numbers of the file records in the main file table (MFT) start with 0, and file records 0 to 16 belong to platform files, or metafiles, which are mainly used to store the metadata of the platform. These metafiles are transparent to users and are hidden files [14]. The difference between these 16 files and other files and directories is that they have a unique fixed address in the MFT table, while other files and directories can be stored anywhere in the table. The content part starts with the attribute name, and then defines whether the attribute is resident or nonresident. If it is the former, then the attribute value is the content of the attribute; conversely, if it is a nonresident attribute, then the stream of the attribute is stored in one or more runs. For the sake of simplicity, the storage run area is continuous in terms of logical cluster numbers [15]. A run list is stored after the file attribute name, through which the runs belonging to the attribute can be accessed.
The purpose is to calculate the similarity between the text information extracted from a new web page and the existing text clusters, while taking into account the life cycle of public opinion information: its importance decreases with the passage of time [16]. That is to say, the same keyword in different time intervals is likely to represent different meanings, so we add the time interval to the calculation [17,18]. For example, after the new text information is scored within the time interval, the smaller the similarity value is, the higher the possibility that it represents a new event, and the higher the score. The expression is shown in formula (1). In the formula, x⃗ is the new file information, c⃗_1 is the first cluster in the time interval, i is the number of files in the time interval, and k is the number of files added between the latest file collection time in cluster c⃗_1 and the arrival time of the new file x⃗. Once a threshold value is set, as long as the score is greater than the set value, the new file is considered to be a new topic.
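Formula (1) itself is not reproduced in the source, so the following sketch uses a generic time-decayed cosine score as a hedged stand-in: each topic is represented by its centre vector, the similarity is discounted by the number k of reports that have arrived since the topic was last updated, and a high score flags a new topic.

```python
import numpy as np

def novelty_score(x, centres, ks, decay=0.95):
    # x: feature vector of the new report; centres: topic centre vectors;
    # ks[i]: reports that arrived since topic i last absorbed a document.
    best = 0.0
    for c, k in zip(centres, ks):
        sim = (x @ c) / (np.linalg.norm(x) * np.linalg.norm(c))
        best = max(best, sim * decay**k)      # stale topics count for less
    return 1.0 - best                         # high score => likely new topic

rng = np.random.default_rng(0)
centres = [rng.random(8) for _ in range(3)]   # toy topic centre vectors
x = rng.random(8)                             # toy new report
print(novelty_score(x, centres, ks=[2, 10, 0]), "> threshold => open a new topic")
```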
Based on this, the data processing steps of the news communication platform are optimized as shown in Figure 1. The design of the network public opinion monitoring platform mainly includes three modules: a text preprocessing module, a sensitive information analysis module, and a public opinion analysis module. The text preprocessing module mainly includes two steps: Chinese word segmentation and information filtering. Chinese word segmentation mainly transforms the irregular key text obtained by the platform to form a sensitive word set, and then further processes the word set to obtain the corresponding associated word set. Word segmentation tools are relatively fast and highly efficient. After inputting the original text information, through the processes of Chinese word segmentation, filtering meaningless words, calculating word frequencies, and scoring feature items, the feature vector of the sample is finally obtained, and the final output of this step is a matrix. The quality of the feature selection greatly affects the later analysis. Through text clustering, we can obtain the distances between these sample points, which reflect the n-dimensional space. The output of the clustering algorithm is a clustering pedigree graph, which can generally reflect all the possible classifications, or directly give a specific classification scheme, including how many categories there are in total, the specific sample points in each cluster, etc. After obtaining the clustering pedigree, we need to choose an appropriate threshold. After determining the threshold value, the platform can directly read off the classification scheme from the existing clustering pedigree.

Realization of Sensitive Information Mining in News Communication Platform. Propensity analysis of network public opinion is essentially to classify network text information and determine whether it belongs to the positive category or the negative category. The main process of classifier construction is to use the word-sequence suffix tree representation model for the calculation, obtain the similarity results in the feature space, and use a support vector machine algorithm to find the optimal classification hyperplane, so as to achieve accurate judgment of the public opinion tendency of network information.

Different from structured data, Chinese exhibits polysemy and synonymy [19,20]. At the same time, the contextual understanding of sentences brings challenges to public opinion information monitoring, which needs the support of corresponding natural language processing technology. Network public opinion monitoring based only on network information extraction and semantic analysis technology cannot understand deeper semantics; it can only stay in the stage of passive monitoring of network public opinion, and cannot realize the automatic identification of network hot spot information or track discovered public opinion information. With the development of natural language processing, data mining, and other technologies, especially the wide application of search engines, we can efficiently organize originally scattered information together through the analysis of its relevance [21,22]. We build a sensitive information knowledge thesaurus; through the analysis of users' concerns about sensitive words, for the documents under examination, we infer the relationships of sensitive information in the knowledge base, determine the query conditions of sensitive words, submit them to the search engine, build the basic analysis set, carry out sensitivity analysis, obtain the sensitivity evaluation of the sensitive words, and issue early warnings according to the analysis results. Through the analysis of Web usage records, web structure information, and web content information, some quantitative indexes of public opinion are given for decision makers to use, as shown in Table 1. The basic task of the information collection layer is to collect rich and varied public opinion information from web pages with various data formats. It provides the required data for the public opinion information mining layer and is the premise of deep public opinion mining. It is the main task of the Internet public opinion information mining layer to carry out in-depth mining of public opinion information, find the hot issues of public concern, analyze the attitude of the public, and deal with the sensitive information that constitutes harm. By analyzing the data provided by the public opinion information collection layer, it can detect network topics, analyze people's attitudes, monitor network-sensitive information, evaluate the public opinion situation, etc., and provide an objective basis for the public opinion information service layer to serve relevant departments. The sensitive information analysis module mainly includes three approaches, namely association analysis, cluster analysis, and feature extraction. Among them, the common algorithm for association rule mining is the Apriori algorithm, sketched below. The application of this algorithm must first form all frequent item sets, and then form all credible association rules from these frequent item sets. The most important feature of this algorithm is to start from single items and then filter layer by layer, so as to obtain the effective item sets, effectively avoiding the search for impossible items.
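A minimal level-wise sketch of the Apriori idea follows (toy transactions and support threshold; the generation of association rules from the frequent itemsets is omitted):

```python
from itertools import combinations

transactions = [
    {"protest", "riot", "downtown"},
    {"protest", "downtown"},
    {"protest", "riot"},
    {"budget", "vote"},
]
min_support = 2

items = {frozenset([i]) for t in transactions for i in t}
freq = {}
level = {s for s in items if sum(s <= t for t in transactions) >= min_support}
k = 1
while level:
    freq.update({s: sum(s <= t for t in transactions) for s in level})
    k += 1
    # Candidate generation: unions of frequent (k-1)-sets, pruned by the
    # Apriori property that every (k-1)-subset must itself be frequent.
    cands = {a | b for a in level for b in level if len(a | b) == k}
    cands = {c for c in cands
             if all(frozenset(sub) in freq for sub in combinations(c, k - 1))}
    level = {c for c in cands if sum(c <= t for t in transactions) >= min_support}

for s, n in sorted(freq.items(), key=lambda kv: (-len(kv[0]), -kv[1])):
    print(set(s), n)   # frequent itemsets with their support counts
```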
The public opinion analysis module mainly provides two functions: the discovery of public opinion hot spots and the tracking of network public opinion topics. The application of hot spot discovery can let users know the current hot topics in time and comprehensively grasp the current network public opinion information. In the process of hot spot discovery, the public opinion monitoring platform mainly obtains information, frequently searched words, browsed web pages, forum replies, and other relevant information based on the keywords entered by users, then monitors the hot spots and automatically identifies the "hot spot" information in the network, thus forming hot spot alarms [23][24][25]. Topic tracking in the public opinion monitoring platform is mainly realized by a topic tracking method, which forms a tracking expression, represented as a query vector, from the training set, and then uses this tracking expression to judge newly captured web page information, finally obtaining the information related to the current topic [26,27].

Experimental Analysis

If the threshold is set too low, it leads to the separation of reports on the same topic. If the threshold is set too high, it makes the topics larger and introduces many irrelevant reports. In research on Chinese text orientation analysis, there is no open corpus at present; 16,000 comments were collected from https://www.drip.com as experimental data, and the emotion categories of the texts were manually labeled, comprising 8,000 positive-category documents and 8,000 negative-category documents. A total of 9,804 documents are divided into 20 categories. Among them, there are no more than 100 documents in each of 11 categories such as literature and education, and more than 1,000 documents in each of 6 categories such as computers, environment, agriculture, economy, politics, and sports. Because the training process of the genetic algorithm needs a large number of samples, we only selected the 6 categories with more than 1,000 documents. At the same time, because the algorithm will eventually be applied to information filtering, the project team collected 276 and 192 documents on violence and pornography, respectively. As a result, there are 7,947 documents in 8 categories. The distribution of training documents is shown in Table 2.

In order to set a reasonable threshold, ten experiments were carried out. In each experiment, firstly, 10 reports were randomly selected from the 8 topics in data set 1 to form the original report set; secondly, the reports corresponding to the original report set were selected from data set 2 to form the reconstructed report set; then, each report in the original report set and the theme block of each report in the reconstructed report set were segmented, feature-selected, and weighted, and each topic in the two report sets was represented as a central vector; finally, cosine similarity was used to calculate the similarity of each report to its own topic and to the other topics in the report set. This experiment uses eight topics to measure the performance of the traditional single-pass clustering algorithm and the hierarchical topic detection algorithm in topic detection. The experiments were conducted five times and the performance was evaluated by the detection cost. Averaging over the five experiments, both methods can identify the topics well under similar threshold settings for different topics, but the detection costs are different, and the change curve is shown in Figure 2. It can be seen from the figure that, for a given topic threshold, the detection cost of the hierarchical topic detection algorithm is lower than that of the traditional single-pass topic detection algorithm, which indicates that the former has better topic detection ability.
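The detection cost itself is not defined explicitly in the text; the sketch below uses the standard topic-detection style cost as a stand-in, with hypothetical miss and false-alarm rates, to show why the cost is minimized at an intermediate threshold:

```python
def detection_cost(p_miss, p_fa, p_topic=0.02, c_miss=1.0, c_fa=0.1):
    # Weighted combination of miss and false-alarm probabilities; the
    # prior p_topic and the costs c_miss, c_fa are assumed parameters.
    return c_miss * p_miss * p_topic + c_fa * p_fa * (1.0 - p_topic)

# Example sweep: as the threshold rises, misses grow and false alarms shrink.
for thr, p_miss, p_fa in [(0.24, 0.05, 0.20), (0.30, 0.08, 0.05), (0.36, 0.20, 0.02)]:
    print(f"threshold {thr:.2f}: cost = {detection_cost(p_miss, p_fa):.4f}")
```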
As the topic similarity threshold varies, the detection cost fluctuates. When the threshold rises from 0.24 to 0.30, the detection cost decreases; when it exceeds 0.30, false detections give way to missed detections as the dominant error, while the detection rate improves. Therefore, at a similarity threshold of 0.30 the detection cost is lowest and topic detection performance is best. In the experiment, the topic threshold is set to 0.30, and the subtopic threshold is set in the range 0.4 to 0.6. The hierarchical topic detection algorithm is then applied: it detects the topic, identifies five subtopics, and finds the same number of subtopic reports as manual identification, as shown in Figure 3. Comparing the detection results on the similarity mining of sensitive information shows that the number of subtopic reports identified by the hierarchical topic detection algorithm is roughly the same as the number of manually annotated subtopics, indicating that the method can distinguish subtopics and present the hierarchical structure of topics to a certain extent; Table 3 reports the accuracy of the proposed method on each category of test data 1. The analysis reveals similarities between the two categories with poor classification results; for example, political documents often contain economic, environmental, or agricultural elements, which lowers accuracy. On the experimental data in Table 3, the improved calculation method achieves better results. However, these results were obtained on data set 1, and some overfitting cannot be ruled out; therefore the second set of test data is used for further testing, with the following results. In terms of the accuracy reported in Table 4, the computer, finance, and closed tests decline slightly, with little difference, while the sports category shows a large gap. Inspection of the training and test documents shows that the sports-related training documents belong to sports theory research, whereas the test documents come from the network, so the two differ considerably in distribution. Since the purpose of the research is content-based information filtering, this experiment applies the above classifier to a test of network-sensitive information filtering. In the experiment, test data 1 is divided into two classes, legal and illegal documents: the illegal documents comprise the pornographic and violent documents in test data 1, while the legal documents are randomly selected from the other six categories. The composition of the experimental data and the test results follow. In the experimental data shown in Table 5, of the two numbers in each table entry, the former is obtained using the template generation method based on the dynamic genetic algorithm without the weight calculation method proposed in this paper, while the latter uses both the dynamic genetic algorithm for template generation and the weight calculation method proposed in this paper to compute the filtering effect on the filtered information.
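The single-pass baseline compared above can be sketched as follows, assuming documents are already represented as term-weight vectors; the function names, the centroid update, and the default threshold (set to the 0.30 value selected above) are illustrative.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two document vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def single_pass(doc_vectors, threshold=0.30):
    """Assign each report to the most similar existing topic centroid;
    open a new topic when the best similarity falls below the threshold."""
    centroids, members = [], []
    for i, v in enumerate(doc_vectors):
        sims = [cosine(v, c) for c in centroids]
        best = int(np.argmax(sims)) if sims else -1
        if sims and sims[best] >= threshold:
            members[best].append(i)
            # Update the central vector as the mean of its member documents.
            centroids[best] = np.mean([doc_vectors[j] for j in members[best]], axis=0)
        else:
            centroids.append(v.copy())
            members.append([i])
    return members
```

The hierarchical variant applies the same decision twice, first against topic centroids with the 0.30 threshold and then against subtopic centroids with the 0.4 to 0.6 threshold.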
Interpreting the data in the table alongside the traditional method, the proposed method is clearly better in terms of accuracy on illegal information. Because the test data used by the two methods are identical, this indicates that the proposed method has the better filtering effect. Conclusions With the growing number of network users, the network environment has become more complex, so establishing a network public opinion monitoring platform is particularly important. On the basis of sensitive information mining, the relevant key technologies are studied and a design method for a network public opinion monitoring platform is proposed, to realize effective network public opinion monitoring, help maintain social stability, and promote more democratic and scientific decision-making by government departments. Data Availability The data used to support the findings of the study can be obtained from the corresponding author upon request. Conflicts of Interest The authors declare that they have no conflicts of interest.
5,532.8
2022-08-17T00:00:00.000
[ "Computer Science", "Mathematics" ]
A Stopping Rule for Simulation-Based Estimation of Inclusion Probabilities Design-based estimation of finite population parameters such as totals usually relies on the knowledge of inclusion probabilities characterising the sampling design. They are directly incorporated into sampling weights and estimators. However, for some useful sampling designs, these probabilities may remain unknown. In such a case, they may often be estimated in a simulation experiment which is carried out by repeatedly generating samples using the same sampling scheme and counting occurrences of individual units. By replacing unknown inclusion probabilities with such estimates, design-based population total estimates may be computed. The calculation of required sample replication numbers remains an important challenge in such an approach. In this paper, a new procedure is proposed that might lead to the reduction in computational complexity of simulations. Introduction We shall represent a finite population as a set of unit indices $U = \{1, \ldots, N\}$. Values of a fixed characteristic for corresponding population units are represented by a vector $y = [y_1, \ldots, y_N]'$. The parameter under study is the population total (Hedayat, Sinha, 1991: 2): $t = \sum_{i \in U} y_i$. The unordered sample space may be represented by a matrix $A = [a_{ij}]$, in which each row $a_i$ represents one possible sample, with $a_{ij} = 1$ when this sample contains the $j$-th unit and $a_{ij} = 0$ otherwise. The matrix $A$ has $N$ columns and $Z = 2^N$ rows representing all possible sequences of zeros and ones of the length $N$, including an empty sample represented by a sequence of $N$ zeros and a census represented by $N$ ones. A vector of corresponding sample sizes may be calculated as $n = A \mathbf{1}_N$, where $\mathbf{1}_N$ is a column vector of $N$ ones. Let an unordered sample $s \subseteq U$ be drawn from $U$. The sample composition may be characterised by a vector of sample membership indicators (Tillé, 2006: 8): $I(s) = [I_1(s), \ldots, I_N(s)]'$, where $I_j(s) = 1$ if $j \in s$ and $I_j(s) = 0$ otherwise. The sampling is equivalent to choosing a certain (say $i$-th) row of $A$ so that $I(s) = a_i$. It may be done according to a sampling design $p = [p_1, \ldots, p_Z]'$ with $p_i = \Pr(I(s) = a_i)$. The first-order inclusion probabilities are gathered in the vector $\pi = [\pi_1, \ldots, \pi_N]'$ with $\pi_i = \Pr(i \in s)$. One may also define a matrix of second-order inclusion probabilities (Tillé, 2006: 17) as $\Pi = [\pi_{ij}]$ with $\pi_{ij} = \Pr(i \in s \wedge j \in s)$. This lets us express the covariance matrix of the vector $I$ as $C = \Pi - \pi\pi'$. The size of the sample $s$ may be expressed as $n(s) = \sum_{j \in U} I_j(s)$. Denote sampled elements as $k_1, \ldots, k_{n(s)}$ (13). For any vector $u = [u_1, \ldots, u_N]'$, let $u(s) = [u_{k_1}, \ldots, u_{k_{n(s)}}]'$. This lets us define sample vectors $y(s)$, $\pi(s)$, $d(s)$, which are obtained by omitting elements corresponding to zeros in $I(s)$ respectively in $y$, $\pi$, and the vector of design weights $d = [1/\pi_1, \ldots, 1/\pi_N]'$. For known $\pi$, the design-unbiased Horvitz-Thompson (HT) estimator of $t$ may be expressed in the form (cf. Narain, 1951; Horvitz, Thompson, 1952): $\hat{t} = \sum_{i \in s} y_i / \pi_i$, or equivalently: $\hat{t} = y(s)' d(s)$. Simulation-based estimation To calculate the HT estimator, first-order inclusion probabilities are needed. However, many sampling procedures are too complicated to calculate them directly. In particular, this is true for spatial sampling (Barabesi, Fattorini, Ridolfi, 1997), order sampling schemes, especially the Pareto scheme (Rosén, 1997), rejective sampling (Wywiał, 2003; Boistard, Lopuhaä, Ruiz-Gazen, 2012; Yu, 2012), and sequential sum-quota sampling schemes (Pathak, 1976; Kremers, 1985). A particular example is the greedy sampling scheme (Gamrot, 2014: 223) where costs of sampling individual units vary but are known in advance, and the survey budget is restricted. Individual units are drawn to the sample sequentially, one by one, with equal probabilities, from a gradually shrinking pool of still-affordable units. In the most pessimistic case, the calculation of inclusion probabilities would require analysing all permutations of units, which is infeasible.
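As a concrete illustration of the simulation-based approach, the following minimal sketch replicates a user-supplied sampling scheme, counts unit occurrences, and plugs the resulting estimates into the HT form. The shrunken estimator $(m_i + 1)/(R + 1)$ anticipates the Fattorini-type estimator discussed below; all function names and the toy scheme are illustrative.

```python
import numpy as np

def empirical_ht(y, sample, draw_sample, R, rng):
    """Estimate inclusion probabilities by replicating the sampling
    scheme R times, then plug them into the Horvitz-Thompson form."""
    N = len(y)
    m = np.zeros(N)                    # occurrence count of each unit
    for _ in range(R):
        for i in draw_sample(rng):     # one replication of the scheme
            m[i] += 1
    pi_hat = (m + 1) / (R + 1)         # shrunken estimate, never zero
    return sum(y[i] / pi_hat[i] for i in sample)

# Illustrative scheme: simple random sampling without replacement, n = 2,
# for which the true inclusion probabilities (n/N = 0.4) are known,
# so the output can be checked against the exact HT estimate.
def srswor(rng, N=5, n=2):
    return rng.choice(N, size=n, replace=False)

rng = np.random.default_rng(0)
y = np.array([3.0, 1.0, 4.0, 1.0, 5.0])
s = [0, 3]                             # an observed sample
print(empirical_ht(y, s, srswor, R=100_000, rng=rng))   # close to 10.0
```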
If inclusion probabilities do not depend on sample observations, then, following Fattorini (2006; 2009), they may be estimated from $R$ independent sample replications by counting, for each unit, the number $m_i$ of replications containing it and taking $\hat{\pi}_i = (m_i + 1)/(R + 1)$. Substituting these estimates into the HT formula yields the empirical HT estimator $\hat{t}(s) = \sum_{i \in s} y_i / \hat{\pi}_i$, or equivalently: $\hat{t}(s) = y(s)'\hat{d}(s)$ with $\hat{d}_i = (R+1)/(m_i+1)$. Setting up the stopping rule In order to establish a sufficient value of the replication number $R$ that guarantees a required precision of simulation-based estimates, Fattorini (2006; 2009) proposes an accuracy criterion based on the probability that the relative deviation of the empirical estimator exceeds a chosen tolerance, and finds its upper bound on the basis of Bennett's inequality. On this basis, he proposes a formula for the sufficient value of $R$. Later, Gamrot (2013) attempted to improve over that using asymptotic approximations based on a normal distribution, the Chernoff-Hoeffding inequality, and pre-calculated tables of exact probabilities for the restricted maximum likelihood estimator. However, the relative deviation of the empirical HT estimator $\hat{t}(s)$ from its "true" value $t(s)$ that would be calculated for known inclusion probabilities has a complex distribution. The construction of an upper bound for it requires the pessimistic assumption of possibly high correlation among sample membership indicators. This leads to very conservative replication numbers, which results in long calculation times. In what follows, it is demonstrated that these pessimistic assumptions are often overly conservative and may be improved upon. The value of the empirical HT estimator depends on the count vector $m$. Let $\Omega$ be the set of values of this vector for which the accuracy condition is satisfied, so that $Q(R) = \Pr(m \notin \Omega)$ (24). Hence, instead of examining the scalar distribution of $\hat{t}(s)$, one may investigate a much simpler, multivariate distribution of $m$. As an introductory example, let us consider a population of size $N = 3$ with a given sample space and sampling design. When a sample $s$ corresponding to the sample indicator vector $I(s) = [0, 1, 1]$ is drawn, all the columns of the matrix $A$ which contain zeros and correspond to non-sampled units may be disregarded in our analysis, because the HT estimator does not depend on these units and the corresponding inclusion probabilities. Hence, it is sufficient to consider a reduced sample space and sampling design restricted to the sampled units. In the following examples, we discuss in more detail some interesting special cases of such a bivariate distribution. The dependency of $Q(R)$ on sample observations of the study variable is not the only important effect. The following example illustrates another one. Example 2. Let us consider four sampling designs $P_1$, $P_2$, $P_3$, $P_4$ given in Table 1, together with corresponding values of the correlation coefficient $\rho$ between the two sample membership indicators corresponding to the first and second unit. Despite its simplicity, the example shows that the probability $Q(R)$ depends on the correlation $\rho$ between sample membership indicators. It is high for the least favourable extreme case of positive correlation that is tacitly assumed in the derivation of known stopping rules. In reality, however, it is often much lower. Taking this effect into account, one might construct tighter bounds for $Q(R)$ and obtain a stopping rule which gives lower required replication numbers. This effect remains in force when more than two elements are drawn to the sample, as shown in the last example. The proposed stopping rule The accuracy criterion depends on correlations among sample membership indicators. However, it is not reasonable to expect these correlations to be known when inclusion probabilities (defined as moments of their distributions) remain unknown.
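The displayed formulas defining the accuracy condition are lost in this extraction; under the notation above, a plausible reconstruction (with $\delta$ denoting the chosen relative-accuracy tolerance) is:

```latex
\Omega = \left\{ m \,:\, \frac{\lvert \hat{t}(s) - t(s) \rvert}{\lvert t(s) \rvert} \le \delta \right\},
\qquad
Q(R) = \Pr\left( m \notin \Omega \right)
     = \Pr\left( \frac{\lvert \hat{t}(s) - t(s) \rvert}{\lvert t(s) \rvert} > \delta \right)
```

so that the stopping rule seeks the smallest $R$ for which $Q(R)$ does not exceed a prescribed level $\alpha$.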
To account for correlations, the simulation may be divided into two phases. In the first phase, $R_1$ sample replications are generated, and estimates of first- and second-order inclusion probabilities are obtained from the corresponding occurrence counts of single units and pairs of units. In the second phase, a further $R_2$ replications are generated. Hence, in the whole simulation, a total of $R = R_1 + R_2$ sample replications $\breve{s}_1, \ldots, \breve{s}_R$ are generated, which leads to the calculation of the final count vector $m$ and the empirical HT estimator according to expressions (17)-(21). Capabilities of contemporary computers make it possible to set $R_1$, $R_2$ and $R$ quite large (in the order of millions) without much effort, so that the distribution of $m$ tends to the multivariate normal distribution $N(R\pi, RC)$, as shown by Krzyśko (2000: 31). After the first phase of the simulation, it may be approximated by $N(R\hat{\pi}, R\hat{C})$. Realisations of this distribution may be easily and quickly generated in large quantities, for example, by using algorithms described by Zieliński and Wieczorkowski (1997). This enables the estimation of the probability $Q(R)$ associated with any value of $R$ by counting what percentage of these pseudo-random realisations falls outside the $\Omega$ region (with the unknown 'true' statistic $t(s)$ approximated by $\hat{t}(s)$ based on the $R_1$ replications). Such estimation is easily repeated for various candidate values of $R$, because the generated replications of the multivariate normal distribution may be reused. The re-calculation boils down to a few matrix operations (addition, multiplication, division of corresponding elements) which are easily serialised and optimised. The well-known golden-section or Newton-Raphson algorithms may hence be applied to find the minimum sufficient number $R$ before the second phase of the simulation is initiated. Conclusions According to the approach sketched in the last section, one relatively simple but very time-consuming simulation is replaced with a more complex but potentially faster procedure. Instead of calculating a conservative number of replications and then generating them all, a more subtle approach is proposed. After initially generating some $R_1$ sample replications, an auxiliary nested but fast simulation is executed to establish the required total number $R$ of replications, accounting for correlations among sample membership indicators. Then a second, possibly quite small batch of $R - R_1$ replications is generated, and the empirical HT estimator may finally be evaluated. The nested fast simulation step may be repeated as more replications become available, to make the initial assessment of $R$ more reliable. It may also be run after all replications are generated, to verify that their number is indeed sufficient. Further studies are needed to confirm whether the proposed procedure indeed produces a substantial speeding-up of the whole simulation process.
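A minimal sketch of the proposed two-phase procedure, assuming the Ω-membership check reduces to the relative-deviation condition reconstructed above; `estimate_Q`, `minimal_R`, and all parameter names are illustrative, not from the paper.

```python
import numpy as np

def estimate_Q(R, pi_hat, C_hat, y, sample, t_ref, delta, rng, draws=10_000):
    """Approximate Q(R) by sampling count vectors m ~ N(R*pi_hat, R*C_hat),
    restricted to the sampled units (non-sampled units do not affect the
    HT estimator), and counting how often the estimate leaves Omega."""
    idx = np.array(sample)
    m = rng.multivariate_normal(R * pi_hat[idx],
                                R * C_hat[np.ix_(idx, idx)], size=draws)
    pi_rep = np.clip(m, 1.0, R) / R            # keep probabilities in (0, 1]
    t_rep = (y[idx] / pi_rep).sum(axis=1)      # empirical HT per replicate
    return float(np.mean(np.abs(t_rep - t_ref) / abs(t_ref) > delta))

def minimal_R(R1, pi_hat, C_hat, y, sample, t_ref, delta, alpha, rng):
    """Double R until the estimated Q(R) drops below alpha, then bisect."""
    lo = hi = R1
    while estimate_Q(hi, pi_hat, C_hat, y, sample, t_ref, delta, rng) > alpha:
        hi *= 2
    while hi - lo > max(1, lo // 100):         # ~1% resolution is enough
        mid = (lo + hi) // 2
        if estimate_Q(mid, pi_hat, C_hat, y, sample, t_ref, delta, rng) > alpha:
            lo = mid
        else:
            hi = mid
    return hi
```

Here `pi_hat` and `C_hat` are the phase-one estimates of $\pi$ and $C$, and `t_ref` is the empirical HT value after $R_1$ replications; reusing one batch of multivariate normal draws across candidate values of $R$, as the paper suggests, would speed this up further.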
2,224.6
2020-09-22T00:00:00.000
[ "Mathematics" ]
Unpacking the Emotional Dimension of Doctoral Supervision: Supervisors' Emotions and Emotion Regulation Strategies Successful completion of a PhD is challenging for both the candidate and the supervisor. While doctoral students' emotional burdens have received much attention, their supervisors' emotional experiences remain under-explored. Moreover, while teacher education research has stressed the importance of teacher emotion regulation, empirical studies on doctoral supervisors' emotion regulation barely exist. The current qualitative study explored 17 computer science supervisors' emotions unfolding in doctoral supervision and their emotion regulation strategies. Semi-structured interviews revealed the supervisors' wide-ranging emotions, with their negative emotions more diverse and common than positive ones. The supervisors also regulated their emotions through multiple strategies within antecedent-focused and response-focused approaches. As one of the initial studies on doctoral supervisors' emotion and emotion regulation in their own right, the current study not only uncovers the complexity of the emotion-laden dimension of supervision, but also highlights the need for all stakeholders to attend to supervisors' psychological well-being in tandem with their students'. INTRODUCTION Doctoral supervision is a rewarding but challenging experience for academics (Elliot and Kobayashi, 2019). Even without sufficient professional training (Acker and Haque, 2015), supervisors are expected to balance conflicting roles and to meet individual students' needs (Hemer, 2012; Benmore, 2016) in contemporary higher education marked by growing managerialism, performativity, and accountability (Aitchison et al., 2012). These demands can induce supervisors' anxiety (Halse, 2011), frustration (Robertson, 2017) and exhaustion (González-Ocampo and Castelló, 2019), which suggests the need for supervisors to self-regulate their emotions. In the neighboring field of teacher development, teacher emotions have been found to influence teaching and learning (Sutton and Wheatley, 2003). Similar anecdotal evidence also appears in the doctoral supervision literature (Sambrook et al., 2008). However, despite increasing attention to the psychological well-being of doctoral students (Cotterall, 2011; Acker and Haque, 2015; Virtanen et al., 2017), supervisors' emotional experiences remain under-represented. To address this research gap, we report a qualitative study as an initial attempt to examine doctoral supervisors' emotions and emotion regulation in their own right. BACKGROUND LITERATURE Emotion and Emotion Regulation Once considered inferior to cognition, emotion has been gaining momentum in education, psychology, and other fields of social science in past decades. Although definitional complexity still lingers, emotion is conceptualized as multifaceted and dynamic, involving physiological, psychological, cognitive, behavioral, and affective responses (Frenzel and Stephens, 2013) to situations appraised as meaningful to individuals or groups (Lazarus, 1991). Building on this conceptualization, Gross (1998, 2015) has proposed the process model of emotion regulation. In this model, emotion regulation is conceived as one's conscious or unconscious effort to influence what, when, and how emotions occur, are experienced, and are expressed.
Although emotion regulation often aims to reduce negative emotions and/or enhance positive emotions, counterhedonic emotion regulation also exists, i.e., worsening one's emotional experiences (Gross, 2015, p. 5). In terms of the target of emotion regulation, one can engage in intrinsic emotion regulation (influencing one's own emotions, also termed intrapersonal emotion regulation, Niven, 2017) or extrinsic emotion regulation (influencing others' emotions, also termed interpersonal emotion regulation, Niven, 2017) (Gross, 1998). Intrinsic and extrinsic emotion regulation can be closely related. First, a single regulatory strategy may be both intrinsic and extrinsic simultaneously (Gross, 2015; Guan and Jepsen, 2020). More important, intrinsic emotion regulation can be achieved through seeking extrinsic emotion regulation from others. This strategy was identified as intrinsic interpersonal emotion regulation by Zaki and Williams (2013). Altan-Atalay and Saritas-Atalar (2019) further suggested that this strategy (more specifically, having others assure oneself that the situation is under control) particularly benefits individuals who lack confidence in intrinsic emotion regulation. Regardless of whose emotions one intends to regulate, emotion regulation can be realized through two broad approaches: (a) the antecedent-focused approach, which alters factors inducing emotions to avoid or modulate emotions; and (b) the response-focused approach, which changes one's emotional responses and expressions after an emotion fully blossoms. Each approach consists of several strategies (see Table 1, based on Gross, 1998, 2015 and Zaki and Williams, 2013). The importance of emotion and emotion regulation has long been recognized in the neighboring field of teacher development. Positive emotions were found to enhance the teacher-student relationship, promote flexible and creative teaching, and strengthen students' motivation (Frenzel et al., 2009) and learning (Hargreaves, 1998), whereas negative emotions reverse these effects (Sutton and Wheatley, 2003), which indicates the necessity of teacher emotion regulation. Teacher emotion regulation has mainly been investigated under two frameworks: emotion labor (Hochschild, 1983) and emotion regulation (Taxer and Gross, 2018). While emotion regulation is at the core of emotion labor (Alam et al., 2019), the two frameworks have different emphases. Emotion labor highlights one's display of emotions according to sociocultural, sociopolitical, institutional and organizational rules (Hochschild, 1983), whereas emotion regulation does not emphasize whether the emotions are to be shown in relation to rules. Although doctoral supervision is different from classroom teaching, it does include a pedagogical dimension (Cotterall, 2011). The crucial role of emotion and emotion regulation is thus likely to hold true in doctoral supervision. In fact, Sambrook et al. (2008) reported evidence that a doctoral supervisor was reluctant to regulate students' negative emotions before she successfully regulated her own. Nonetheless, empirical research on supervisors' intrinsic emotion regulation is very rare. It is thus necessary to explore this matter to pave the way for future research into the relationship between supervisors' emotions, emotion regulation, and sociocultural and institutional requirements. Challenges to Doctoral Supervision in Relation to Supervisors' Emotional Experiences Doctoral supervision is highly complex and demanding, which may take a heavy toll on supervisors' emotions.
Supervisors need to balance myriad contradictory responsibilities and roles, such as developing students' autonomy vs. helping them reach academic milestones efficiently (Overall et al., 2011; Aitchison et al., 2012). Another challenge is to meet the needs of different students, and of the same student at different stages of candidature (Deuchar, 2008; Benmore, 2016). Moreover, global moves toward managerialism and accountability trickle down into day-to-day pressure on both supervisors and students to be productive (Aitchison et al., 2012). In addition, supervisor development programs remain limited in many parts of the world (e.g., Acker and Haque, 2015). Many academics apply their own experience of being a doctoral student (Hemer, 2012; González-Ocampo and Castelló, 2019), or their experience advising undergraduate and master's students, to supervising doctoral students (Halse, 2011). In sum, although researchers have emphasized the emotional aspect of teaching and highlighted the emotional challenges of supervising doctoral students, little is known about doctoral supervisors' emotions and their emotion regulation strategies. The dearth of research becomes even more glaring in the Chinese context, where doctoral education started late but is expanding rapidly (Zheng et al., 2018). A qualitative study was thus conducted to answer two research questions: • What emotions do the supervisors experience when supervising doctoral students? • Do they regulate their own emotions? If so, what strategies do they use? Participants Unlike their international counterparts, academics in China only become eligible to supervise doctoral students when (a) their institutions are approved by the Ministry of Education to offer doctoral programs, and (b) they have met multiple institutional requirements, especially in terms of high-quality publications and research funding. Therefore, doctoral supervisors in China are an elite group of full professors (and, very rarely, associate professors), mostly working in prestigious institutions and limited in number. Considering these contextual constraints, we recruited participants through convenience sampling in combination with snowball sampling. We approached an acquaintance, a computer science doctoral supervisor in a key institution, introduced our research aim, and clarified participants' rights. After he agreed to participate, we requested him to introduce his colleagues to us. We also sent out invitations to computer science doctoral supervisors in other institutions through email and instant messaging. Data Collection The data were collected through individual semi-structured interviews. Since the researchers and the participants were working from home due to the Covid-19 pandemic in the spring of 2020, all interviews took place through phone calls or WeChat audio calls. Each interview lasted between 30 and 90 min, during which the participants were asked to describe their general perceptions of doctoral supervision, their supervisory style, their emotions unfolding in doctoral supervision, and whether and how they regulated their own emotions. Probing questions were raised when relevant, surprising, or ambiguous responses were provided, to gain an in-depth understanding of the doctoral supervisors' emotional experiences and their intrinsic emotion regulation strategies. The interview guide is presented in the Appendix. These interviews were conducted in Chinese (all participants' first language), audio-recorded, and transcribed verbatim.
Data Analysis To analyze the data, we first read the transcripts repeatedly to identify and label strings of text relevant to the research questions, mainly based on participants' original words. For instance, P1 constantly described himself as feeling "nervous," so relevant excerpts were labeled as "nervousness/hecticness." The initial codes were constantly revised as informed by the literature (e.g., Pekrun and Linnenbrink-Garcia, 2012; Rowe et al., 2014, p. 286, based on Lazarus, 2006) and grounded in the current data. For instance, in this process, the initial code "nervousness/hecticness" was subsumed under "anxiety," operationalized as the feelings aroused by the anticipation of an uncertain, pending threat or failure (Pekrun and Linnenbrink-Garcia, 2012; Rowe et al., 2014). Meanwhile, we also created case summaries and memos to capture the characteristics of each supervisor's emotional experiences. The finalized categories of emotions involved three valence-based classifications: positive, negative, and mixed emotions, the first two of which were broken down into a number of discrete emotions. Regarding the emotion regulation strategies, two broader approaches, i.e., antecedent-focused and response-focused, emerged from the data. The former included situation selection, situation modulation, cognitive change, and attention deployment. Since three participants also purposefully communicated with others to alter their own emotions, seeking extrinsic emotion regulation was added, which was not included in Gross's (1998, 2015) framework. The response-focused approach included suppression and relaxation. In other words, the revised emotion regulation framework was consistent with Table 1 introduced earlier in the literature review. To enhance the credibility and trustworthiness of the data analysis, the second author coded 50% of the data, yielding an initial inter-coder agreement of 85%. The inconsistent codes were discussed after consulting previous literature and finally resolved. Interview summaries and findings were also confirmed by four participants, who were willing to attend a member-checking process. FINDINGS Doctoral Supervisors' Emotion Experience The interviews indicated that doctoral supervisors experienced a variety of emotions in doctoral supervision, both positive and negative. However, positive emotions were less mentioned and less diverse than negative emotions. In addition, two participants (P7 & P8) also described rather mixed emotions in doctoral supervision: "I felt everything, joy, anger, sadness, and happiness. It's quite mixed" (P7). Table 3 summarizes the participants' emotions. As shown in Table 3, positive emotions included happiness and love, whereas negative emotions involved 10 discrete emotions. Compared to love, happiness was much more common and consisted of pedagogical pleasure (Elliot and Kobayashi, 2019) and intellectual pleasure (Halse and Malfroy, 2010). Pedagogical pleasure emerged when nine participants witnessed students' growth and achievements, as well as their own growing competence as supervisors (P2 & P14), whereas intellectual pleasure arose when supervision ignited supervisors' motivation and enjoyment in extending their own knowledge boundaries: "I feel quite fulfilled in doctoral supervision because this is a learning opportunity for me too. I have a family and am short of energy and time. I'm motivated by my students to keep up with the new knowledge" (P14).
P16 felt joyful as he perceived himself as an increasingly competent supervisor good at cultivating a sound supervisory relationship. "My students often communicate with me proactively. Those moments convinced me that I'm a successful supervisor because students are willing to approach me and talk to me about academic and non-academic issues" (P16). In addition to happiness, love also emerged from the data. For instance, P8 articulated her fondness for self-directed and competent students: "We supervisors are really fond of an excellent 'academic seedling', who loves doing research, exceeds your expectations, and has his/her own pursuits" (P8). Love also arose as supervisors and supervisees formed a long-lasting bond along the doctoral process, during which they overcame difficulties together. "Doctoral research is arduous. In this process, no matter how effective or ineffective the supervisor-supervisee communication is, there are strong emotional attachments to each other" (P13). Corresponding to our review of the literature, the participants experienced a wide spectrum of negative emotions. Among them, anger was the most prevalent, reported by 12 participants, followed by anxiety (seven participants), frustration (five participants) and disappointment (five participants). Four other negative emotions, i.e., sadness, exhaustion, guilt, and pain, were occasionally mentioned by two or three participants. Anger was usually ignited when students violated written or unwritten rules and conventions that supervisors (a) regarded as the cornerstone of a trusting supervisory relationship, and (b) considered tacit knowledge shared by both parties. P1 and P12 were furious when their students secretly submitted poorly written manuscripts without their approval. P4 was irritated because a student missed deadlines repeatedly without informing either P4 or his fellow students of his difficulties in doing research. P6 was angry because he received a cursory draft very close to the deadline of an international conference, which left him no choice but to burn the midnight oil revising the draft for three consecutive days. The participants' anxiety, however, stemmed uniformly from students' difficulties in conducting research, for instance: "I had a student who used to major in mechanics, not computer science. His grade was below average. He had some ideas but couldn't realize them, which made me quite anxious" (P4). Although disappointment was also related to students' research difficulties, it was induced by the dissonance between supervisors' expectations and students' actual achievements or performance. For example, despite his repeated reminders, P6 disappointingly found some students "commit[ting] the same mistakes. I don't understand why some students can be so NOT self-disciplined!" (capitalization added). While previous research attributed supervisors' frustration to their inadequate competence in supporting students (Aitchison et al., 2012), contextual constraints (Robertson, 2017), and students' misunderstanding (González-Ocampo and Castelló, 2019), four out of the five participants who reported frustration linked this emotion to students' failure to make progress despite supervisors' effort: "Sometimes I got frustrated. I had done so much but (their work) was still not solid" (P11). Frustration was also caused by supervisors' failure to understand students' perspectives.
After witnessing a student bursting into tears because her work was criticized publicly in a lab meeting, P2 "was rather frustrated" because he didn't understand "why this was worth crying for." He later added that he lacked confidence in handling students' emotions, which also escalated his frustration. Sadness emerged when supervisors believed that they were unable to make a difference after being disappointed repeatedly: "I sometimes feel very upset and sad. Some PhD students' knowledge and skills foundation were so poor that I felt I couldn't do anything about it" (P7). The study revealed that exhaustion was reported in two scenarios: (a) when supervisors felt overwhelmed by the workload, or (b) when students consistently frustrated supervisors with poor work, which finally added up to exhaustion: "Some students are not committed to research. After several supervision meetings, they still didn't grasp my idea. I felt exhausted and wanted to cry. Tired. Exhausted" (P2). Guilt and pain occurred occasionally, with only two participants reporting relevant experiences. Corroborating Halse's (2011) findings, guilt emerged as the supervisors attributed students' unsatisfactory research performance to supervision: "I often feel guilty because I would question whether I offered sufficient supervision when my student encountered difficulties" (P17). P2 and P14 reported feeling pain in supervising doctoral students. P2 regarded his entire experience supervising students as "painful," whereas P14 specifically linked this emotion to developing students' writing abilities: "Revising students' manuscripts is really painful." His experience resonated with Aitchison et al.'s (2012) finding that supervisors and supervisees both suffer from the process of learning to write. Doctoral Supervisors' Intrinsic Emotion Regulation The interviews indicated that the participants regulated the emotions induced by doctoral supervision through an antecedent-focused approach and a response-focused approach. While they mostly aimed to reduce negative emotions, counterhedonic regulation occasionally occurred as well. Table 4 summarizes the participants' intrinsic emotion regulation approaches and strategies. Situation selection referred to supervisors' intentional avoidance of situations where they might experience negative emotions. This strategy was manifested in four supervisors' rigorous recruitment processes, aiming to screen out students with whom they might not get along. "I'm picky when recruiting students. There are multiple rounds of interviews, tests, and trials. I also need to know the candidate's personality. That's why I barely argued with my students. Only after I'm certain that the candidate and I can work together will I admit him/her" (P12). Situation modulation was implemented through changing supervisory practice to regulate supervisors' emotions. Some supervisors became stricter after getting angry and disappointed on a few occasions where students violated the rules or constantly broke their promises. For instance, P1 had his students write down a memo and sign it after each supervision meeting to confirm that supervisory feedback had been received. P4 was so frustrated by students' repeatedly late submissions that he advanced the deadlines. P12 informed his research team of the unpleasant experience of a student who had to defer and resubmit his dissertation, as the student hastily submitted an unpolished draft without the supervisor's approval.
P12 also suggested other students learn a lesson from this incident and never do the same. In contrast, P7, P9, and P10 loosened their control and placed more confidence in student autonomy to regulate their own emotions. "In the first few years (of being a supervisor), I was very anxious. I met with students every week but they had very little input to share. Later I gradually 'loosened my grip'. I just focus on the general picture and have them explore the details. Now they approach me proactively, not the other way around. Now I am barely anxious about supervising students" (P10). Situation modulation was also reflected in supervisors lowering their expectations of students to reduce negative emotions. P2 indicated that, if students lacked motivation, he would attend more to those committed to doctoral research and "completely let him/her free." P5 stated that he once had high expectations of students, but only found himself becoming more irritable. He thus decided not to "count on" students' output and did more research on his own: "When I was young, I always aimed at publishing more and more papers. But even though I invested enormous time in (supervising) students, students were still not productive. Shouldn't I be angry? Considering this may 'break my bowl' (idiom: jeopardize his career), I realized that I couldn't count too much on students' research. Then I watched the time I spent (on supervision) and did research myself" (P5). Attention deployment referred to supervisors deliberately or unintentionally directing their attention away from the antecedents of their emotions. Yet the most common emotion regulation strategy that the participants employed was cognitive change. Fourteen out of 17 participants reappraised (a) the meaning of emotions and the antecedents of emotions, (b) the relevance between emotion-invoking situations and their own role as a supervisor and/or as a person with long-term life goals, and/or (c) students' and their own control over student performance (see Table 5). Table 5 reveals that the participants engaged in cognitive change by upgrading or downgrading the meaning of emotions and the antecedents of emotions, reappraising the relevance between students' performance, supervision, and supervisors' own life goals, and evaluating students' and their own control over students' performance. Interestingly, while cognitive change was predominantly employed to alleviate negative emotions, two exceptional data extracts emerged, in which P16 and P17 reprimanded themselves for not offering as much supervision as they had hoped to. Another emotion regulation strategy that emerged in the data was seeking extrinsic emotion regulation, which was not in Gross's (1998, 2015) emotion regulation framework but was identified by Zaki and Williams (2013). P11, P13, and P17 intentionally communicated with others to regulate their own emotions. P11 and P13 turned to family members for support, while P17 mentioned that she would talk to colleagues about her emotional encounters in supervising doctoral students. "I believe it doesn't do any good if I show anger to students, so it's important not to show it. I also talked to colleagues to discuss students' problems as a way to help me feel better" (P17). P17's account revealed that she employed cognitive change (downgrading the value of anger), seeking extrinsic emotion regulation (communicating with colleagues), as well as suppression (not showing anger to her student).
The first two strategies were antecedent-focused, whereas the latter was response-focused. This finding lends support to Taxer and Gross's (2018) argument that multiple strategies and multiple approaches can be employed in tandem. In fact, P17 was but one of the eight participants who reported suppressing their emotions. They made an effort to restrain themselves from losing their temper. [Table 5 excerpt, subjective control reappraisal:] Reappraising students' self-control over their performance: downgrading students' control over their own performance and the "damage" that they had made (P3, P4, P12, & P16). "Students have their own emotion burdens. They have a lot of work to do. Some have families to support" (P16). "He worked hard but just progressed slowly" (P12). "What's done is done. I have to accept the reality that he missed the deadline" (P4). Reappraising supervisors' control over students' performance: (a) upgrading one's confidence in students' autonomy and abilities (P9, P10, & P14): "We should trust students. I used to be afraid that they might not have tackled some challenges but they actually made it. Then I believe I should have more confidence in students" (P14); (b) upgrading one's subjective control over students' achievement by lowering the expectations for students (P5, P9, & P16): "Some students just wanted a doctorate as quickly and easily as possible. I lowered my expectations of their academic achievements, so I wouldn't get upset by their mediocre performance" (P5); (c) downgrading supervisors' subjective control over students' performance and development by recognizing students' individual differences (P7, P11, P12, & P15): "I used to believe every student in this prestigious university is excellent and hardworking, but this belief shattered. Not everyone is alike. You have to accept that students are different" (P7); (d) downgrading supervisors' subjective control over students' behaviors by recognizing the limited influence of supervisors on supervisees (P11): "People have their own agenda. Sometimes you can do very little as a supervisor. Your influences aren't always long-lasting" (P11). *The only intrinsic emotion regulation strategy that served the affect-worsening purpose in the current data. In addition to suppression, relaxation was another response-focused strategy emerging in the data. The participants undertook various relaxation activities to improve their affective experiences: "When I get frustrated at doing research, my family will have a trip. [We] go sightseeing" (P8). "I'm extroverted. Bad mood never stays overnight for me. I'd be fully refreshed the next day after a sound sleep. [. . . ] I also go fishing once a week as a way to self-regulate my mind" (P14). DISCUSSION The current qualitative study drew upon the perspectives of 17 Chinese doctoral supervisors in computer science, gathered through individual interviews, to investigate (a) supervisors' emotions unfolding in doctoral supervision, and (b) whether and how they regulated their own emotions. The participants experienced wide-ranging emotions, including positive, negative, and mixed emotions, adding support to the emotion-laden nature of doctoral supervision (e.g., Sambrook et al., 2008; Halse and Malfroy, 2010; Halse, 2011). The participants' positive emotions were mostly happiness, which incorporated pedagogical pleasure (Elliot and Kobayashi, 2019) and, less commonly, intellectual pleasure (Halse and Malfroy, 2010).
The relative prevalence of pedagogical pleasure was expected, as supervision overlaps with teaching in fostering students' growth (Sambrook et al., 2008; Cotterall, 2011). However, intellectual pleasure, albeit only reported by two participants, reveals the mutual intellectual growth of both parties. While such intellectual pleasure was reported by supervisors in the humanities (Halse and Malfroy, 2010; González-Ocampo and Castelló, 2019), our findings show that computer science supervisors shared the same feelings, which suggests such an emotion can emerge across disciplines. Love was the other positive emotion reported by the participants, although this emotion was scarcely documented in the doctoral supervision literature (Deuchar, 2008) but found in feedback research in higher education contexts (e.g., Rowe et al., 2014). Our findings extended current knowledge by showing that love could stem from supervisors' profound satisfaction with a competent and autonomous student, as well as from a strong bond between a supervisor and a supervisee forged as they jointly overcome obstacles during the doctoral candidature. It is also worth noting that three of the four participants who reported love were female, which may suggest that female supervisors are more likely to experience this emotion in doctoral supervision. Although a gendered perspective on doctoral supervision is beyond the scope of this study, a possible explanation is that female supervisors are under great pressure to live up to the "perfect" role model, i.e., being intelligent and showing care (Hemer, 2012). In line with previous literature, the study also found supervisors' negative emotions to be of higher frequency and greater diversity than their positive emotions, which generates three insights. First, the study expanded the range of antecedents of supervisors' negative emotions. While previous research revealed supervisors' frustration and exhaustion stemming from students' misunderstandings (González-Ocampo and Castelló, 2019), and sadness and disappointment stemming from students' underperformance (Sambrook et al., 2008), the current inquiry supplemented these findings by showing that students' underperformance, limited competence, and weak intrinsic motivation, as well as supervisors' heavy workload, could all contribute to frustration, sadness, and exhaustion. Furthermore, while some of these antecedents were anticipated or foreseen by our participants, including students' difficulties in research and academic writing (e.g., Aitchison et al., 2012), as well as dissonance between expectations and students' actual research progress (e.g., Sambrook et al., 2008), they were often irritated by unexpected, surprising, or "shocking" obstacles, such as students' violation of rules, protocols, and promises that supervisors had taken for granted. Second, such wide-ranging antecedents indicate that doctoral supervisors are susceptible to negative emotions, possibly more so than researchers have reported. While Cotterall (2011) argued that a mismatch of expectations and working styles between supervisors and supervisees can cause students' anxiety and stress, the current study extends this point by showing how such a mismatch, as reflected in students' violation of taken-for-granted conventions, backfires on supervisors' own emotional experiences. It is even more alarming that this mismatch does not merely render the initial stage of doctoral candidature particularly bumpy (Cotterall, 2011).
In fact, P12's experience showed that the mismatch may remain concealed only to surface at a very late stage of doctoral candidature, which can undermine the supervisory relationship and jeopardize timely completion of a doctorate. Furthermore, the data also uncover the interrelationship between the negative emotions that supervisors experienced, suggesting that negative emotions, if not regulated effectively in time, may aggregate and worsen. Our participants' frustration was induced by students' failure to make progress despite supervisors' persistent effort; and if this situation occurred repeatedly, the participants were prone to experience exhaustion and sadness, as they might perceive their effortful support as futile. Although our participants who reported exhaustion and sadness did not indicate having psychological problems themselves, exhaustion has been argued to contribute to disengagement (Virtanen et al., 2017). Taken together, this nuanced picture of supervisors' emotional experiences not only confirms the challenging and emotion-laden nature of doctoral supervision (e.g., Aitchison et al., 2012), but also highlights the importance of supervisors' intrinsic emotion regulation to their well-being and supervision. Regarding the second research question, our participants took both antecedent-focused and response-focused approaches, mostly to improve their emotional experiences and very rarely in the reverse direction. Each approach was implemented through multiple strategies: the former involved situation selection, situation modulation, cognitive change, attention deployment, and seeking extrinsic regulation; the latter included suppression and relaxation. This finding suggests that the supervisors, like the teachers in Taxer and Gross's (2018) study, have a variety of emotion regulation strategies at their disposal. The deployment of situation selection and situation modulation indicated that doctoral supervisors' adjustments of their practice were not solely driven by the goal of fostering students' growth but also by that of enhancing their own emotional experiences. Four participants selected students carefully to avoid negative emotions invoked by supervisor-supervisee mismatch in working styles, and 10 participants modulated their supervision practice to experience less anger, anxiety, disappointment, frustration and sadness. Although doctoral supervision research has advocated a "fit" between supervisors and supervisees (e.g., González-Ocampo and Castelló, 2019), this issue was discussed in relation to supervision per se, but barely in the light of supervisors' emotional experience. Our data thus supplement this body of literature, showing that supervision practice can be fueled by supervisors' psychological needs as well. While suppression was the most common emotion regulation strategy found in Taxer and Gross's (2018) study on teachers in the US, our participants adopted cognitive change most frequently. This finding was somewhat unexpected because cognitive change was also under-represented in previous doctoral supervision scholarship. Although the literature has consistently advised supervisors to understand students' difficult situations and to show empathy (e.g., Overall et al., 2011), this line of argument was mainly proposed to enhance students' experience, satisfaction, and performance.
Building on this argument, we highlight the importance of reappraising the meaning, relevance, and subjective control of student progress, and particularly of appreciating the nonlinearity of the research process and students' individual differences, in benefiting doctoral supervisors' subjective experiences. Another important insight from the participants' deployment of cognitive change is that individuals could modulate their emotions by reappraising not only the antecedents of emotions, but the emotions themselves. While P15 preempted his complaints by emphasizing the positive implications of policies and requirements regarding doctoral supervision, four participants (P4, P12, P14, & P17) reappraised the meaning of their negative emotions. They saw these negative emotions as natural and reasonable, thus "forgiving" themselves for suffering from anger or frustration. They also reminded themselves that venting would be useless, so as to improve their emotional experiences. Although cognitive change has been framed as antecedent-focused because it addresses on-going rather than fully blossomed emotions (Gross, 1998, 2015), the current data suggest that the line between antecedent-focused and response-focused approaches may be less clear-cut in reality than it is on the conceptual level. There is also an overlap between seeking extrinsic emotion regulation and other strategies within the antecedent-focused approach. P13's husband's remark that the student herself, rather than the supervisor, should be responsible for the completion of a PhD reduced the self-relevance of the student's performance for P13 and thus alleviated her anxiety. P17's discussion with colleagues about her student's difficulties in doing research helped her not only unburden herself but also obtain suggestions, which further led to situation modulation in the form of adjusting her supervision practice. On the other hand, the fact that the participants reached out for extrinsic emotion regulation underscores the emotion burdens of supervisors. This tendency also highlights the need for all stakeholders to pay due attention to supervisors' psychological well-being along with doctoral students' emotions. As P13 rightly appealed: "Students can access in-house counseling service [. . . ] but we supervisors need it too." In addition, eight supervisors were found to suppress their anger and anxiety. Despite the prevalence of suppression found in Taxer and Gross (2018), this strategy was rarely documented in doctoral supervision research (except Halse, 2011). Our finding indicates that doctoral supervisors, at least in the disciplinary and institutional contexts of the current study, tended to suppress their emotions on the spot, probably because some held the belief that displaying negative emotions would ruin the supervisory relationship or shake students' delicate confidence (P3 and P12). However, as habitual use of suppression may be maladaptive and may even threaten health (Gross, 2015), the finding suggests that further investigation of supervisors' emotion regulation strategies and their short-term and long-term effects is needed. LIMITATION AND FUTURE RESEARCH DIRECTIONS The current study has a few limitations, such as the small sample size, the exclusive focus on one discipline (i.e., computer science), the single data source, and the absence of longitudinal data. Contextual limitations also exist, as the supervisors recruited were elite professors in Chinese higher education institutions.
Moreover, some emotional experiences may be evanescent or occur at the sub-conscious level, so that they might not be readily verbalized. However, the current findings offer a few implications for future research. Greater research effort should be made to explore supervisors' vis-à-vis supervisees' emotional experiences to paint a more complete picture of the emotional dimension of doctoral supervision, as well as to enrich our knowledge about the factors mediating their emotional experiences. Given that supervisors utilized various emotion regulation strategies, it is worth investigating why and how they chose particular strategies on the spot. Another issue to be examined is the effect of both positive and negative emotions and of emotion regulation strategies on doctoral supervision, students' degree completion, and even doctoral supervisors' own development. Finally, since the present study only looked into the intrinsic emotion regulation of supervisors, investigations can be extended to understanding how both parties regulate each other's emotions and the effectiveness of these endeavors. RECOMMENDATIONS FOR DOCTORAL SUPERVISION STAKEHOLDERS In terms of practical implications, the current inquiry stresses the need for stakeholders to pay close attention to supervisors' emotional experiences and psychological well-being, ideally in tandem with their students'. Institutions should provide supervisors with resources, such as in-house counseling services and workshops, to help them regulate their own emotions and improve their psychological well-being. Supervisors should carefully select students to minimize possible mismatches in working styles and beliefs; however, they should be aware that being selective in recruitment does not completely preempt all emotion-draining experiences. Rather, they should be aware of possible remarkable dissonance between their expectations and their students', and, more importantly, make rules and conventions explicit. In addition, supervisors themselves should develop their competence in emotion regulation in order to identify and regulate negative emotions at an early stage, so as not to let them aggregate and incur unfavorable consequences. Finally, supervisors can leverage the power of community by communicating with colleagues and other trusted parties to enhance their emotional experiences. CONCLUSION Based on individual interviews with 17 computer science professors, this qualitative study explored doctoral supervisors' emotions and their strategies to regulate those emotions. The data revealed their wide-ranging emotional experiences, with negative emotions being more frequent and diverse. The participants also employed various emotion regulation strategies, encompassing both antecedent-focused and response-focused approaches. This study is one of the initial studies exploring doctoral supervisors' emotion and emotion regulation in their own right. The prevalence and diversity of supervisors' negative emotions underscore the emotion-taxing challenges inherent to doctoral supervision (Aitchison et al., 2012), expand our knowledge about the antecedents of supervisors' negative emotions, and reveal the complex relationships among the emotions themselves. These insights collectively suggest that doctoral supervision is a cognitive as well as an emotional endeavor.
Another contribution of the study is uncovering the various intrinsic emotion regulation strategies that supervisors utilize, especially under-reported strategies such as cognitive change and seeking extrinsic emotion regulation. Additionally, the findings can enrich the framework of emotion regulation strategies: cognitive change can be applied to reappraising emotions themselves, in addition to reappraising the antecedents of emotions; and one can proactively seek extrinsic emotion regulation to improve one's own emotions. DATA AVAILABILITY STATEMENT The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. ETHICS STATEMENT The studies involving human participants were reviewed and approved by Guangdong University of Foreign Studies. The patients/participants provided their written informed consent to participate in this study. AUTHOR CONTRIBUTIONS YH is mainly in charge of the research methodology and drafting of the manuscript. YX is responsible for data collection and revision of the manuscript. All authors contributed to the article and approved the submitted version.
8,313.2
2021-06-28T00:00:00.000
[ "Education", "Computer Science", "Psychology" ]
Comparative testing of Ag/Au/Pt graphene electromodified electrodes in electrochemical detection of tetracycline, an emerging pollutant This study aimed to obtain new electrochemically modified electrodes based on graphene and Au, Pt, or Ag particles on graphite (GR) and glassy carbon (GC) substrates, applying the chronoamperometry technique for the modification, and to develop a detection protocol for tetracycline (TC), considered an emerging pollutant in water, using the cyclic voltammetry (CV) technique. The graphite-based substrate used for Ag/Au/Pt electrodeposition led to electrode compositions on which the TC oxidation process was not diffusion-controlled and, as a consequence, TC detection failed. TC detection protocols were developed for all Ag/Au/Pt electrodeposited GC and GC-GP electrodes. The best limit of TC detection was achieved for Ag electrodeposited on GC-GP at the cathodic potential of 0.460 V/SCE. INTRODUCTION In recent years, the detection and monitoring of emerging pollutants in aquatic ecosystems have become a growing concern worldwide. Various micropollutants and emerging pollutants currently enter the aquatic environment in various ways, via industrial, hospital, livestock, agricultural, or domestic effluents, owing to the poor performance of the conventional wastewater treatment technologies usually applied [1][2][3]. It is well known that the main characteristic of emerging pollutants is the lack of established maximum allowable concentrations. Even at low concentrations (ng/L), the presence of antibiotics in wastewater treatment plants (influents and effluents) and in environmental matrices (groundwaters; surface waters such as lakes and rivers; soils and sludge) may cause chronic and acute harmful effects on natural flora and fauna and, as a consequence, on human health [4,5]. Tetracyclines, aminoglycosides, macrolides, β-lactams, vancomycin, phenicols, and fluoroquinolones are the antibiotics with the widest presence in water matrices [6]. Due to its strong antibacterial activity, ease of administration, and low cost, tetracycline is one of the most widely used antibiotics in human and veterinary medicine. Tetracycline is widely found in a variety of foods, including meat, milk, and honey, and its toxicity and accumulation in both the environment and food have become a severe threat due to the negative impact on human health and wildlife. Exposure to very low levels of tetracycline leads to the development of antibiotic-resistance genes, vision problems, tooth discoloration, and allergic symptoms in humans [7]. Reduction of the negative impact of tetracycline on the health of consumers has been pursued through regulations established by various food safety authorities. Thus, the European Union has recommended a maximum residue limit of 100 µg kg-1 for tetracycline in milk [8,9]. Therefore, it is necessary to develop fast, analytically accurate methods for the determination of tetracycline in food and environmental samples. Analytical methods commonly used for the detection and quantification of tetracycline are gas/liquid chromatography coupled with mass spectrometry, inductively coupled plasma mass spectrometry (ICP-MS), high-performance liquid chromatography (HPLC), molecular and atomic absorption spectrophotometry (AAS), capillary electrophoresis, and immunoassays [10]. These techniques are often expensive, require lengthy sample preparation, are time-consuming, and can only be performed by trained personnel.
Electrochemical methods are among the most promising alternatives for detecting these pollutants, offering easy adaptability, low cost, short analysis times, and high sensitivity. Due to their wide versatility, low cost, mechanical strength, reproducibility, and high sensitivity compared with other analytical techniques (gas chromatography, HPLC, or atomic absorption spectroscopy), the use of electroanalytical techniques (voltammetry and amperometry) to quantify important analytes has increased exponentially. For the development of an electrochemical detection procedure, both the electroanalytical technique and the electrode material should be considered. The electrode material is key to the performance of the electrochemical process, and its modification should enhance the electroanalytical performance in terms of sensitivity, limit of detection, and selectivity [11]. Commercial electrodes, especially carbon-based ones such as glassy carbon (GC) and graphite (GR) electrodes, are frequently used in electroanalytical applications, but continued efforts are being made to improve their performance by using them in various modified forms. It is well known that conventional electrodes present some serious problems in electrochemical detection due to their slow surface kinetics, which severely affect the sensitivity and selectivity of the electrodes. Conventional electrodes used in the detection of the target analyte (tetracycline) usually display a low-intensity peak that is generally not visible at lower pollutant concentrations, which makes them even less attractive for commercialization. To address this problem, several research studies have presented solutions for improving the surface kinetics of electrodes by modifying them with various materials [11,12]. Recently, numerous studies reported in the literature have presented the modification of conventional electrode surfaces with a large variety of materials intensively explored to improve electrode performance, e.g., graphene (GP) and metal particles: silver (Ag), gold (Au), and platinum (Pt). Graphene is a carbon-based material that has been extensively investigated in recent years following the report by Novoselov et al. on its isolation and the measurement of its unique electronic properties [13]. Graphene and graphene oxide are among the most promising materials in the field of nanotechnology due to their excellent properties, such as high chemical stability, high elasticity, desirable catalytic properties, and large specific surface area. A study by Zhang et al. described the fabrication of a glassy carbon electrochemical sensor using graphene nanofibers combined with gold nanoparticles as the modifying material. The new sensor showed reproducible results, high long-term stability, and exceptional electrocatalytic properties in the electrochemical detection of capecitabine, with a detection limit of 0.0171 μM [14]. Electrochemical modification of commercial electrodes is an efficient method for depositing metal particles. Stepwise electrochemical deposition of metal particles has the advantage of allowing fine control over the amount of metal deposited, the number of metal sites, and their size. The special properties of carbon materials make them appropriate for use as electrode supports in developing modified electrodes [15].
Silver (Ag) particles are among the best developed and most widely used for modifying the surfaces of working electrodes, because they are cheap compared with other materials, possess good chemical and physical properties, and offer excellent electron transfer rates. Also, due to their optimal conductivity and biological compatibility, recent decades have seen an emphasis on the use of gold particles (AuPs) in sensor fabrication [12]. AuPs have been applied in electrochemical fields to improve sensor performance; they have special properties such as high conductivity, a large specific surface area, a strong adsorption capacity, biocompatibility, and high electrochemical catalytic activity [16]. This study aimed to modify two substrates, i.e., graphite (GR) and glassy carbon (GC), with graphene (GP), Ag, Au, and Pt, and to test them for tetracycline detection. A study was conducted to identify the negative and positive aspects of each electrode substrate in relation to the demands of the sensing application (sensitivity, selectivity, the lowest limit of detection, and individual or simultaneous detection). EXPERIMENTAL PART The electrochemical studies were performed using an Autolab PGSTAT 302 potentiostat-galvanostat (Eco Chemie, The Netherlands), controlled by a computer running GPES 4.9 software, and a three-electrode cell. The cell comprised an electrochemically modified working electrode, a platinum counter electrode, and a saturated calomel electrode (SCE) as the reference electrode. The commercial electrodes used for obtaining the new electrodes with improved electroanalytical properties were a glassy carbon (GC) electrode provided by Metrohm and a pencil graphite electrode (GR). The working electrodes were mechanically cleaned using 0.2 μm alumina powder (Al2O3) and then washed with distilled water. The electrochemical modification of the commercial GC and GR electrodes used graphene oxide, which was electrochemically reduced on the substrate to give graphene (GP), and gold (Au), platinum (Pt), and silver (Ag) particles/films electrodeposited by applying the chronoamperometry technique at different potentials and deposition times. GP deposition on the substrate surface occurred at a potential of -1.5 V for 120 s. The modification of the electrodes with Ag particles occurred at an electrodeposition potential of -1.3 V with an electrodeposition time of 5 s. For modification with gold particles, a potential of -0.3 V and an electrodeposition time of 300 s were applied, while the deposition of platinum on the electrode surfaces was assured at a potential of -0.2 V for 300 s [17]. A solution of 3 mM HAuCl4 + 0.5 M H2SO4 was used for Au electrodeposition, 10 mM H2PtCl6·6H2O + 0.5 M HCl for Pt, and 4 mM AgNO3 for Ag. The supporting electrolyte, a 0.1 M NaOH solution, was prepared using analytical-purity sodium hydroxide (Merck, Germany) and distilled water. The electrode surface was renewed after each experiment by light mechanical cleaning, washing, and an electrochemical treatment consisting of repeated cyclic voltammetry scans between -0.5 and +1 V/SCE in the 0.1 M NaOH supporting electrolyte. The electrochemical technique applied for characterization and analytical applications was cyclic voltammetry. The working conditions for each electrodeposition are gathered in Table 1.
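For quick reference, the electrodeposition conditions described above can be collected into a single configuration structure. The sketch below merely restates the parameters given in this section (potentials vs. SCE, deposition times, and plating baths); the dictionary layout and key names are illustrative conveniences, not part of the original protocol.

```python
# Electrodeposition conditions summarized from the experimental section.
# Potentials in V vs. SCE, times in seconds. Key names are illustrative.
DEPOSITION_CONDITIONS = {
    "GP": {"potential_V": -1.5, "time_s": 120,
           "bath": "graphene oxide, electrochemically reduced on the substrate"},
    "Ag": {"potential_V": -1.3, "time_s": 5,   "bath": "4 mM AgNO3"},
    "Au": {"potential_V": -0.3, "time_s": 300, "bath": "3 mM HAuCl4 + 0.5 M H2SO4"},
    "Pt": {"potential_V": -0.2, "time_s": 300, "bath": "10 mM H2PtCl6·6H2O + 0.5 M HCl"},
}

for species, cond in DEPOSITION_CONDITIONS.items():
    print(f"{species}: {cond['potential_V']} V vs. SCE for "
          f"{cond['time_s']} s in {cond['bath']}")
```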
RESULTS AND DISCUSSION Graphene (GP) was electrodeposited onto graphite (GR) and glassy carbon (GC) substrates using graphene oxide under the conditions reported in the literature [11], in order to test its effect on the subsequent electrodeposition of Ag, Au, and Pt and, further, on the development of a modified electrode with superior features for tetracycline (TC) detection using the cyclic voltammetry (CV) technique. Each metal was electrodeposited onto the graphite (GR)- and glassy carbon (GC)-based substrates according to the reported data, and the resulting electrode compositions were characterized by cyclic voltammetry (CV) in 0.1 M NaOH supporting electrolyte and in the presence of TC. Comparative electrochemical behavior of Ag/Au/Pt electrodeposited onto graphite (GR) and graphene-modified graphite (GR-GP) substrates in alkaline medium and towards the tetracycline (TC) target analyte Cyclic voltammograms were recorded for each Ag/Au/Pt composition electrodeposited onto the GR and GR-GP substrates in 0.1 M NaOH supporting electrolyte and in the presence of 30 µM TC, and are presented comparatively for both GR and GR-GP substrates in Figure 1. It can be seen that the presence of graphene on the graphite substrate increased the capacitive component of the current, due to the electric double-layer capacitance, and a depolarization of oxygen evolution is noticed (Fig. 1a). The effect of graphene within the graphite-based substrate on the metal deposition can be seen in Fig. 1b-d: the peaks corresponding to the redox behavior of the metal are smaller, which can be associated with competition between graphene and metal reduction due to the close potentials applied for each deposition. For Ag and Au electrodeposited on the graphite substrate, the presence of graphene led to a polarization effect towards oxygen evolution, while for Pt a depolarization is noticed in the presence of graphene. Only a small change in voltammogram shape is observed for Pt electrodeposited on the graphite substrate in comparison with GR and GR-GP (Fig. 1a and 1d), which reveals a low platinum content within the electrode composition. A similar oxidation and reduction behavior of TC, manifested over the whole potential window between oxygen and hydrogen evolution, was found for the GR and GR-GP electrode compositions, which suggests a possible electropolymerization process of TC or its oxidation products. For the GR-Pt and GR-GP-Pt electrode compositions, the presence of TC reduced both anodic and cathodic currents, an aspect that can be associated with a possible electrode fouling effect. Very good signals were found for the GR-Ag and GR-GP-Ag electrode compositions in relation to both detection current and potential. Besides the anodic signal, the Ag-based electrode compositions exhibited cathodic signals as well. The useful current, considered as the signal for TC detection and obtained by subtracting the background current from the peak current recorded in the presence of 30 µM TC, was determined for each composition; the values are gathered in Table 2. Comparative electrochemical behavior of Ag/Au/Pt electrodeposited onto glassy carbon (GC) and graphene-modified glassy carbon (GC-GP) substrates in alkaline medium and towards the tetracycline (TC) target analyte Similar experiments were carried out for Ag/Au/Pt electrodeposited onto the GC and GC-GP substrates in 0.1 M NaOH supporting electrolyte and in the presence of 30 µM TC, in comparison with the bare GC and GC-GP substrates.
The cyclic voltammograms are presented in Fig. 2. In comparison with the GR electrode composition, the presence of graphene within the GC composition is manifested especially as a depolarization effect towards oxygen evolution and less in the capacitive component of the current. A current increase in the presence of TC was noticed only within the oxygen evolution potential region for GC-GP; for GC, in comparison, a small current increase due to TC oxidation was found within the potential window (Fig. 2a). The effect of graphene within the GC-based substrate on the metal deposition can be seen in Fig. 2b-d and is similar to that for the metals electrodeposited onto the GR-based substrate. The useful current reached on the GC-based electrode compositions is smaller than that found on the GR-based electrode substrate, owing to the lower background current, which influenced the current range. However, their behavior is important for choosing the detection potential in the further development of the detection protocol using the cyclic voltammetry technique. The detection parameters considered as a reference for the development of the detection protocol, i.e., the detection potentials and the useful currents, are gathered in Table 3 for the glassy-carbon-based electrode compositions. Even though Ag/Au/Pt electrodeposited onto the GR and GR-GP substrates showed potential for the development of a detection protocol, the previously selected useful currents did not increase linearly with increasing TC concentration, which indicates that the mechanism of TC oxidation and reduction is not based on a diffusion step and, implicitly, that these compositions are not suitable for the detection process. As examples, Figures 5 and 6 show the cyclic voltammograms recorded on GR-Ag and on GR-GP-Ag in the presence of TC concentrations increasing from 10 to 60 µM. All detection characteristics determined from the calibration plots for each electrode composition are gathered in Table 4. The relative standard deviation (RSD), the lowest limit of detection (LOD), and the limit of quantification (LOQ) were determined for three replicates based on the following equations [21]:

S = \sqrt{\frac{\sum_{i=1}^{n}(x_i - \bar{x})^2}{n-1}}, \quad RSD = \frac{S}{\bar{x}} \times 100\%, \quad LOD = \frac{3S}{m}, \quad LOQ = \frac{10S}{m} \quad (1)

where x_i are the current values, \bar{x} is their mean, S is the standard (square average) deviation, RSD is the relative standard deviation, LOD is the lowest limit of detection, LOQ is the limit of quantification, and m is the obtained sensitivity (the slope of the calibration plot). Based on the results presented above, it can be noticed that the highest sensitivity for TC detection was achieved for the GR and GR-GP electrodes, which was expected taking into account the higher background current of GR in comparison with GC. However, it was not possible to develop a TC detection protocol for Ag/Au/Pt electrodeposited onto the GR and GR-GP substrates. The results achieved for the GC and GC-GP substrates showed that all Ag/Au/Pt electrodeposited GC and GC-GP electrodes exhibited useful properties for the development of a TC detection protocol. In comparison with GR, the GC electrode exhibited worse electroanalytical parameters. Except for the unmodified GC electrode, all Ag/Au/Pt compositions electrodeposited onto the GC and GC-GP substrates allowed the development of TC detection procedures using CV, characterized by better limits of detection and lower detection potentials (see Table 4).
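To illustrate how Eq. (1) is applied to the calibration data, the sketch below fits a straight line to background-corrected ("useful") peak currents, takes the slope as the sensitivity m, and derives RSD, LOD, and LOQ from three replicate measurements. The concentration and current values are hypothetical placeholders, not data from this study; only NumPy is assumed.

```python
import numpy as np

# Hypothetical calibration data (not from this study): TC concentration
# in µM versus background-corrected ("useful") peak current in µA.
conc = np.array([10.0, 20.0, 30.0, 40.0, 50.0, 60.0])      # µM
i_useful = np.array([0.42, 0.81, 1.25, 1.63, 2.08, 2.47])  # µA

# Sensitivity m is the slope of the linear calibration plot.
m, intercept = np.polyfit(conc, i_useful, 1)

# Three replicate currents at one concentration, as in the text (hypothetical).
replicates = np.array([1.23, 1.27, 1.25])  # µA
S = np.std(replicates, ddof=1)             # standard ("square average") deviation
rsd = 100.0 * S / replicates.mean()        # relative standard deviation, %

lod = 3.0 * S / m    # lowest limit of detection, Eq. (1)
loq = 10.0 * S / m   # limit of quantification, Eq. (1)

print(f"m = {m:.4f} µA/µM, RSD = {rsd:.2f} %, LOD = {lod:.3f} µM, LOQ = {loq:.3f} µM")
```

The linearity of this calibration plot is precisely what the GR-based compositions lacked, which is why no detection protocol could be developed for them.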
A better limit of detection is the main target for the detection protocol, and it can be seen that Ag electrodeposited on GC-GP exhibited the lowest limit of detection for TC determination, at a detection potential of 0.460 V/SCE on the cathodic branch of the cyclic voltammogram. This also represents a great advantage for the possible selective detection of TC within a multi-component system whose other components are detected through anodic detection procedures. A very interesting behavior was found for GC-GP-Au, based on a negative anodic detection potential, which likewise shows great potential for the simultaneous detection of TC within multi-component systems.
3,663.8
2020-10-14T00:00:00.000
[ "Environmental Science", "Chemistry", "Materials Science" ]
Global network of computational biology communities: ISCB's Regional Student Groups breaking barriers Regional Student Groups (RSGs) of the International Society for Computational Biology Student Council (ISCB-SC) have been instrumental in connecting computational biologists globally and in creating more awareness about bioinformatics education. This article highlights the initiatives carried out by the RSGs, both nationally and internationally, to strengthen the present and future of the bioinformatics community. Moreover, we discuss the future directions the organization will take and the challenges to advancing further in the ISCB-SC main mission: "Nurture the new generation of computational biologists". All authors are affiliated with the ISCB-SC Regional Student Group program. Competing interests: The events mentioned in the article were partially supported by funds from the ISCB Student Council, a subsidiary of the International Society for Computational Biology. Introduction Regional Student Groups (RSGs) are student-oriented groups affiliated with the Student Council of the International Society for Computational Biology (ISCB-SC) 1 . Aligned with the mission and objectives of the parent organization ISCB-SC, RSGs were created to promote networking amongst budding computational biologists in their local geographical regions. Since its formation in 2006 with four RSGs (Netherlands, India, Korea, and Singapore), the program has come a long way, with 30 active RSGs operating around the world, together constituting a global network of over 2000 members. RSGs are completely autonomous and, over the last decade, with the economic support of the ISCB-SC, have organized a large variety of activities according to their communities' needs. Computational biology and bioinformatics are relatively new, multidisciplinary areas, and hence undergraduate education on these topics is scarce. Instead, young researchers in these fields often come from disciplines such as molecular biology, computer science, or physics and need to complement their background education with knowledge from other disciplines. Additionally, the growing importance of computational biology in a wide range of biological fields is motivating young researchers in purely experimental groups to gain more expertise in this emerging field 2 . For all these reasons, the RSGs are playing an instrumental role in promoting networking and knowledge transfer related to computational biology amongst student researchers. This is typically achieved by organizing offline and online networking and educational events such as workshops, symposia, hackathons, online competitions, virtual seminars, and many others. When the ISCB-SC was formed in 2004, one of its main challenges was that students from different geographical regions had different needs that could hardly be addressed by a single activity or event. As a consequence, the RSG program was created in 2006 so that people living in specific regions could articulate their own activities, which would, in turn, enhance networking and foster the emergence of regional leaders who could later become potential successors to the ISCB-SC leadership. Each RSG has a different organizational architecture depending on the local requirements, objectives, and initial setup. For instance, some RSGs have been built from scratch, whereas others have been created in collaboration with existing student organizations such as COMBINE (Australia) and SASBi (South African Student Bioinformatics Society).
Irrespective of the organizational setup, a few requirements have been kept mandatory for setting up an RSG in a region. The steering team requires a President and a Secretary, who are primarily students, and a faculty advisor who is a member of the ISCB. It is also highly encouraged that this steering team include representation from multiple universities/institutes to promote local collaborations in the field. Setting up an RSG involves taking into consideration operational and logistical aspects, which can be a challenge at the start. However, this is a great learning experience for the young researchers involved, as they can develop transferable soft skills such as conference and symposium organization, fundraising, conflict management, team building, project execution, and many others. Some of the operational hurdles and related case studies have been highlighted in our previous article 3 . All 30 ISCB-SC RSGs have organized a diverse battery of events tailored to address specific needs and requirements of their target audiences and members. Below we discuss some of the many successful ventures carried out by the different RSGs. Regional events are great networking platforms Since 2005, the ISCB-SC has organized several events at distinct levels of national or international cooperation. One-day symposia, namely SCS 4,5 , ESCS 6 , LA-SCS 7 and SCS Africa 8 , are organized yearly as satellite events to the main ISCB official conferences: ISMB, ECCB, ISCB-LA, and ISCB-Africa, respectively. These symposia constitute the ISCB-SC flagship events and have itinerant locations for each edition. Although these events constitute the perfect occasion for the general ISCB-SC leadership to get together and discuss in person the plans and balances for the current year, travel and accommodation costs make it difficult for all students to attend; venues are often located in capital cities, and the official language is English, which can intimidate young students and make it difficult for them to participate. To overcome these obstacles, and inspired by the success of the aforementioned symposia, various RSGs have organized one-day symposia either under the aegis of a major conference or as stand-alone events. In the next section, we revisit the highlights of some of the latest RSG-organized events, such as symposia, workshops, and social meetings, grouped by continent. Latin America Since 2016, RSG-Brazil has successfully organized three symposium editions in collaboration with the Brazilian Association of Bioinformaticians and Computational Biologists (AB3C), with an average of 70 delegates per year. RSG-Colombia organized two regional meetings, in Medellin and Bogotá, in 2017. Their national meeting is held every two years during the biennial Colombian Congress in Bioinformatics (http://ccbcol.org/) in collaboration with the Colombian Society of Bioinformatics and Computational Biology (http://www.sc2b2.org/). The first edition of this national meeting was held in Cali and featured an intense day of science and networking, with 12 student talks from 8 Colombian universities. Similar to previous editions, the 3rd Argentine Symposium of Young Bioinformatics Researchers (SAJIB) 9 was held on July 28-29, 2018 at Fundación Instituto Leloir (FIL), in Buenos Aires, Argentina. SAJIB was carried out over two days: one day reserved for workshops and another for the symposium.
In this edition, they offered two courses: "Introduction to Python" and "Filtering, assembly, and assessment from NGS data." Forty-five students and young researchers attended both. The second day comprised the symposium, which gathered 34 people. Additionally, RSG-Argentina has been joining efforts to expand the bioinformatics community across the country. During 2017-2018, RSG-Colombia was involved in the organization and teaching of different bioinformatics courses, such as tutorial sessions on metagenomic data analysis at Universidad de Antioquia and Universidad del Valle. A similar approach has been employed by RSG-Chile in organizing tutorial workshops on topics such as genomics, coarse-grained molecular dynamics, and R programming. Initiated in 2017, RSG-Chile primarily has a presence on two campuses in Chile and has been working on spreading to other universities. Despite various logistic and technical hurdles, they successfully managed two workshops in the past year and received a good response from the audience. Technically oriented seminars have also been organized by RSG-Brazil, such as "Genomic data analysis using Python programming" and a hands-on course on bioinformatic analysis of data related to tropical diseases. Europe RSG-Spain has organized several Bioinformatics Student Symposium editions, either preceding the Spanish Bioinformatics Symposium or as standalone events, typically gathering over 50 attendees for selected scientific and/or career talks, workshops, and networking activities. RSG-UK has organized several bioinformatics and life science student symposia since its inception, along with several workshops and seminars gathering the UK-wide community of bioinformatics students and scientists 10 . In 2018, RSG-Turkey organized a one-day symposium as a satellite meeting to the most prominent international bioinformatics meeting in Turkey, the 4th International Symposium on Health Informatics and Bioinformatics (HIBIT). Collaborations between various RSGs have resulted in successful events such as the BeNeLux Bioinformatics Conference, organized jointly by RSG-Belgium, RSG-Netherlands, and RSG-Luxembourg. Apart from formal sessions, informal setups have also proved to be good networking events, well received by young computational biologists across Europe. In 2017, RSG-Luxembourg organized a series of science pub quiz events named 'Sci-Pub,' where scientists and citizens casually met to answer fun questions about science and win prizes. Invitations were extended to the broad public through Facebook ads, as well as to students from various backgrounds at the University of Luxembourg. After the end of each session, knowledge was assessed through written questionnaires. In total, four "Sci-Pub" events spanned the second semester of 2017, with an average attendance of 30 people per session. Similarly, one of the recurring activities of RSG-Switzerland, called "Bioinformatics in the Pub," has been very much appreciated by Swiss students. It is a monthly meeting that gathers bioinformaticians from different departments of the University of Lausanne (UNIL) and the polytechnic school EPFL in a friendly atmosphere. This event is planned to be extended to Basel and Zurich in the future. RSG-Switzerland is also closely related to the Swiss Institute of Bioinformatics (SIB). Encouraged by the SIB, RSG-Switzerland aims to enable students to discover career paths outside of academia.
It is in this spirit that they were given the opportunity to organize a full session, named "Entrepreneurs' stories: opportunities and challenges of starting your own company," during the Basel Computational Biology conference BC2. RSG-Denmark has focused on smaller events that facilitate networking between computational biologists around Copenhagen; they have organized workshops and a regular bioinformatics coffee meetup. At CBio Coffee events, students got a chance to ask questions on how to further their careers, for example: "What elective courses should I focus on?", "How can a foreign expat best further his/her career in Denmark?", or "How would it be to work in a specific company as a data scientist/bioinformatician?" Their young neighbouring community, RSG-Sweden, organized career events that have been much appreciated; the RSG-Sweden community has also organized several journal clubs and online networking hours. Africa RSG-South Africa organized a one-day student symposium in association with the South African Bioinformatics (SASBi) and Genetics (SAGS) Societies 11 for two consecutive years. To encourage student presenters to showcase their research work, RSGs also invited eminent speakers who are distinguished scientists in their areas. RSG-Northern Africa organized a conference on "Personalized Medicine" and invited Prof. Peter Tonellato from Harvard Medical School as their keynote speaker in 2015. RSG-South Africa organized a session of exhibitions and workshops at the National Science Festival, with a prime focus on school-going scientists. Irrespective of venue locations, events organized by RSGs have helped to facilitate interactions with students from other countries as well. For instance, most of the events hosted by RSG-Northern Africa since 2013 have had venues in different parts of Morocco, but students from neighbouring countries such as Tunisia and Mali could also participate in the RSG-Northern Africa symposium, as they were provided with travel fellowships to travel to the event venue in Nador, Morocco. In 2017, the 2nd RSG-Northern Africa Symposium was organized in December in Casablanca, Morocco. This event was co-funded by H3ABioNet, which supported three travel fellowships for students coming from Tunisia and Mali. This was followed by a further meeting and student symposium alongside the International Society for Computational Biology (ISCB) and the African Society for Bioinformatics and Computational Biology (ASBCB) biennial conference in October 2017 in Entebbe, Uganda. Forty-four students from 9 different countries across the continent attended, including participants with various levels of expertise. The symposium featured a keynote speaker, Dr Segun Fatumo, as well as student presentations and a poster session. These events are valuable for students in terms of experience as well as for networking within the continent, not only within their own country 12 . USA Some symposia have extended to more than a one-day schedule and comprised various events. In December 2017, RSG-Southeastern USA organized their first research symposium for 2017-2018 in collaboration with the University of South Carolina; the University of South Florida, St. Petersburg; and the University of Alabama, Birmingham. There were research talks from professors across these universities, as well as hands-on workshops on machine learning, designing pipelines for genetic analyses, and three-dimensional modelling of biomolecules.
Undergraduate and graduate students also had the opportunity to give talks and present posters at the symposium. For students who seek initiatives where they can acquire hands-on experience with new concepts and techniques, standalone workshops are typically very appealing. RSG-District of Columbia (USA) has been organizing summer workshops on "Bioinformatics, Genomics and Computational Biology" at the University of Maryland campus, aiming to involve researchers from different programming backgrounds so that all can benefit from the workshop (https://iscb-dc-rsg.github.io/workshop2017/). Asia For a panel discussion on "Career Opportunities in Computational Biology and Bioinformatics" held during InCoB 2018 (International Conference on Bioinformatics), RSG-India invited various scientists and professionals with considerable experience in their respective fields of academia and industry, in India and abroad. The event, held at Jawaharlal Nehru University, New Delhi, focused on different career opportunities and the skill sets required for a job profile in academia or industry. Online platforms for networking and knowledge transfer The increasing number of social media platforms and the usage of web resources have led to new communication channels for networking. The community of STEM researchers is expanding its presence on Twitter and other social media platforms to voice its opinions on crucial matters, share research work, and promote science. Social media platforms allow people to connect quickly and frequently, irrespective of their locations. The majority of RSGs interact with their members via Twitter, curated mailing lists, online groups, and official Facebook pages. Several RSGs are spreading their branches by launching online initiatives. The webinar project started a few years ago by RSG-Turkey has reached an audience of more than 350 people (https://www.bigmarker.com/communities/bioinfonet) in over 30 countries and continues to grow. The RSG-Turkey team has also initiated collaborative sessions with RSG-Colombia and RSG-Denmark. The primary goal of these webinars is to encourage researchers, and mainly students, to learn more about computational biology in their countries as well as abroad. In collaboration with RSG-Turkey, RSG-Colombia is inviting bioinformaticians and computational biologists working in Colombian universities as speakers. This is an essential step towards increasing the visibility of the research work being carried out. Starting in 2019, RSG-Southeastern USA is also organizing online podcasts and talks to engage, foster, and increase participation amongst the bioinformatics student groups across the Southeastern USA region. Online platforms also benefit regions with limited access to resources, as in the case of RSG-Western Africa. H3ABioNet, a Pan-African bioinformatics network comprising 32 bioinformatics research groups distributed amongst 15 African countries, with two partner institutions in the USA, is a major supporting network behind RSG-Western Africa. RSG-Western Africa has primarily benefited from H3ABioNet as it provides free content through webinars, funding tailored for students in resource-limited countries to attend career conferences and workshops, and, more recently, facilitated participation in bioinformatics development through inclusion in a variety of H3ABioNet projects.
In addition to webinar series, online competitions have also been organized: a programming challenge, 'CASPita,' by RSG-Italy, and a 'Research Writing Competition' by RSG-India. RSG-Italy's programming challenge was inspired by the CASP (Critical Assessment of Structure Prediction) 10 competition and hence named 'CASPita.' The participants were challenged to write a parser for the text output of BLAST. A few groups from all over Italy joined the competition and were evaluated on coding skill and biological accessibility. The winner was awarded a monetary prize funded by the ISCB. In addition, a "Scientific Writing Competition" was organized by RSG-India at the end of 2018. The participants were asked to submit an essay on any of the three topics provided. The topics were selected to encourage students to be creative and innovative while requiring prerequisite knowledge of computational biology and bioinformatics and familiarity with the latest advancements and present status of the research area. The competition attracted commendable participation from students across the country, and the best entries, graded on creativity, innovation, futuristic outlook, and other aspects, were awarded. Future directions and plans: opportunities and obstacles At present, the RSG program is extending further to new regions such as Lebanon, the Czech Republic, and Bangladesh, among other countries. Existing RSGs are attempting to expand further in their areas of operation; for example, RSG-Spain intends to split into several local nodes to better cover its large region. Similar efforts have been carried out by RSG-Australia to establish different divisions across the country. Many new RSGs have started in the past three years, such as RSG-Costa Rica, Colombia, Chile, Greece, Bangladesh, Jordan, and Southeastern USA. The relatively new RSG-Sweden has also established itself as a bridging entity between the student community and industry in the field of computational biology in Sweden since its initiation in 2018. To further expand the community and make it inclusive, they have established the concept of branches, starting with Lund and Stockholm and, most recently, Uppsala and Gothenburg. In the future, RSG-Sweden aims to collaborate with other RSGs across Europe to share knowledge and ideas and to strengthen the global community of the ISCB and its Student Council. Future plans include recruiting new members to the committee and branches and organizing seminars and hackathons. Currently, RSG-Germany is in the process of re-establishing its connections with students by initiating monthly literature review events and is planning to organize a student symposium in Heidelberg in 2019. Furthermore, the RSG plans to expand its network with German universities and arrange lunch meetings between interested participants along the lines of the Connect movement (https://connected.mit.edu/about/connector/mit). RSG-Chile has closely collaborated with biotechnology leaders from all over Latin America who make up the Allbiotech community. Allbiotech's purpose is to establish and promote a Latin American community ecosystem that includes all segments of the economy. This collaboration also resulted in the starting up of RSG-Costa Rica.
Their focus is to unify the community interested in both disciplines through meetings, workshops, and diffusion activities like the ones developed at the ISCB LA-SCS 2018 and Allbiotech 2018, and to work together as part of the organizing team for Allbiotech 2019, which will take place in Costa Rica. Although new RSGs show zeal and enthusiasm to expand their ventures in their regions, they also face issues with establishment or team transitions. For instance, to promote networking among students and postdocs on the west coast, an RSG based in the California and Nevada region of the United States was initiated in 2016. It faced initial hurdles in expanding its membership, even though it tried by being one of the exhibitors at the NCCB (North California Computational Biology) symposium in 2016. Later, RSG-California+Nevada was merged with the undergraduate bioinformatics student organization of the University of California, San Diego (UCSD) for further development. The way to success and growth when running an RSG is full of challenges. Irrespective of several operational hurdles and obstacles 13 , the RSGs have been making efforts to spread enthusiasm for computational biology research. In the future, the RSG program aims to expand to new regions, particularly in developing nations, to promote collaborations between RSGs, and to exploit the virtual space via virtual seminar series programs. Data availability No data is associated with this article. Grant information The events mentioned in the article were partially supported by funds from the ISCB Student Council, a subsidiary of the International Society for Computational Biology. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
4,631.2
2019-09-02T00:00:00.000
[ "Computer Science", "Biology", "Education" ]
Evaluation of LL-37 Lipoprotein as an Innate Immunity Marker among Sudanese Patients with Cutaneous Leishmania Background: The leishmaniases are a group of diseases with a broad range of clinical manifestations caused by several species of parasites belonging to the genus Leishmania. LL-37/hCAP18, the only cathelicidin in humans, is expressed as an 18 kDa preproprotein. The most prominent function of cathelicidins is their ability to inhibit the propagation of a diverse range of microorganisms, which occurs at the micromolar range. Objective: The study aimed to evaluate the plasma LL-37 level in Sudanese Leishmania patients. Methods: In a case-control study, 300 subjects were enrolled (200 cases and 100 controls); 5 ml of venous blood was collected in EDTA containers, then plasma was obtained and stored frozen at -80°C. LL-37 was estimated using a competitive ELISA. The data were analyzed using SPSS version 21. Results: The results revealed that 115 (57%) of the Leishmania patients were male and 85 (43%) were female. The plasma LL-37 level was significantly increased in Leishmania patients (1.30 ± 0.71 ng/ml) compared to the controls (0.21 ± 0.20 ng/ml) (p < 0.001). Conclusion: Leishmania patients had higher levels of plasma LL-37, suggesting an effective antimicrobial immune process enhancing healing of cutaneous leishmaniasis. Introduction The leishmaniases are a group of diseases with a broad range of clinical manifestations caused by several species of parasites belonging to the genus Leishmania (Family: Trypanosomatidae). The Leishmania parasite, a haemoflagellate protozoan organism, is exclusively transmitted by the bite of a female sandfly of the genera Phlebotomus or Lutzomyia. There are three clinical forms of leishmaniasis: visceral leishmaniasis (VL), including post-kala-azar dermal leishmaniasis (PKDL); cutaneous leishmaniasis (CL); and CL with involvement of the mucous membranes, also called mucocutaneous leishmaniasis (MCL) [1]. CL in Sudan is similar to the disease in other endemic areas.
Most patients have multiple lesions of the nodular or ulcerative type [3]. Typically, lesions start to heal spontaneously after approximately three months [4]. Unlike American mucocutaneous leishmaniasis, Sudanese mucosal leishmaniasis (SML) is not preceded or accompanied by cutaneous lesions. Three clinical presentations of SML have been reported: nasal, which is characterized by nasal obstruction, mucoid discharge, and slight bleeding; oral, where the patient complains of a sensation of fullness of the mouth, spontaneous loss of teeth, and bleeding from the gums; and oro-nasal, where the hard palate may perforate. The disease is almost exclusively found in adult males (20-70 yr) [5]. Cathelicidins are a family of evolutionarily conserved antimicrobial peptides described in mammals, birds, fish, and reptiles. This class of pleiotropic peptides is an important mediator of innate immunity against microbial pathogens and provides a first line of defense against infection by promoting rapid elimination of pathogens. LL-37/hCAP18 is the only cathelicidin in humans [6]. The most prominent function of cathelicidins is their ability to inhibit the propagation of a diverse range of microorganisms, which occurs at the micromolar range [7]. Besides their direct antimicrobial action, recent studies have revealed multiple functions of cathelicidins in many other activities relating to tissue repair and innate immunity. The human and porcine cathelicidins, LL-37/hCAP18 and PR-39, respectively, for example, have been reported to modulate the activity of immune and inflammatory cells [8,9]. Cathelicidins have also been shown to promote re-epithelialization of human skin wounds [10] and rat gastric ulcers [11]. The study aimed to evaluate the plasma LL-37 level in Sudanese Leishmania patients. Methods In this case-control study, 300 subjects were enrolled: 200 diagnosed with CL infection and 100 controls (free of CL infection). Blood samples were drawn after obtaining written informed consent from the patients; 5 ml of venous blood was collected from each subject in an EDTA container, and plasma was then obtained and stored frozen at -80°C. LL-37 was estimated using a competitive ELISA. Data were expressed as percentages, and differences in mean variable levels between the two groups were tested by Student's t-test. The results showed that 115 (57.0%) of the Leishmania patients were males and 85 (43.0%) were females (Figure 1). Regarding age, the highest proportion (47%) of Leishmania patients was found among 12-29 yr olds, followed by 30-47 yr, and then 48-65 yr (Figure 2). The mean plasma LL-37 level (ng/ml) was 1.30 ± 0.71 for the cases and 0.21 ± 0.25 for the control group; a significant increase was found in cases compared with controls (p < 0.001) (Figure 3). The mean LL-37 level was also significantly higher in female Leishmania patients compared with male Leishmania patients (p < 0.001) (Figure 4). LL-37 is one of the most studied antimicrobial peptides and plays important roles in the innate immune system. In addition to its anti-infective activities, LL-37 stimulates local angiogenesis, acts synergistically with the epidermal growth factor receptor to promote epithelial growth, and attracts monocytes and neutrophils through formyl peptide receptors on these cells. In this way, the peptide helps orchestrate the inflammatory process [12][13][14]. Unlike the present study, which examined the plasma level of LL-37, Kulkarni MM et al.
studied the role of this host peptide in the control of dissemination of cutaneous infection by the parasitic protozoan Leishmania, using a mouse model knocked out for cathelicidin-type antimicrobial peptides (CAMP) [15]. They found that the pronounced host inflammatory infiltration in lesions and lymph nodes of infected animals was CAMP-dependent. To our knowledge, this is the first study of plasma LL-37 in cutaneous Leishmania patients. The present data revealed an increase in plasma LL-37 levels among cutaneous Leishmania patients, suggesting that increased expression of LL-37 is, moreover, able to limit dissemination of CL. These results support Kulkarni MM et al.'s suggestion that CAMP is crucial for the local control of cutaneous lesion development and of parasite growth and metastasis [15]. Conclusion Leishmania patients have higher levels of plasma LL-37, suggesting an effective antimicrobial immune process enhancing healing of CL. Figure 1: Distribution of patients according to gender. Figure 2: Distribution of patients according to age.
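As a sanity check, the reported group comparison can be reproduced directly from the summary statistics given above (cases: 1.30 ± 0.71 ng/ml, n = 200; controls: 0.21 ± 0.25 ng/ml, n = 100). The sketch below assumes SciPy is available and uses Welch's variant of the t-test, since the group standard deviations clearly differ; the original study reports a plain Student's t-test.

```python
from scipy.stats import ttest_ind_from_stats

# Summary statistics reported in the study (plasma LL-37, ng/ml).
mean_cases, sd_cases, n_cases = 1.30, 0.71, 200
mean_ctrl,  sd_ctrl,  n_ctrl  = 0.21, 0.25, 100

# Welch's t-test from summary statistics; equal_var=False because the
# two groups have clearly unequal standard deviations.
t_stat, p_value = ttest_ind_from_stats(
    mean_cases, sd_cases, n_cases,
    mean_ctrl,  sd_ctrl,  n_ctrl,
    equal_var=False,
)
print(f"t = {t_stat:.2f}, p = {p_value:.2e}")  # p is far below 0.001
```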
1,337
2019-09-26T00:00:00.000
[ "Medicine", "Biology" ]