text stringlengths 1.23k 293k | tokens float64 290 66.5k | created stringdate 1-01-01 00:00:00 2024-12-01 00:00:00 | fields listlengths 1 6 |
|---|---|---|---|
Exploring the bushmeat market in Brussels, Belgium: a clandestine luxury business
The European Union prohibits the import of meat (products) unless specifically authorised and certified as being eligible for import. Nevertheless, various scientific papers report that passengers from west and central African countries illegally import large quantities of meat, including bushmeat, into Europe via its international airports. They also suggest that African bushmeat is an organised luxury market in Europe. In the present study we explore several aspects of the African bushmeat market in Brussels, Belgium. We demonstrate the clandestine nature of this market where bushmeat is sold at prices at the top of the range of premium livestock and game meat. Inquiries among central and western African expatriates living in Belgium, who frequently travel to their home countries, indicate that the consumption of bushmeat is culturally driven by the desire to remain connected to their countries of origin. DNA-based identifications of 15 bushmeat pieces bought in Brussels, reveal that various mammal species, including CITES-listed species, are being sold. Moreover, we find that several of these bushmeat pieces were mislabelled.
Introduction
Wild meat, the meat derived from non-domesticated terrestrial animals hunted for consumption or sale as food, ensures the food supply in many regions of the world. In the African tropical rain forest regions, wild meat is referred to as bushmeat. Especially in regions where meat from domesticated animals is scarce or expensive, it often represents the primary or only source of animal protein (Swamy and Pinedo-Vasquez 2014;Wilkie et al. 2016). In such areas, the bushmeat trade also generates income by providing cash through trade (Nasi et al. 2008;Brown and Marks 2008). Hunting and the consumption of bushmeat are integral parts of the cultural heritage of many African communities and extend to the expatriate urban elite who consider it a delicacy and a way to maintain links with a traditional lifestyle (Cawthorn and Hoffman 2015;Ichikawa et al. 2016). This taste preference and desire to retain cultural ties has generated an international market for bushmeat, with widespread organised trade networks feeding this international demand (Chaber et al. 2010).
International bushmeat trade is illegal. The European Union bans the import of meat (products) by passengers, unless it meets the certification requirements for commercial consignments and is presented at the EU border control post with the correct documentation (EU 2019/2122; European Commission 2019). Many wildlife species that are frequently consumed and traded as bushmeat may carry pathogens, and the hunting and processing of bushmeat has been linked to several disease outbreaks (Kurpiers et al. 2016;Dawson 2018;Katani et al. 2019). As such, the uncontrolled import of bushmeat may represent health risks on human and animal populations in the countries of destination.
There is no reliable information on the scale of the international bushmeat trade, yet growing evidence suggests that the amount of bushmeat imported in Europe is substantial (Chaber et al. 2010;Falk et al. 2013). During a survey at Roissy-Charles de Gaulle airport (Paris, France) seven percent of the inspected passengers from west and central African countries were carrying bushmeat, while 25% had livestock meat in their luggage (Chaber et al. 2010). Yet, bushmeat consignments were over 20 kg on average (up to 51 kg), compared to 4 kg for livestock. This and other studies indicate that bushmeat is not only imported for personal use, but also to supply an illegal market for African bushmeat in Europe (Chaber et al. 2010;Falk et al. 2013;Dawson 2018). News reports on the illegal sale of bushmeat in European cities (Milius 2005;Brown 2006;Marris 2006;Oger 2011) describe the clandestine character of these markets and note that customers are prepared to pay high prices to purchase bushmeat.
The illegal international bushmeat trade contributes to the overexploitation of vulnerable and protected species. Studies at European airports for example reported the illegal import of species that are listed as Vulnerable or Critically Endangered by the International Union for Conservation of Nature (IUCN) and that are protected under the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES) (Chaber et al. 2010;Falk et al. 2013;Wood et al. 2014).
The current study provides the very first data on the clandestine bushmeat market in Brussels, Belgium (including communities recognised under the Brussels Capital Region). We check the CITES and IUCN Red List status of the species sold as bushmeat and their market prices. In addition, we examine the motivations for this trade by means of focus group discussions with expatriates from central and western African countries. Because carcasses are usually smoked or cut into small pieces, morphological identification of bushmeat pieces is often unreliable. Therefore, we use DNA barcoding of the mitochondrial markers cytochrome b (cytb) and cytochrome c oxidase subunit 1 (COI) to identify the bushmeat samples (D'Amato et al. 2013;Gaubert et al. 2015).
Sample collection
In November and December 2017, we attempted to purchase bushmeat at nine African grocery stores in the ''Matongé'' quarter in Brussels. The shops were selected because (fresh) African food products were on display (e.g. palm kernel oil, cooking ingredients), and the shopkeepers were of African descent and speaking Swahili or Lingala. In three shops bushmeat was for sale, so that over the course of 2 months we bought twelve bushmeat pieces. In May 2018 three additional bushmeat samples were bought in two other African grocery stores in ''Matongé'' by journalists of the Belgian public television stations RTBF (www.rtbf.be) and VRT (www.vrt.be).
DNA-based bushmeat species identification
Bushmeat tissue pieces were taken 0.5-1 cm under the surface and genomic DNA was extracted using the Qiagen QIAampÒ DNA Micro kit following the manufacturer's instructions. A 658 bp long COI fragment was amplified using the LCO1490 and HCO2198 primer pair (Folmer et al. 1994), while the primer L14723 or L14724NAT was combined with H15915 to amplify a 1140 bp fragment of cytb (Ducroz et al. 2001;Guicking et al. 2006). Details on PCR and sequencing conditions can be found in the supplementary material (Online Resource ESM_1).
To identify the bushmeat pieces the generated sequences were compared against the GenBank nucleotide collection (nr/nt) database using BLASTN (www.blast.ncbi.nlm.nih. gov/Blast.cgi) and against the Species Level Barcode Records using the BOLD Identification System (www.boldsystems.org). Neighbour-Joining trees were constructed to complement the interpretation of the results from the search engines (see Online Resource ESM_1 for more details).
Focus group discussions
We organised qualitative group discussions to gain insight into the reasons for bushmeat import and consumption within the framework of a large awareness project implemented by Brussels Airport aimed at sensitizing African travellers about import rules and policies. Participants were selected from a panel of volunteers who had signed up to take part in market research on a wide range of topics. They included expatriates from central and western African countries, who frequently return to the country where they, or their ancestors, were born. Sixteen persons engaged in the discussions ( Table 1). The participants originated from seven countries (Angola, Burundi, Cameroon, the Democratic Republic of the Congo, the Republic of Guinea, Senegal, Togo) and most have been living in Belgium for at least 10 years. The discussions were organised in two separate groups of eight participants, and lasted for approximately 2 h.
In order to safeguard the authenticity of the answers, participants entered the group discussions without prior knowledge of the subject. No reference to the importation of food items or bushmeat was made during the recruitment of the participants, nor during the introduction at the beginning of the group discussions. The facilitator was an experienced ([ 20 years) market researcher with a psychology background. The discussions were held using an impartial approach, without directing the answers or judging the participants. The facilitator started by creating a relaxed environment before beginning the discussions using a pre-set procedure and predefined questions (Online Resource ESM_2). At first, the discussions focussed on the personal context concerning motivation and importance of travelling to the country of origin after which they slowly moved towards the subject of importing food items, plants, other goods and eventually also bushmeat.
Sample collection
Bushmeat was not on display in the African grocery stores visited. After specifically inquiring after its availability, six vendors declared they did not sell bushmeat. These vendors said that selling bushmeat is illegal and that they did not want to be fined. Conversely, in five shops the vendors did concede after negotiation that they had bushmeat for sale. The meat was either kept in the back of the store or it could be picked up the following day. Three vendors provided a phone number which allowed us to check when a new shipment of bushmeat was expected to arrive, which occurred with an average interval of 1-2 weeks. All bushmeat was claimed by the vendors to originate from the Democratic Republic of the Congo and sold under vernacular (French, Lingala, Swahili) species names ( Table 2). All pieces were heavily smoked and most were wrapped in non-transparent plastic. The announced price of the bushmeat was 40 €/kg. However, the pieces were not weighed, except for the three lightest and two heaviest ones (Table 2). So, after taking into account the weight of the individual pieces, prices ranged from 31-62 €/kg.
DNA-based bushmeat species identification
The 15 bushmeat samples were successfully sequenced for both DNA fragments (only cytb for sample VRT002) and the sequences were deposited in GenBank (COI: MT020839-MT020852; cytb: MT024303-MT024317). Yet, seven of the 15 bushmeat pieces could not be identified to the species-level with certainty (Table 2, Online Resources ESM_1 and ESM_3). For eight pieces-including the three monkey samples-the DNAbased identification did not correspond to the vernacular name communicated by the vendors. Two pieces sold as African buffalo were cattle, while in three instances (Table 2: samples RTBF001, BXL003, BXL007) the identified species belonged to a different family than the alleged species reported by the vendors. The remaining three misidentifications were inaccurate at the genus level. Four pieces of bushmeat originated from three CITES-listed species: red-tailed monkey (Cercopithecus ascanius, sample RTBF001), De Brazza's monkey (C. neglectus, samples BXL011 and VRT001) and blue duiker (Philantomba monticola, sample VRT002). Two other pieces of bushmeat (BXL004 and BXL008) originated from ''duiker'', without further distinction between Peters' duiker (Cephalophus callipygus), Ogilby's duiker (C. ogilbyi) and Weyns's duiker (C. weynsi). Of these three species only Ogilby's duiker is listed on Appendix II of CITES. Species could not be distinguished d VRT001 and VRT002 were sold together for 95 €. Price per kg was calculated using the total price and the sum of the weights of the two pieces, while the price per piece was calculated using the individual weights e Samples were not collected by co-authors PM, CN and SN, so no additional information to interpret the local name was available. Therefore all antelopes were included instead of restricting the translation to duikers only
Focus group discussions
All focus group participants, emphasised the importance of staying connected with the country where they, or their ancestors, were born as well as with the local customs, habits and rituals. They expressed a profound sense of attachment to their relatives and to their ancestral region. In general, all participants perceived African food items as tastier than European products, as well as purer and of better quality. Fifteen participants declared they often import African food items, including bushmeat, into Belgium for personal use, to introduce children to the taste of African food, and to share with relatives and close friends. Several participants benefitted financially from importing these goods, and for some this activity generated enough revenue to pay their travel expenses. All participants were aware of the existence of import regulations on food items and understood why the import of certain goods is illegal, e.g. to prevent disease or protect endangered species, yet they all believed that the items they import are safe. All participants indicated that there is confusion on what is allowed (and what not) and why certain food items are forbidden. Small quantities of dried food, well-packed or transported in cooler boxes, are considered safe and the sale of cooler boxes at African airports is perceived as an affirmation that transporting all food items is allowed. In addition, custom controls are experienced by the participants as a lottery, ''sometimes it passes, sometimes it doesn't''.
Availability of bushmeat and drivers of the market in Brussels
All vendors at the African grocery stores visited in Brussels appeared to be aware that importing and selling bushmeat is illegal. Those who were not deterred from selling bushmeat by the prospect of getting fined, kept the meat hidden. The clandestine, under the counter selling of bushmeat outside Africa has also been observed in London and Paris (Brown 2006;Oger 2011). Our sampling strategy did not allow to estimate the amount of bushmeat that was in stock at the different shops, yet the fact that new shipments arrived on a regular basis indicate that bushmeat is commonly available. The financial benefits from importing food stuff mentioned by some of the focus group participants, adds to the suspicion that the bushmeat market in Brussels is thriving well, the more so as some vendors declared to have stores and customers in other Belgian cities.
The culture-based motivation for the consumption of bushmeat by African expatriates in Brussels, i.e. to retain family and cultural ties, and to share the bushmeat with friends and relatives, has been reported elsewhere, too (Bair-Brake et al. 2014;Walz et al. 2017). The participants understood the import ban for certain products (disease risks or protected species), but were under the impression that some imports are permitted. The confusion about import regulations also arose during focus group discussions in the USA (Bair-Brake et al. 2014;Walz et al. 2017) and Germany (Jansen et al. 2016). This may be the consequence of the fact that information on import regulations is derived from word to mouth information, and personal experience or experiences from family and friends. There appears to be no incentive to actively search for correct official information.
Since bushmeat is illegal in Europe, there are no rules or control with respect to its identification and labelling. Instead, species identifications entirely rely on the doubtful information provided by the vendors. So, not unexpectedly, our limited data suggest that the bushmeat sold in Brussels is frequently mislabelled. This is in line with the equally high level of misidentifications of bushmeat reported on African markets (Bitanyi et al. 2011;Minhós et al. 2013;Galimberti et al. 2015). The fact that bushmeat is often sold under erroneous species names may be explained by the informal character of its commodity chain (Bitanyi et al. 2011;Boratto and Gore 2018). Hence, pieces of bushmeat may pass through several hands so that eventual species information is easily lost along the way. Moreover, the (semi)processed state of the bushmeat probably makes it difficult for vendors to know exactly what they are selling.
In some cases, however, mislabelling may be deliberate. Studies on African markets have shown that hunters and sellers may lie about the identity of the species concerned when it is, for example, a protected species or in order to meet customer expectations (Lindsey et al. 2011;Bitanyi et al. 2011;Minhós et al. 2013). The antelope that turned out to be a monkey (Table 2: sample RTBF001) may be an example where the vendor deceived the customer, since the sample involved a hand and was thus easily recognisable as part of a primate by the vendor. This specific purchase concerned an ahead-order of antelope which was collected the next day, at which time the piece was wrapped in non-transparent plastic and thus unavailable for visual inspection. The two pieces that were sold as African buffalo but that in fact were cattle are another example of deliberate mislabelling of livestock meat as bushmeat.
Bushmeat and species conservation
We were unable to identify all bushmeat pieces to the species level because the DNA reference databases are not complete and/or because the taxonomy of certain species is not yet fully resolved. This affected the number of CITES-listed species, since of the three potential duiker species identified for samples BXL004 and BXL008 only Ogilby's duiker (Cephalophus ogilbyi) is currently listed on Appendix II. Taking this into account, four up to six of the 15 obtained meat pieces originated from protected species, a proportion comparable to the approximate 1/3 of carcasses identified as CITES-listed species at Paris (France) and Swiss airports (Chaber et al. 2010;Wood et al. 2014).
Studies on the international bushmeat trade also found several species listed as Vulnerable, Endangered or Critically Endangered on the IUCN Red List (Chaber et al. 2010;Smith et al. 2012;Wood et al. 2014). In contrast, all of the wild African species identified in the current study are listed as Least Concern. However, this does not mean that these species are not at risk to become threatened in the (near) future. For instance, primates include the highest number of species threatened primarily by human hunt (Ripple et al. 2016). They constitute an important portion of the international bushmeat trade (Brown 2006;Chaber et al. 2010;Smith et al. 2012) and accounted for 3/15 of the pieces bought in the current study. Duikers (Cephalophinae) also represent a large proportion of the bushmeat trade in Africa (Fa and Brown 2009;Olayemi et al. 2011) and make up an important share of seizures at European airports (Chaber et al. 2010;Wood et al. 2014). Especially for these taxa accurate information on the quantity and identity of traded meat might be important to better assess the impact of the (international) bushmeat trade on the population size trends of individual species.
Luxury status of bushmeat in Brussels
Information on the price of imported bushmeat is scarce, yet reports on illegal markets in west European cities indicate that bushmeat is more expensive than livestock meat (Brown 2006;Marris 2006). Interviews with three bushmeat vendors in Paris, trading by telephone or on the streets, indicated they charge 20-30 €/kg (Chaber et al. 2010), prices higher compared to an average of 15 €/kg for domestic meat sold in French supermarkets at the time. Since the interviews were conducted in 2008, the price for bushmeat in Paris may have risen and might now be comparable to the 40 €/kg declared by vendors in 2017-2018 in Brussels. This latter price, however, was most often applied as a price per piece instead of price per weight, a custom comparable to the practice at African markets (Brown and Marks 2008;Okiwelu et al. 2009;Minhós et al. 2013). Due to this practice, prices for bushmeat in the current study were almost always (much) higher than the announced 40 €/ kg. Therefore, a comparison of the prices announced in Paris with the prices paid in the current study might not be straightforward without information on the actual prices per kilogram charged in Paris.
While the price for bushmeat in Brussels reaches up to 62 €/kg there seems to be no relationship between the price and the species involved (irrespective of who provided the species identification). Consulting the price of meat on the web shops of the three largest supermarket chains in Belgium (October 2019), only game meat (i.e. roe-deer, red deer and partridge) was sold at prices above 50 €/kg, while premium beef reached prices of 40-45 €/ kg. This comparison supports the luxury status of bushmeat in Brussels.
Author contributions TB, MDM, SG and EV contributed to the study conception and design. Material preparation, data collection and analysis were performed by FG, SG, PM, CN, SN, MP and SVDH. The first draft of the manuscript was written by SG and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript.
Funding Not applicable.
Data availability The sequences generated during the current study are available in the GenBank repository.
Code availability Not applicable.
Compliance with ethical standards
Conflict of interest The authors declare that they have no conflict of interest.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. | 4,795 | 2020-11-10T00:00:00.000 | [
"Economics"
] |
Oleanolic Acid Enhances the Beneficial Effects of Preconditioning on PC12 Cells
Preconditioning triggers endogenous protection against subsequent exposure to higher concentrations of a neurotoxin. In this study, we investigated whether exposure to oleanolic acid (OA) enhances the protective effects of preconditioning on PC12 cells exposed to 6-hydroxydopamine (6-OHDA). A concentration response curve was constructed using 6-OHDA (50, 150, 300, and 600 μM). The experiment consisted of 6 groups: untreated, OA only, Group 1: cells treated with 6-OHDA (50 μM) for 1 hour, Group 2: cells treated with 6-OHDA (150 μM) for 1 hour, Group 3: cells treated with 6-OHDA (50 μM) for 30 minutes followed 6 hours later by treatment with 6-OHDA (150 μM) for 30 minutes, and Group 4: cells treated as in group 3 but also received OA immediately after the second 6-OHDA treatment. Cell viability and apoptotic ratio were assessed using the MTT and Annexin V staining tests, respectively. In preconditioned cells, we found that cell viability remained high following exposure to 6-OHDA (150 μM). OA treatment enhanced the protective effects of preconditioning. Similarly, with the annexin V apoptosis test, preconditioning protected the cell and this was enhanced by OA. Therefore, preexposure of PC12 cells to low 6-OHDA concentration can protect against subsequent toxic insults of 6-OHDA and OA enhances this protection.
Introduction
The formation of free radicals and an impaired ability of cells to resist stress can result in damage to all major macromolecules in cells including proteins, nucleic acids, and lipids [1]. Living cells have a way of resisting oxidative stress; these include detoxification, production of protein chaperones such as heat-shock proteins, removal of damaged molecules, and increase of the levels of DNA repair enzymes [1]. The latter enzymes are enhanced when cells are exposed to mild stress and this is called preconditioning/hormesis [1][2][3]. Preconditioning is therefore a process by which cells are exposed to mild oxidative stress so as to make them more resistant when subsequently exposed to more severe oxidative stress [1]. During preconditioning bioprotective mechanisms that include induction of cytoprotective pathways (molecular chaperones), antioxidative systems, DNA repair systems, and immune system are stimulated. These molecular mechanisms have been shown to protect cells from various forms of cell death [4,5].
Parkinson's disease is a neurodegenerative disease and its prevalence rate is continuing to escalate as the life expectancy of the general population increases. Despite numerous studies its management remains unsatisfactory. Cell based therapy is one of the emerging treatments of Parkinson's disease and its use has grown in promise since certain stem cells have also been shown to have the ability to enhance endogenous neurogenesis [5]. However, the harsh environment of the diseased brain is a severe threat to the survival and/or correct differentiation of these implanted cells [5].
It has been shown that preconditioning may also occur within cells of the nervous system [6]. This study demonstrated how exposure of a dopamine cell line to sublethal concentrations of 6-hydroxydopamine (6-OHDA) protected against subsequent exposure to high concentrations of 6-OHDA. These results suggested that dopaminergic cells, when treated with a low concentration of a chemical, may enable the cells to survive better in the neurodegenerative brain.
Oleanolic acid (OA) is a relatively nontoxic compound that has been shown to have antitumoric [7], hepatoprotective [8], anti-inflammatory [9], antihyperlipidemic [8], antihyperglycemic or hypoglycaemic [10], and antimicrobial [8] properties, in addition to being an analgesic, antiulcer, anti-infertility, and anticarcinogenic agent [7]. Oleanolic acid therefore seems to have a variety of healthpromoting/disease-preventing characteristics. In this study we investigated the effects of preconditioning on cell viability of PC12 cells and assessed whether OA treatment would enhance the protective effects of preconditioning.
Chemicals and Reagents.
We obtained the adrenal phaeochromocytoma cell line (generally referred to as PC12 cell line) from the Department of Biochemistry, University of KwaZulu-Natal (South Africa). RPMI-1640 growth medium was purchased from Highveld Biological (PTY) Ltd (South Africa). Heat-inactivated horse serum, fetal calf serum, 6-OHDA, trypsin, dimethyl thiazolyl diphenyltetrazolium salt (MTT), and OA were purchased from Sigma-Aldrich (South Africa). Ascorbic acid was obtained from SAARchem (South Africa). Penicillin/streptomycin solution was obtained from Biochrom AG (South Africa). The Annexin V kit was purchased from BD Biosciences (South Africa). All solutions were prepared fresh prior to each experiment or assay.
Cell
Culture. The PC12 cells were grown in tissue culture corning flasks (Whitehead Scientific, South Africa). RPMI-1640 medium containing 300 mg/L L-glutamine, 4.5 g/L Dglucose, 1.5 g/L sodium bicarbonate, 1 mM sodium pyruvate, and 10 mM HEPES buffer was used as a component of growth medium in the following manner: growth medium which was made up of 83% RPMI-1640, 10% heat-inactivated horse serum, 5% fetal calf serum, and 2% penicillinstreptomycin solution (penicillin 10.000 U/mL, streptomycin 10.000 g/mL) and was kept at 37 ∘ C in a humidified incubator supplied with 5% carbon dioxide (CO 2 ). Cells were grown until 70-80% confluency was reached. When enough cells were grown, cells were then trypsinised, a cell count was performed using a Neubauer counting chamber under light microscope and the cells were then plated in a 96-well microtitre plate (for the MTT assay) and a 24-well plate (for the annexin V apoptosis test), at plating densities of 50 000 and 1000 000 cells per well, respectively. Cells were left to adhere to the plate surfaces overnight (12 up to 15 hours). On the day of cell treatment the medium in the wells was aspirated and cells were treated with 6-OHDA dissolved in medium or medium alone. 6-OHDA stock solution was prepared using a saline solution containing 2% ascorbic acid. All solutions were diluted with serum-free medium to their final concentration. After treatment, media with toxin was aspirated and cells were incubated with serum-free media for 24 hours before cell viability and apoptosis assays were conducted.
Concentration-Response
Curve. Four concentrations of 6-OHDA (50, 150, 300, and 600 M) were used to treat cells for a duration of 1 hour. Twenty-four hours later the MTT assay and Annexin V staining tests were performed to assess cell viability and apoptotic ratio, respectively.
Preconditioning Regimen.
Six groups of cells were used in the experiment. These consisted of 2 control groups (untreated cells and cells treated with OA (5 M) only). Group 1 cells (50 M) and Group 2 cells (150 M) were exposed to 6-OHDA for 1 hour. Group 3 cells were exposed to 6-OHDA (50 M) for 30 minutes before being exposed 6 hours later to 6-OHDA (150 M) for a further 30 minutes. Group 4 cells were treated as in Group 3 but were also treated with OA (5 M) for a period of 24 hours after the completion of the protocol used for Group 3 cells. The viability of cells and apoptotic ratio were assessed 24 hours later by the MTT viability test followed by the Annexin V apoptosis test.
Cell Viability
Testing: MTT Procedure. The MTT assay was used to test for the viability of PC12 cells. The cells were plated at a density of 50 thousand cells per well in a 96well microtiter plate 12 to 15 hours before experimentation. After each cell treatment protocol mentioned in the previous section, the cells were incubated in a 37 ∘ C incubator for 24 hours. Following this, the cells were incubated with 20 L of MTT (5 mg/mL) for 3.5 hours at 37 ∘ C in an incubator. After the incubation, media was removed without disturbing the cells and DMSO (150 L) was added to dissolve the formazan crystals. The plates were shielded from light using a foil and left in an orbital shaker maintained at 600 revolutions/minute for 15 minutes. The resulting purple solution was measured spectrophotometrically. Cells with normal functioning mitochondria that are metabolically active and proliferating produce an increase in the amount of MTT formazan formed and hence an increase in absorbance. The amount of MTT formazan product formed was determined by measuring absorbance (A) using a microplate reader (Bio-Tek) at a wavelength of 630 nm.
Annexin V-Propidium Iodide
Procedure. The Annexin V assay was used to measure the apoptotic ratio (portion of apoptotic cells: portion of viable cells) of PC12 cells. The cells were plated at a density of 1 million cells per well in a 24well plate 12 to 15 hours before experimentation. During the cell treatment protocol, cells were incubated with serum-free medium. Following this, the cells were processed as follows. The media was discarded from all the wells and cells were washed twice in 0.90% w/v phosphate buffered saline (PBS, 500 L). Trypsin was added to each well (just enough to cover the surface) and incubation occurred for 5 minutes in a 35 ∘ C heated incubator. Following incubation, the plates were lightly tapped until all the cells were detached from the well surface. PBS (500 L) was added to each well to neutralize the trypsin. The cell suspension from each well was transferred to a Falcon tube, pipetted up and down to break up the clumps, and centrifuged at 12,000 g in a refrigerated centrifuge (Hermle Labortechnik GmbH, Germany) for 5 minutes at 4 ∘ C after which the supernatant was removed, leaving the pellet in the tube.
Staining Procedure.
Binding buffer was prepared according to the manufacturer's instructions (BD Pharminogen) and 100 L was added to the cell suspension into the Falcon tube. FITC Annexin V (5 L) was added to each sample followed by the addition of propidium iodide (5 L). The tube(s) were mixed on a vortex mixer, incubated for 20 minutes at room temperature (25 ∘ C) in the dark after which binding buffer (400 L) was added to each tube. The reaction tube was analyzed by flow cytometry (BD FacsCalibur) within 1 hour (each sample was mixed on a vortex mixer before reading).
Statistical
Analysis. The data were analyzed using the software program GraphPad Prism (version 5). Results are reported as mean ± SEM of 3 experiments. Data were subjected to Shapiro-Wilk normality testing and found to have a skewed distribution. Subsequently nonparametric tests were performed (Kruskal-Wallis test followed by Mann-Whitney test). A value less than 0.05 was considered significant. Figures 1(a) and 1(b)). There was minimal cell damage in the group treated with the 50 M concentration of 6-OHDA (Figures 1(a) and 1(b)).
Preconditioning and OA Treatment following Exposure to 6-OHDA.
There was a neurotoxin effect on cells exposed to a higher concentration of 6-OHDA (150 M) * (Untreated group versus Group 2, < 0.05, Figure 2). There was a neuroprotective effect on cells initially exposed to a lower concentration of 6-OHDA (50 M) * * (Group 2 versus Group 3, < 0.05, Figure 2). This neuroprotection was further enhanced by exposure to OA * * * (Group 3 versus Group 4).
Similarly when using the Annexin V assay to ascertain apoptosis we found that there was a neurotoxic effect on cells exposed to a higher concentration of 6-OHDA (150 M) * (Untreated group versus Group 2, < 0.05, Figure 3). There was a neuroprotective effect on cells initially exposed to a lower concentration of 6-OHDA (50 M) * * (Group 2 versus Groups 3, < 0.05, Figure 2). This neuroprotection was further enhanced by exposure to OA * * * (Group 3 versus Group 4). 6-OHDA has been shown to be an effective neurotoxin to study the pathogenesis of Parkinson's disease both in vitro and in vivo [11,12]. Previous studies have shown that 6-OHDA inhibits mitochondrial respiration, generates intracellular reactive oxygen species, induces abnormal cell cycle reentry, and eventually causes dopaminergic neuron death [11]. However, sublethal levels of 6-OHDA have been shown to provide cellular protection against a subsequent toxic concentration of 6-OHDA in a phenomenon called preconditioning/hormesis [6]. In this study we investigated the combined effect of preconditioning and oleanolic acid (OA) on cultured PC12 cells.
Discussion
In the study we found that treating the cells with 150 M 6-OHDA induced oxidative stress by initiating a sequence of events that reduce cellular activity leading to apoptotic cell death. We showed that preconditioning PC12 cells by exposing them to a lower concentration of 6-OHDA for a short period before subsequent exposure to a higher concentration resulted in greater cell viability and therefore less cell death. This is similar to a study that showed that the use of a sublethal concentration of 6-OHDA prior to exposure to a toxic concentration protected the cells by activating the antioxidant response system [6]. In our study we showed that this neuroprotective effect of preconditioning was enhanced by subsequent exposure of the cells to the triterpenoid oleanolic acid.
Differences in the intensity, duration, and/or frequency of a particular stress stimulus determines whether that stimulus is strong enough to elicit a response of sufficient magnitude to serve as a preconditioning trigger, or whether it is too robust and therefore harmful [13]. In comparable experiments to our study, it was shown that the administration of small and adaptive concentrations of H 2 O 2 to PC12 cells protected these cells against subsequent oxidative damage induced by paraquat and 6-OHDA [14]. The protective function of H 2 O 2 was attributed to the activation of antioxidative enzymes in the cells [14]. It has also been suggested that protection by preconditioning may involve the MAPK and PI3K/Akt pathways [15]. These signalling pathways are important in regulating apoptotic cell death in development and in disease states [15]. These results therefore suggest that preconditioning may be a promising approach to treating reactive oxygen speciesmediated diseases such as Parkinson's disease [14].
Although the mechanism by which oleanolic acid provides protection against oxidative stress is unknown, most studies that look at synthetic analogues of oleanolic acid have shown that it activates the Nrf2/ARE signalling pathway which controls the upregulation of an array of genes involved in antioxidant responses, heat shock chaperone proteins, and mitochondrial protective genes [16][17][18]. Oleanolic acid treatment has been shown to result in the retention of higher glutathione levels in oxidatively stressed cells [19]. Oleanolic acid treatment also restores the levels of glutathione peroxidase, catalase, and superoxide dismutase in oxidatively stressed PC12 cells [19]. It has been shown that oleanolic acid decreases the formation of malondialdehyde which is a marker for the presence of oxidative stress [19]. It has also been shown that the concentrations of interleukin-6 (IL-6) and tumour necrosis factor alpha (TNF-) increase in oxidatively stressed PC12 cells; however, this increase is attenuated by exposure to oleanolic acid [19]. IL-6 and TNFhave also been shown to play a role in the neuroinflammation associated with Parkinson's disease [20]. The neuroprotective effects of oleanolic acid have also been investigated in an animal model for multiple sclerosis suggesting a role for oleanolic acid in other neuroinflammatory diseases [21]. In our study we were able to show that the addition of 5 M oleanolic acid [22] exacerbated the neuroprotective effects of preconditioned dopamine containing cell lines.
Although a majority of studies have looked at the effects of modified and biologically enhanced molecules, in our study we used a parent molecule, oleanolic acid, and our results showed the beneficial effects of this molecule in a cell culture model of oxidative stress.
Conclusion
Exposing stem cells to preconditioning in stem cell based therapy can make these cells more resistant to the toxic environment present in neurodegenerative brains such as in Parkinson's disease. This may be enhanced by treating these Parkinson's Disease 5 stem cells with oleanolic acid which may attenuate or slow down the disease process, therefore sustaining the improved quality of life in Parkinson's disease patients undergoing this treatment. | 3,598 | 2014-11-13T00:00:00.000 | [
"Chemistry",
"Medicine"
] |
GeekMAN: Geek-oriented username Matching Across online Networks
How can we identify malicious hackers participating in different online platforms using their usernames only? Disambiguating users across online platforms (e.g. security forums, GitHub, YouTube) is an essential capability for tracking malicious hackers. Although a hacker could pick arbitrary names on different platforms, they often use the same or similar usernames as this helps them establish an online "brand". We propose GeekMAN, a systematic human-inspired approach to identify similar usernames across online platforms focusing on technogeek platforms. The key novelty consists of the development and integration of three capabilities: (a) decomposing usernames into meaningful chunks, (b) de-obfuscating technical and slang conventions, and (c) considering all the different outcomes of the two previous functions exhaustively when calculating the similarity. We conduct a study using 1.2M usernames from five security forums. Our method outperforms previous methods with a Precision of 81--86%. We see our approach as a fundamental research capability, which we made publicly available on GitHub.
I. INTRODUCTION
How can we identify malicious hackers across different platforms?This is the question that motivates our work.First, hackers with visible online personas often lead major cyber-criminal activities [1].Second, these hackers are active and visible on many online platforms including specialized security forums and popular platforms like GitHub [2].In fact, some of these platforms harbor malicious activities to the point that they are forced to shut down [3].One thing is clear: these hackers create a brand around their online names.As a result, hackers: (a) adopt unusual names and (b) use them fairly consistently with only minor changes across different platforms.
The problem we address here is the following: given two usernames, how can we determine if they are likely to belong to the same user?As our goal is tracking hackers, we focus on technogeek usernames, which we define as usernames with: (a) technical jargon, (b) slang and unconventional use of letters and characters, and (c) multiple parts.These types of usernames seem to be used by malicious hackers, but also by tech-enthusiasts, gamers etc.For example, a username of interest could be w33dgod, which we may want to match with godweed (both are real usernames).We refer to this kind of obfuscation using letters and digits in unusual ways as slangification.Many of their usernames have multiple parts, which we refer to as chunks.Traditional string matching and edit distance techniques have difficulty matching these types of usernames.Here, we impose an additional challenge: we do not use other types of information, such as demographic attributes, context, or social connections, which could help refine the matching accuracy.
There has been relatively little work on the problem as we define it here.In particular, we find that most of the previous works: (a) focus on the popular social media usernames, (b) rely on training data, and (c) use string matching without following a human-like interpretation, such as decomposing the username into meaningful chunks.As we explain later, we compare our approach against a set of state-of-the-art username similarity algorithms [4], [5].We discuss previous works in Section V.
As our key contribution, we propose GeekMAN, a systematic approach for linking technogeek users across platforms.Our approach is inspired by human cognition: it attempts to emulate how a human will try to disambiguate this type of username, such as IAmBlackHacker and B14CKH4K3R.
The key novelty of our work consists of the development and integration of three capabilities: (a) deslangification, which de-obfuscates slang and geeky naming conventions, (b) chunkification, which decomposes usernames into meaningful chunks, which leads to one or more lists of chunks, and (c) comparison, which considers all the lists of chunks to calculate the similarity between two given usernames.We deploy our approach on 1.2M usernames from five popular hacker-rich security forum users.The key results are summarized below.a. Technogeek usernames use slang and chunks extensively.We find that 17-37% of the technogeek platform users use multiple digits in their usernames.Quantifying the prevalence of chunks, we notice that 60-70% of the usernames could be decomposed into at least four chunks.
b. GeekMAN outperforms prior approaches.Focusing on technogeek usernames, we find that our approach identifies matches with 86.0%Precision and 72.6% Relative F1-score (which we define later).By contrast, two prior approaches exhibit 76.0% and 47.4% Precision with 42.9% and 57.1% Relative F1-score, respectively.
Our approach is a fundamental building block for user disambiguation with an emphasis on technogeek usernames and we made our code available on GitHub [6].
II. BACKGROUND AND DATA
In this section, we provide some background, explain the motivation of our approach, and discuss the dataset in detail.
A. Malicious hackers use technogeek usernames.Hackers and other malicious users often participate in various public platforms, including specialized discussion forums [7], technical forums, and software platforms like GitHub [8].Their main goals seem to be: (a) establishing an online brand [9] and (b) boasting of their accomplishments [2], [10].As a noteworthy example, an FBI most wanted cybercriminal, having alias ha0r3n, was found to have a GitHub profile named wo4haoren.In 2020, FBI listed another most wanted hacker named Behzad Mohammadzadeh, with alias Mrb3hz4d, who was charged for defacing a number of websites [11].
C. Quantifying technogeek usernames.We study the usernames from our technogeek forums which are likely to be visited by hackers.We plot the distribution of the usernames having multiple digits in between letters of their usernames in Figure 2(a).We see that around 17-37% of the usernames show the behavior.For example: n1nj4sec and z3r0d4y contain two and three digits in between letters, instead of ninjasec and Validation and groundtruth.We describe our validation approach in Section IV.
III. PROPOSED METHOD
The goal of our method is to determine the similarity of two usernames deriving inspiration from a human interpretation.The key idea is to de-obfuscate the usernames (if possible), decompose them into one or more lists of chunks, and compare all possible lists of chunks to calculate the similarity score leveraging three main modules: (a) Deslangification, (b) Chunkification, and (c) Comparison, which we discuss below.We provide a conceptual overview in Figure 1 and demonstrate its operation for usernames z3r0c00l and COOL zERO in Figure 3.
A. Maximizing the likelihood of a match.The task of reverse engineering the naming habits of users is a challenging problem.To have the broadest possible coverage, we can consider the following approaches: 1) we do deslangification and then chunkification 2) we do chunkification and then deslangification Note that the best results are derived from the first sequence in our study as we explain in Section IV.
B. Deslangification module.In this module, we try to deobfuscate the username to identify any available slangified chunk.First, we create a function, slangCharM ap(), that maps a potential slang character to a letter following the common technogeek conventions based on our observations and commonly reported usage [18].Then, we search for potential slang characters in the username, and if any are found, we replace them with the corresponding letter character found from slangCharM ap().As an example, username th3m4lw4r3 can turn into themalware and username z3r0s4mur41 into zerosamurai.We notice that digit 4 can be translated as both a and r.In addition, digit 4 does not necessarily have to transform into a letter.Thus, we obtain a plethora of potential deslangified versions from the username.
C. Chunkification module.We perform chunkification on a given username based on different criteria and create a Bag which is a set of lists of chunks.We consider four different chunkification criteria emulating four different aspects of username naming behavior.
a. Symbol-based chunkification: Symbols are used as delimiters to chunkify usernames.These symbols are the underscore, the space, and the dot.We create L, a list of chunks using the symbols present in the username.An example is COOL zERO which is split into {COOL, zERO}.
b. Digit-based chunkification: Digits in a username can be either something telling about the user or a user may also use random digits simply to make her username comply with certain platform requirements.We search for numbers in the username, and if any are found, we chunkify the username accordingly.An example of digit based chunkification can be: sniper7kills will be split into {sniper, 7, kills}.
c. Capitalization-based chunkification: Capital letters can also provide cues for chunks; e.g. the username ObscureCoder is reasonable to assume that the user has combined Obscure and Coder.In a more challenging example, T0x1cV3n0m can be split into {Toxic, Venom}.Note that here, we leverage the commonly-used English words, but if users use obscure geographical/regional names, things can become more complex.
d. Token-based chunkification: We also propose an approach to detect chunks even in the absence of cues.As an example, let us consider the following username thegreathacker which seems to correspond to L = {the, great, hacker}.There are many different ways to identify words within a string [19].We follow the approach: we start from the end of the string and consider letters until we find a word that exists in our T okenDict, a dictionary of English word/name phrases.In our example that word would be hacker.We then create two parallel approaches: (a) we repeat the same process on the string, having removed hacker, and (b) we continue to see if the word hacker is part of a longer word.At the end of this process, we have several lists of chunks.
D. Comparison module.Given two Bags with lists of chunks from usernames u 1 and u 2 , we calculate the highest similarity score among all possible comparisons between the lists of chunks.Given usernames u 1 and u 2 , earlier modules produce two sets of lists of chunks, Bag(u 1 ) and Bag(u 2 ), respectively with N and M , the number of lists in each Bag.a. Similarity of chunks.There are many string matching algorithms, and our approach could use any of them.We select the Levenshtein method [20] which is widely used in text processing [21].We use the term ChunkSim(c 1 ,c 2 ) to refer to this function.
b. Similarity of lists of chunks.Comparing lists of chunks is slightly more complex, as we need to identify the most likely match between the chunks of the two lists L 1 , L 2 (note that for clarity we drop the superscipt in the notation).We use the term ListSim(L 1 ,L 2 ) to refer to this function.There are many different algorithms that we can use to compare similarity of unordered lists that vary in efficiency and computational Fig. 3: An example of how the modules of our approach will handle a pair of usernames: z3r0c00l and COOL zERO.complexity.Our approach could use any such function.In our current implementation, we use the Monge-Elkan method [22] which is found to perform well consistently across many scenarios and data types [23] with a polynomial computational complexity of O(|L 1 | |L 2 |).Intuitively, the method iterates through the elements of the first list and identifies the highest similarity with any element in the second list.The final value is the average chunk similarity.Formally, the method calculates the similarity for the two lists as follows: where 1 ≤ j ≤ |L 2 | and L 1 [i] and L 2 [j] are the i-th and j-th chunks of each list, respectively.This method hides a subtle point.The function is sensitive to the order of the arguments: ListSim(L 1 , L 2 ) ̸ = ListSim(L 2 , L 1 ).Therefore, one could consider three approaches.We can consider the similarity as: (a) ListSim(L 1 , L 2 ), (b) ListSim(L 2 , L 1 ), or (c) considering both "directions", ListSim(L 1 , L 2 )+ListSim(L 2 , L 1 ).In the results, we use one direction with the longest list as the first argument, namely, assuming c. Similarity of Bags of lists.We use the term Similarity Score SimScore (u 1 ,u 2 ) to define the similarity between two usernames, u 1 and u 2 .We do the comparison exhaustively: each list in the Bag of one username is compared with each list of the other username using ListSim() from above.The Similarity Score is the maximum list similarity over all list pairs L n 1 , L m 2 as follows:
IV. EXPERIMENTS AND EVALUATION
Here, we present our evaluation study and discuss our ground truth, comparison metrics, and the baseline algorithms.At a high level, our experiment aims to match a set of users from a source forum with a set of users from a target forum.
A. Baseline.We evaluate our approach by comparing it against state of the art methods.
a. : This method extracts content features (e.g.2-grams), and pattern features (e.g.letter-digit, date) from the usernames.Then, a vector based modeling is used to compute the cosine similarity of the vectors of features from usernames.
b. UISN-UD [5]: This method exploits the information redundancies that can be available in a pair of usernames used by the same user.It computes features from different string comparison metrics, such as common substring, common subsequence, and edit distance, which are used in their classifier.
The major difference of GeekMAN with them is deslangification and chunkification of the technogeek usernames, since such properties exist in technogeek usernames.
B. Experimental setup.In this experiment, we find matches between usernames from a source forum within a target forum.Specifically, we conduct two such experiments: (a) Garage4Hackers (GH) as source and Offensive Community (OC) as target, and (b) Garage4Hackers (GH) as source and RaidForums (RF) as target.Given our focus on technogeek names, we selected Garage4Hackers as source since it exhibits a higher level of slangified and chunkified usernames, based on our analysis of these forums (see Section II).On the other hand, the target forums were picked randomly.To focus on technogeek usernames, we selected usernames from our source forum with either 3 or more chunks or slangification.This way we obtain the D All dataset, which is roughly 10% of the Garage4Hackers forum (on purpose small to enable validation as we discuss below).We also divide D All into D Multi (3 or more chunks) and D Slang (slang conventions) datasets, in an effort to investigate the interplay between multi-chunk and slangified usernames such as {thegreathacker, T0x1cV3n0m}.We use GeekMAN and baselines to match each user in D All with the most likely matching user in the target forums.
C. Validation and ground truth.Since there is no available groundtruth, we need to establish our own.We resort to sampling and manual verification.The algorithms find the best matching user in target forum for the users in D All.We recruit four domain-expert computer scientists to manually label each of these possible matches as a match or mismatch.To increase the reliability of our ground truth, we ask the annotators to match a username pair only if they were certain they belong to the same user based on usernames only.We only consider a username pair as a verified match if at least three annotators agree it is.Here we focus primarily on verified matches which we will use as true positives.We assess the level of agreement of the annotators using Fleiss Kappa coefficient for labeling the ground truth matchings in Table II and Table III.The Kappa score we get is above 0.5, which is considered as a moderate agreement (0.41-0.60) according to standard practice [24].
D. The evaluation metrics.To compare our algorithms, we consider Precision, Relative Recall, and Relative F1-score for each algorithm in the experiment.The "Relative" term in the metrics represents our effort to approximate the true Recall in the absence of established ground-truth.Our goal is to detect an algorithm that will opt for high Precision at the cost of Recall in the context of the specific comparison.We approximate the number of real matches, which we do not know in our dataset, by providing a lower bound as follows.We take the union of all true positives (validated by our annotators) of all the algorithms in the test.Formally, we define I algo to be the number of matches identified by algorithm algo.We define T P algo to be the true positives for that algorithm as verified by the annotators.We then calculate the union of the true positives T P union as the union of all the true positives of all the algorithms: T P union = a∈Algos T P a .T P union can be seen as a lower bound on the true matches that a perfect algorithm would have identified.E. Choosing the Similarity Score Threshold.We want to identify an appropriate Similarity Score Threshold (SimT) value, which is a critical parameter for our approach.First, we apply GeekMAN and baseline algorithms on D All (D Multi + D Slang) to find matching pairs between source forum and target forum.Second, the matchings are labeled by the annotators.Depending on their annotations, we calculate the defined evaluation metrics for our algorithm at different SimT values ranging between 0.1-1.0 at 0.01 intervals.Finally, we plot the Precision, Rel-Recall, and Rel-F1-score curves at those values.We find that the curves meet at SimT value of 0.64.We notice that after SimT = 0.68, Rel-F1-score decreases, while Precision increases even after 0.70.We opt to prioritize Precision, and we choose a SimT of 0.7 for our study.
F. Evaluation results.We show the performance evaluation for GeekMAN and the two baseline algorithms.What about bonafide technogeek names?We want to further understand how the algorithms perform when the usernames use slang conventions.As expected, our approach does even better here with a difference in the Relative F1-score close to 20%.Focusing on our D Slang dataset, we show the results in Table III.GeekMAN surpasses the baselines in all metrics with a Precision of 81.6% and Relative F1-score of 75.3%, while baseline algorithms achieve a maximum 77.7% Precision and 54.9% Relative F1-score.
V. RELATED WORK Most previous works differ from our approach in that: (a) they are supervised approaches that need training data, (b) they are not focusing on complex-technogeek usernames, and (c) they rely on information beyond the username, such as user profile attributes, content, and social connectivity.By contrast, our approach is designed to: (a) handle technogeek names and (b) rely only on usernames.
a. Username-based matching.We already discussed the two methods that we use in our performance analysis; Wang-16 [4] and UISN-UD [5].Earlier, Perito et al. [25] estimated the uniqueness of a username using a Markov-Chain based language model to compute username similarity.Other efforts by Zafarani et al. [26] exploited the redundant information available in username patterns exposed by user behaviors.
b. Using user profiles.An earlier study by Vosecky et al. [27] introduced a vector based supervised algorithm which utilizes user profile features with differing weights.In some other efforts, Goga et al. [28] and Zhang et al. [29] proposed probabilistic classifiers which link user identities based on profile attributes like description, location, profile image etc including username.Also, a projection based modeling proposed by Mu et al. [30] incorporated the profile features to link users on different platforms.
VI. CONCLUSION
We propose GeekMAN, a systematic approach to identify similar technogeek usernames across online platforms.The key novelty consists of the development and integration of three capabilities: (a) decomposing usernames into meaningful chunks, (b) de-obfuscating technical, and slang conventions, and (c) considering all the different outcomes of the two previous functions exhaustively when calculating the similarity.These three capabilities attempt to emulate the way a human will attempt to "understand" a username.Overall, we see our approach as a fundamental building block for linking technogeek users across different platforms.
Fig. 1 :
Fig. 1: GeekMAN calculates the similarity score for a pair of usernames focusing on technogeek users.Our approach tries to emulate human interpretation by combining the: (a) Chunkification, (b) Deslangification, and (c) Comparison modules.
(a) Digit usage in usernames (b) Popular slangified words
Fig. 2 :
Fig. 2: (a) Distribution of percentage of population using digit in between letters multiple times.(b) The word-cloud that shows popular slangified words used as usernames or as parts of usernames.zeroday.We illustrate the most commonly slangified English words used by technogeek forum users via a word cloud shown in Figure2(b).In addition, we find that approximately 60-70% of the users on each of the online platforms contain 4 or more chunks in their usernames.Validation and groundtruth.We describe our validation approach in Section IV.
= 2 *
Precision * Rel-Recall Precision + Rel-Recall As the names indicate, the Relative Recall and Relative F1-score have only relative meaning within the scope of the comparison with the specific set of algorithms.
TABLE I :
Summary of the dataset.
TABLE II :
Performance comparison analysis for the algorithms in D All using SimT = 0.7 for GeekMAN with Kappa Score K=0.53.In TableII, we show the results for the D All dataset.GeekMAN achieves a Precision of 86.0% and Relative F1-score of 72.6%.By contrast, the baseline algorithms achieve 76.0% and 47.4% Precision with 42.9% and 57.1% Relative F1-score, respectively.It is also worth noting that the baseline algorithms only do well either in Precision or in Relative Recall.For example, Wang-16 offers high Precision (76.0%) but at the cost of the Rel-Recall (29.9%).The opposite is true for UISN-UD.
TABLE III :
Performance comparison analysis for the algorithms in D Slang using SimT = 0.7 for GeekMAN with Kappa Score K=0.54. | 4,853.8 | 2023-11-06T00:00:00.000 | [
"Computer Science",
"Sociology"
] |
Gastric cancer biomarker analysis in patients treated with different adjuvant chemotherapy regimens within SAMIT, a phase III randomized controlled trial
Biomarkers for selecting gastric cancer (GC) patients likely to benefit from sequential paclitaxel treatment followed by fluorinated-pyrimidine-based adjuvant chemotherapy (sequential paclitaxel) were investigated using tissue samples of patients recruited into SAMIT, a phase III randomized controlled trial. Total RNA was extracted from 556 GC resection samples. The expression of 105 genes was quantified using real-time PCR. Genes predicting the benefit of sequential paclitaxel on overall survival, disease-free survival, and cumulative incidence of relapse were identified based on the ranking of p-values associated with the interaction between the biomarker and sequential paclitaxel or monotherapy groups. Low VSNL1 and CD44 expression predicted the benefit of sequential paclitaxel treatment for all three endpoints. Patients with combined low expression of both genes benefitted most from sequential paclitaxel therapy (hazard ratio = 0.48 [95% confidence interval, 0.30–0.78]; p < 0.01; interaction p-value < 0.01). This is the first study to identify VSNL1 and CD44 RNA expression levels as biomarkers for selecting GC patients that are likely to benefit from sequential paclitaxel treatment followed by fluorinated-pyrimidine-based adjuvant chemotherapy. Our findings may facilitate clinical trials on biomarker-oriented postoperative adjuvant chemotherapy for patients with locally advanced GC.
Predictive biomarkers for selecting patients likely to benefit from sequential paclitaxel therapy. We conducted multivariable Cox regression analysis to assess the potential relationships between gene expression level and overall survival (OS), disease-free survival (DFS), or cumulative incidence of relapse after sequential paclitaxel therapy; the genes were ranked based on the interaction-related p-values. Visinin-like 1 (VSNL1) and CD44 were the only genes with mRNA expression levels that were statistically significant as predictive biomarkers of sequential paclitaxel treatment for all three endpoints (Supplementary Table S2, Online Resource 1).
A total of 191 (36.2%) patients showed combined low expression of both genes, which was associated with the greatest benefit from sequential paclitaxel treatment compared to fluorinated-pyrimidine monotherapy ( Table 2). Patients with low levels of expression of VSNL1, CD44v, or both, had significantly longer OS and DFS after sequential paclitaxel treatment than after monotherapy (Fig. 2a,b). However, no such effect was observed in the cumulative incidence of relapse (Fig. 2c).
Patient stratification based on pTNM stage showed that OS improvement in response to sequential paclitaxel treatment in patients with low VSNL1 and/or CD44v expression was the greatest in patients with stage IIIB/ IIIC GC (Fig. 3).
Internal validation. The overall performances of the different statistical models, including the interactions between VSNL1 mRNA expression and the treatment group, as well as the clinical and pathological factors for OS prediction with C statistics using the bootstrap 0.632 + estimator (0.7111) and apparent estimator (0.7266), were evaluated. The accuracy of OS prediction based on CD44 and VSNL1 mRNA expression levels was comparable when the apparent estimator was used (0.7252), whereas it was not sufficiently accurate when the bootstrap 0.632 + estimator was used (0.7083) ( www.nature.com/scientificreports/ with high expression. In contrast, there was no significant relationship between CD44 mRNA expression and any clinicopathological factors (Supplementary Table S4, Online Resource 1).
Relationship between mRNA expression levels and protein expression levels of VSNL1 and
CD44v. Protein expression levels of VSNL1 and CD44 were investigated in a subgroup of patients based on immunohistochemistry (IHC) analyses, and patients were dichotomized into low and high expression groups, based on an immune response scoring system. For CD44v IHC, since there are eight variant isoforms (CD44v1-8) created by mRNA splice variants, we analyzed the relationship between CD44v1-8 and CD44 using data from NanoString analysis and found that all CD44v mRNA expression was strongly correlated with that of CD44 mRNA ( Supplementary Fig. S1, Online Resource 1). Therefore, CD44 expression in IHC was examined as a representative of CD44 and CD44v1-8. The relationship between VSNL1 and CD44 protein expression levels and mRNA expression levels by IHC analysis showed that mRNA expression levels were significantly higher in the high protein-expression group than in the low-protein expression group, based on the Mann-Whitney U test ( Fig. 4; P < 0.0001, P < 0.0001, respectively). In addition, the concordance between high/low mRNA expression levels and high/low protein expression levels were 79.8% and 81.9% for VSNL1 and CD44, respectively (Table 3).
Furthermore, patients were divided into low expression groups of both VSNL1 and CD44 proteins (n = 53) and high expression groups of either VSNL1 or CD44 protein (n = 41), according to the VSNL1 and CD44 protein expression results in the IHC analyses. In each group, the OS of sequential paclitaxel and fluoropyrimidine monotherapy was evaluated using a log-rank test. The results showed that the OS of sequential paclitaxel was significantly better than that of fluoropyrimidine monotherapy in patients with low levels of expression of both VSNL1 and CD44. Conversely, no difference was observed in the high expression groups of either VSNL1 or CD44 (Fig. 4), which was consistent with the mRNA results.
Examination of the usefulness of the algorithm with the four biomarkers (GZMB, WARS, SFRP4, and CDX1) validated in the CLASSIC study sample to stratify the risk of recurrence and select patients who would benefit from adjuvant chemotherapy with paclitaxel followed by sequential pyrimidine fluoride using the sample from this biomarker study. In the sample of the current biomarker study (n = 527), the algorithm based on GZMB, WARS, and SFRP4 mRNA expression levels did not significantly stratify the risk of recurrence ( Supplementary Fig. 2a,b, Online Resource 1). Subsequently, when the patients were separated into "chemotherapy benefit group" and "chemotherapy no-benefit group" www.nature.com/scientificreports/ according to the algorithm based on GZMB, WARS, and CDX1 mRNA expression levels, in the chemotherapy no-benefit group, the survival rates of patients in the chemotherapy-responsive group were the same regardless of the type of adjuvant treatment. However, in the chemotherapy-naive group, characterized by high immunity (GZMB + , WARS +) and low epitheliotropism (CDX1-), patients treated with sequential paclitaxel had significantly longer survival ( Supplementary Fig. 2c,d, Online Resource 1).
Discussion
The present study explored biomarkers for identifying gastric cancer (GC) patients that are likely to benefit from sequential paclitaxel treatment followed by fluorinated-pyrimidine-based adjuvant chemotherapy at the mRNA level using clinical samples and data from GC patients treated in a randomized controlled phase III trial of adjuvant chemotherapy, SAMIT 15 . Although previous studies using clinical samples from the ACTS-GC have revealed several novel molecular GC biomarkers, significant interactions between S-1 treatment and RNA expression levels have not been observed [8][9][10][11] . In a study of clinical samples from the CLASSIC trial, an algorithm based on the RNA expression levels of three genes was able to predict patients who were likely to benefit from adjuvant chemotherapy with capecitabine plus oxaliplatin 12 .
Although several candidate biomarkers of resistance or sensitivity to paclitaxel, such as Tau, COL4A3BP, UGCG, MCL1, FBW7, SLC31A2, SLC35A5, SLC43A1, SLC41A2, and CCNG1 have previously been suggested [16][17][18][19][20][21][22][23] , none have been validated in a second independent series. Hence, there remains a clinical need to validate the proposed biomarkers and/or identify new biomarkers that can be used in routine clinical practice to identify patients likely to benefit from paclitaxel therapy 24 . Moreover, associations between the expression of several genes or proteins and the benefits of paclitaxel, such as CCND1, ABCB1, BCL-2, and SPARC in different tumor types, have been reported in multiple studies [25][26][27][28][29] . For example, CCND1 overexpression promotes paclitaxel-induced apoptosis in breast cancer 26 . BCL-2 family members such as BCL-2, BCl-xL, BAX, and ABCB1, have been reported to be involved in paclitaxel resistance in esophageal cancer 27 . In addition, SPARC expression in tumor stromal cells is a potential negative predictor of paclitaxel treatment in patients with lung cancer 28,29 . However, the expression levels of all previously suggested biomarkers were not significantly associated with patient outcomes in the present study. This may be related to the cancer type, sample size, case mix, ethnic differences, or methodological differences.
In the present study, we identified the expression levels of VSNL1 and/or CD44v as potential novel predictive biomarkers to identify patients who could benefit from postoperative adjuvant chemotherapy with sequential Table 2. Effects of sequential paclitaxel followed by UFT or S-1 on overall survival, disease-free survival, and cumulative incidence of relapse, based on gene expression levels. HR hazard ratio, CI confidence interval, UFT tegafur/uracil. www.nature.com/scientificreports/ www.nature.com/scientificreports/ paclitaxel followed by a fluorinated-pyrimidine after curative gastrectomy. Although the combined low expression of the two biomarkers predicted the greatest benefits from adjuvant chemotherapy with sequential paclitaxel and a fluorinated-pyrimidine, no clear interaction between VSNL1 and CD44v has been reported to date. The VSNL1 gene encodes visinin-like protein 1 (VILIP-1), a member of the neuronal calcium sensor protein family that regulates calcium-dependent cells and signaling adenylate cyclase 30 . In cancers, VSNL1 is overexpressed in various cancers such as GC, colorectal cancer, non-small cell lung cancer, and squamous cell carcinoma [31][32][33][34] , and inhibits cell proliferation, adhesion, and infiltration. In addition, it has been reported to function as a tumor suppressor gene 33,34 . Deficiency or reduced expression of VSNL1 by knockdown in vitro has been reported to increase the motility of cancer cells, suggesting a potential tumor suppressor function of the protein. VSNL1 regulates SNAIL1, which is a transcription factor with cAMP-dependent function, and SNAIL1 expression prevents epithelial-mesenchymal transition in cancer cells 34 . In recent years, it has been reported that high expression of VSNL1 promotes the proliferation and migration of GC cells by regulating the expression of P2X3 and P2Y2 receptors, and that high expression of VSNL1 in GC tissue may be a good clinical indicator for poor prognosis in GC patients 35 . However, in the present study, VSNL1 expression in GC tissue was not a prognostic factor. Regarding the association with chemotherapy, VSNL-1 has been reported to be involved in epithelial-mesenchymal transition (EMT) of cancer cells by regulating the transcription factor Snail1 in a cAMPdependent manner 34 . Therefore, high expression of VSLN1 suppresses EMT by regulating Snail1, which may weaken chemoresistance to anticancer agents, including paclitaxel, and increase chemosensitivity.
The CD44 gene encodes the CD44 protein, an adhesion molecule that uses hyaluronan as a ligand, and there are eight isoforms (CD44v1-8) that are created by mRNA splice variants. In the present study, we initially investigated only CD44v1 mRNA expression and identified it as a biomarker. Additional analysis of the relationship www.nature.com/scientificreports/ Figure 3. Forest plot of the study results. After patient stratification based on the pTNM stage, the survival benefit from sequential paclitaxel treatment was greater among patients with stage IIIB gastric cancer with a low expression of either gene or both. The association between the low expression levels of VSNL1 and CD44 and potential benefits from sequential paclitaxel treatment were significant for disease-free survival and cumulative incidence of relapse. www.nature.com/scientificreports/ between CD44v1-8 and CD44 using data from NanoString analysis showed that the expression of all CD44v isoforms was strongly correlated with CD44 expression, indicating that CD44 and CD44v1-8 mRNA expression may be biomarkers in the present study. CD44 protein is overexpressed on the cell surface of cancer stem cells in GC tissues, and binding of hyaluronan to CD44 has been reported to affect various downstream signaling pathways, leading to cancer invasion, metastasis, and resistance to chemoradiotherapy [36][37][38][39][40][41][42] . As for paclitaxel resistance, ovarian cancer has been reported to exhibit higher levels of CD44 expression than paclitaxel-sensitive cancer cells 43 .
To the best of our knowledge, this is the first and most comprehensive study to identify biomarkers for the prediction of patients with survival benefit from sequential paclitaxel followed by fluorinated-pyrimidine adjuvant chemotherapy in GC patients. However, the present study has several limitations. First, although we demonstrated that the study cohort was representative of the entire SAMIT patient cohort, with respect to clinicopathological characteristics, including survival, we were only able to retrieve material from approximately a third of the original SAMIT population. Furthermore, the number of samples in which biomarkers identified at the mRNA level were validated at the protein level was limited. Second, we only analyzed RNA samples from a single tissue block, not whole tumors. Therefore, the intertumoral heterogeneity may not be sufficiently assessed. Third, SAMIT recruited patients with serosal invasion (e.g., cT4 tumors), a major risk for peritoneal recurrence, and randomized them to receive fluorinated pyrimidine monotherapy or sequential paclitaxel, which was hypothesized to reduce postoperative recurrence, such as peritoneal recurrence, and improve prognosis. However, it should be noted that there was a small number of patients with pT4 tumors in the SAMIT.
In conclusion, the biomarkers for selecting patients with GC who would most likely benefit from adjuvant chemotherapy with sequential paclitaxel and fluorinated-pyrimidine treatment after curative gastrectomy were identified. Although the validation of our findings in a second independent series followed by a prospective trial is necessary, personalized adjuvant chemotherapy using these biomarkers may further improve treatment outcomes in patients with locally advanced GC.
Methods
Patients and sample collection. This biomarker study was conducted using GC specimens and clinicopathological data from patients who participated in a phase 3 randomized comparative study (SAMIT) performed using a two × two factorial design of postoperative adjuvant chemotherapy after D2 gastrectomy. SAMIT was performed in 230 hospitals in Japan in patients with GC. Patients aged 20-80 years with an ECOG performance score of 0-1 who were diagnosed with cT4a or T4b GC by preoperative diagnosis were enrolled. The patients were randomly assigned to one of the four postoperative adjuvant chemotherapy groups (tegafur and uracil [UFT] monotherapy, S-1 monotherapy, three courses of paclitaxel followed by UFT, or three courses of paclitaxel followed by S-1) after undergoing D2 gastrectomy.
The completion rate of the trial was 60% in the UFT-only group, 62% in the S-1-only group, 68% in the UFTtreated group after paclitaxel treatment, and 70% in the S-1-treated group after paclitaxel 15 .
The present study was approved by the Institutional Review Board (IRB) of Kanagawa Cancer Center, the central institute for this study (approval number: [26][27][28][29][30][31][32][33][34][35][36][37][38][39][40][41][42], as well as the IRBs of all institutions that participated in the present study. Representative blocks from formalin-fixed paraffin-embedded (FFPE) gastrectomy specimens were collected retrospectively from participating institutions according to the following inclusion criteria: (1) patients were participants in the SAMIT, (2) FFPE blocks or unstained cut sections were available, and (3) the translational study protocol was approved by the IRB. Samples were collected from the data center of the Kanagawa Cancer Center and shipped to Yokohama City University for RNA extraction and analysis. Sections (each 10-μm thick) were cut from the FFPE blocks and stored at 4 °C until microdissection.
RNA extraction and complementary DNA (cDNA) synthesis. Hematoxylin and eosin-stained slides
were reviewed, and the area with the highest tumor content was manually outlined. After manual microdissection, total RNA was isolated using NucleoSpin FFPE RNA XS (Macherey-Nagel GMBH & Co. KG, Düren, Germany). For RNA quality control, the OD 260 /OD 280 ratio was measured using a NanoDrop 2000 (Thermo Fisher Scientific Inc., MA, USA; RRID:SCR_018042). The total RNA integrity number was measured using an Agilent 2100 Bioanalyzer (Agilent Technologies Inc., Waldbronn, Germany, RRID:SCR_018043). To confirm that the total RNA samples were not contaminated with DNA, RNA18S1 expression was evaluated by quantitative realtime PCR (qRT-PCR) in each sample before cDNA preparation. cDNA was prepared from samples that passed all the quality control checks. cDNA was synthesized from 0.4 µg of total RNA using an iScript cDNA Synthesis Table 3. Relationship between VSNL1 mRNA expression and VSNL1 protein expression, and for the relationship between CD44 mRNA expression and CD44 protein expression. Gene selection. The RNA expression levels of 105 genes were quantified in the present study (Table 4).
Fifty-eight genes were selected from a previous DNA microarray study 44 . An additional 47 genes were selected from 14 categories previously linked to tumor progression or survival in GC patients, along with 14 genes that did not overlap with the 58 genes mentioned above. The 14 categories are described in Table 4 (categories 1-14).
The 105 selected genes included 63 genes analyzed in an exploratory biomarker study of ACTS-GC participants 10 . Among them, 57 genes have been previously reported as biomarkers of paclitaxel resistance or sensitivity. The functional annotation of each gene carried out using DAVID 6.7 (https:// david-d. ncifc rf. gov/), is outlined in Supplementary Table S6 (Online Resource 1).
Defining the predictive value of the biomarkers. The mRNA expression level of each gene was classified as low versus high using the median mRNA expression level as a cut-off point, as described previously 44 . If the mRNA expression level of a particular gene was below 1.0 × 10 -8 ng/μL, the expression level was set to '0.00' . The value of a biomarker in predicting the benefit of sequential paclitaxel treatment based on the OS, Table 4. Genes investigated (n = 105). TYMS DPYD UMPS UPP1 TYMP GGH DUT MTHFR RRM1 RRM2 FPGS DHFR TOP1 ERCC1 TOP2A www.nature.com/scientificreports/ DFS, and cumulative incidence of relapse was determined by examining the p-values of the interactions between the dichotomized gene expression level and the treatment group (sequential paclitaxel versus monotherapy) after adjusting for clinical and pathological factors using Cox regression or Fine-Gray models 45,46 . The genes were ranked according to treatment interaction-related p-values. Values were considered significant at p < 0.05. Additionally, we combined the expression levels of selected genes to identify sensitive and non-sensitive patient subsets.
Genes encoding proteins related to the metabolism or activation of anticancer agents
Immunohistochemistry (IHC) of VSLN1 and CD44. IHC .3] containing 1% BSA, 50% glycerol, and 0.02% sodium azide) were used. Preliminary testing was performed using positive controls to determine the optimal dilution of each antibody. Peroxidase-labeled polymers (EnVision + , Rabbit, DAKO, Glostrup, Denmark) and diaminobenzidine were used for detection. All sections were counterstained with hematoxylin. Immunohistochemical assessments were performed based on the Immune Response Scoring system. Intensity scores were used to classify the strongest positive immunostaining tumor cells as absent (score 0), weak (score 1), moderate (score 2), and strong (score 3). Typical VSNL1 and CD44 intensity score classifications are shown in Supplementary figures S3a, b. Proportion scores were used to classify the proportions of positive immunostained tumor cells into four grades (0, 1, 2, 3, 4, and 5) based on a marker-specific approach (Supplementary Fig. S4). The sum of the scores for the intensity and proportion scores ranges from 0 to 8. A score of 0-4 was defined as negative/low protein expression, and a score of 5-8 was defined as high protein expression, in both VSNL1 and CD44.
Examination of the relationship between VSNL1 and CD44 mRNA expression and those protein expression. We investigated each VSNL1 and CD44 mRNA expression levels in each negative/low protein expression group or high protein expression group. In addition, we investigated the concordance between mRNA expression levels split into two by the median used in the present study and the protein expression levels in immunohistochemical analyses. In addition, patients were divided into low expression groups for both VSNL1 and CD44 and high expression groups of either VSNL1 or CD44, according to VSNL1 and CD44 protein expression in IHC. In each group, the OS of sequential paclitaxel and fluoropyrimidine monotherapy was evaluated.
Internal validation. We adopted an internal validation strategy, as proposed by Wahl et al. 47 , to address the potential overestimation of the standard error owing to multiple imputations and optimism in the predictive performance. We used Harrell's C statistics to analyze the predictive performance of the survival data and addressed the optimistic bias by Harrell's C statistics using the bootstrap 0.632 + method with 20 bootstrap samples from the original dataset with replacement, followed by multiple imputations.
Statistical analysis. The pre-defined statistical analysis plan for this study has been reported previously 48 .
The primary and secondary endpoints were the OS and DFS, respectively. The OS and DFS curves were constructed using the Kaplan-Meier method, and the cumulative incidence curves of relapse were constructed using the Aalen-Johansen method 49 to compare sequential paclitaxel and monotherapy, considering the expression levels of the selected genes either individually or in combination. The adjusted hazard ratios (HRs), 95% confidence intervals (CIs), and p-values of the major treatment effects and interactions were estimated for the entire patient population and subgroups according to the Union for International Cancer Control TNM 8th ed stage 2 . We used multiple imputations to handle missing clinical and pathological factor data and generated 20 multiply imputed datasets for parameter estimates. The reported p-values were two-tailed, and the major effects and interactions were considered statistically significant at p < 0.05. Statistical analyses were performed using SAS version 9.4 (SAS INSTITUTE, Inc., Cary, NC, USA).
Ethical statement. All procedures followed were in accordance with the ethical standards of the responsible committee on human experimentation (institutional and national) and with the Helsinki Declaration of 1964 and later versions. Informed consent or a substitute for it was obtained from all patients for inclusion in the study. | 4,862.2 | 2022-05-20T00:00:00.000 | [
"Biology"
] |
Cortical Inactivation Does Not Block Response Enhancement in the Superior Colliculus
Repetitive visual stimulation is successfully used in a study on the visual evoked potential (VEP) plasticity in the visual system in mammals. Practicing visual tasks or repeated exposure to sensory stimuli can induce neuronal network changes in the cortical circuits and improve the perception of these stimuli. However, little is known about the effect of visual training at the subcortical level. In the present study, we extend the knowledge showing positive results of this training in the rat’s Superior colliculus (SC). In electrophysiological experiments, we showed that a single training session lasting several hours induces a response enhancement both in the primary visual cortex (V1) and in the SC. Further, we tested if collicular responses will be enhanced without V1 input. For this reason, we inactivated the V1 by applying xylocaine solution onto the cortical surface during visual training. Our results revealed that SC’s response enhancement was present even without V1 inputs and showed no difference in amplitude comparing to VEPs enhancement while the V1 was active. These data suggest that the visual system plasticity and facilitation can develop independently but simultaneously in different parts of the visual system.
INTRODUCTION
Repetitive visual training is a rapidly developing tool to modulate neuronal plasticity in the visual system for both research and clinical application (Sabel, 2008). Appropriate visual training protocols can strengthen residual vision in patients with visual impairments such as glaucoma (Sabel and Gudlin, 2014), optic nerve neuropathy (Mueller et al., 2007), or hemianopia (Gall et al., 2008;Poggel et al., 2008;Sabel, 2008).
Simultaneously repetitive visual training is successfully used in studies on the visually evoked potential (VEP) plasticity in the visual system of mammals. Repeated exposure to sensory stimuli can induce neuronal plasticity and leads to an enhanced visual response of these stimuli. Numerous researches have shown that the VEP enhancement might reflect synaptic plasticity (Heynen and Bear, 2001;Sawtell et al., 2003;Teyler et al., 2005;Frenkel et al., 2006;Ross et al., 2008;Bear, 2010, 2012). A few days long, repeated presentation of gratings with a single orientation resulted in a potentiation of the cortical VEPs amplitude to the presented stimuli (Frenkel et al., 2006). The mechanism underlying this form of training-dependent plasticity is known as long-term potentiation (LTP) of the cortical response (Frenkel et al., 2006;Kuo and Dringenberg, 2009;Hager and Dringenberg, 2010). There are well-described types of rapid VEP plasticity evoked by a few minutes of "photic tetanus" stimulation (Clapp et al., 2006a). The aforementioned repetitive stimulation results in positive changes in the visual system, for example, an expansion of neuronal receptive fields into unresponsive regions of the visual field (Eysel et al., 1998). The enhancement of the cortical response is an effect of sensory LTP dependent on NMDA-receptors in animals (Clapp et al., 2006a) and humans (Teyler et al., 2005;Clapp et al., 2006b;Ross et al., 2008). Studies on humans showed that repeated exposure to a checkerboard reversal stimulation leads to an increase of cortical VEP amplitude (Teyler et al., 2005;Normann et al., 2007;Elvsshagen et al., 2012).
Little is known about the effect of visual training at the subcortical level. A study carried out on the rat's dorsal lateral geniculate nucleus (dLGN), the primary recipient of visual information, suggests that the response properties of thalamic neurons are subject to experience-dependent long-term plasticity (Jaepel et al., 2017;Sommeijer et al., 2017). Similarly, in the superior colliculus (SC), the second major target of retinal input, repetitive exposure to dimming stimuli effectively induced the LTP of developing retinotectal synapses in Xenopus tadpole (Zhang et al., 2000). These reports suggest that appropriate sensory stimulation may induce plasticity also at the subcortical level. To test this hypothesis, we used 3 h of repetitive visual training to induce and determine the VEP plasticity in the primary visual cortex (V1) and the SC. Our study showed that visual training evokes enhancement of visual responses both at the cortical and subcortical levels. Further, we revealed that repetitive visual training evokes response enhancement in the SC even if cortical input is turned off by xylocaine.
Surgical Procedures
Rats were deeply anesthetized with urethane (1.5 g/kg, Sigma-Aldrich, Germany, administered intraperitoneally) and placed in a stereotaxic apparatus. Additional doses of urethane (0.15 g/kg) were administered when necessary. Body temperature was maintained between 36 and 38 • C using a heating blanket (Harvard Apparatus, MA, United States). Every hour fluid requirements were fulfilled by subcutaneous injection of 0.9% NaCl. The skin on the head was disinfected with iodine, and local anesthetic (1% lidocaine hydrochloride; Polfa Warszawa S.A, Poland) was injected. The craniotomy was done above the binocular V1 [6.5-7 mm posterior to Bregma; 4.5 mm lateral; (Paxinos and Watson, 2007)] and the SC contralateral to the stimulated eye [7.0 mm posterior to Bregma; 1.5 mm lateral; (Paxinos and Watson, 2007)]. During recording, the right eye (not stimulated) was covered with black tape. The Vidisic gel (Polfa Warszawa S.A, Poland) was applied to prevent the cornea from drying.
Local Field Potential Recordings and Visual Stimulation
Local field potentials (LFPs) were collected using linear electrodes made of 25 µm tungsten microwire in HML (Heavy Polyimide, 0.1-0.3 m ) insulation (California Fine Wire, United States). The ground-reference electrode (Ag/AgCl wire) was located in the neck muscles. The cortical electrode (eight channels) was made with an inter-channel distance ranging from 100 to 300 µm and inserted 1.8 mm below the dura, passing through all cortical layers (supragranular, granular, and infragranular). The SC electrode was located in the upper layers; stratum griseum superficiale-SGS, stratum opticum-SO. The SC electrode consisted of seven wires with a ∼ 200 µm vertical recording site arrangement and inserted 4 mm (tip) below the cortical surface. After the initial insertion, the electrode was let to stabilize for about 60 min. After that, we presented several flashes of light to obtain the VEP cortical profile and compare response shapes in all channels. The same procedure was repeated in every animal. The cortical profile was adjusted to see the most similar shapes of the responses on the same levels in each experiment. Signals were recorded with a multichannel data acquisition system (USB-ME64-System, Multichannel Systems, Germany), amplified 100 times (USB-ME-PGA, Multichannel Systems, Germany), filtered at 0.1-100 Hz, digitized (1 kHz sampling rate) and stored on the computer for offline analysis. Multichannel recordings allow us to choose the channels that corresponded to a specific layer, which was similar in all animals. The multiunit spiking activity was obtained by high pass filtering using Butterworth filter with 500 Hz cutoff frequency of raw signal recorded at a 20 kHz sampling rate. The multi-unit activity was extracted from the filtered signal using the 3.5 standard deviation threshold for spike detection. Visual stimulation was controlled by Spike2 software (Cambridge Electronic Design, United Kingdom). Stimulation marks were recorded along with the electrophysiological signals. Flash VEPs were evoked using light-emitting diodes (LEDs, 2200 lx) positioned 15 cm in front of the rat's left eye. The repeated visual training consisted of 300 flashes at 0.5 Hz repeated every 15 min for 3 h ( Figure 1A; Foik et al., 2015). Control recordings were carried out (100 flash repetitions at 0.1 Hz) before and after visual training to investigate the effect of training. A schematic diagram of the experimental protocol is shown in Figure 1.
Temporal Inactivation of the Cortex
A plastic chamber placed above the right hemisphere was filled with a xylocaine solution (2.5%, Lidocainum Hydrochloricum WZF, Polfa Warszawa S.A, Poland) to inactivate cortical activity (action potential blockage) during visual training (Wagman et al., 1967). The solution was replaced every 30 min during electrophysiological recordings.
Data Processing
Collected data (seven-visual training, seven-visual training with cortical inactivation) was analyzed using Matlab (Mathworks, Natick, MA, United States). The LFP signal was preprocessed using the 1st order band-stop Butterworth filter at 50 Hz, a highpass filter at 0.1 Hz, and a low-pass filter at 100 Hz. Continuous signals were divided into trials 1.2 s long (from 0.2 s before to 1 s after stimulus). The peak-to-peak amplitude of VEPs was calculated at every hour during visual training and control recordings in the 0-0.2 s time range, where 0 is a stimulus onset time. Each channel was normalized in the following way: the training data (VEP amplitude from 1, 2, and 3 h of training) was compared to the mean response magnitude of the first series of visual stimulation (300 light flashes, time 0), which was assumed as a 100%. For every hour, the percent increase of the VEP amplitude (%) was computed. In the same way, the difference between the post and pre-training controls was measured, where the pre-training was assumed as 100%. The LFP data is presented in the mV unit. Cortical LFPs were analyzed in the delta (1-4 Hz), and theta (4-7 Hz) frequency ranges to monitor brain state. The mean frequency of the EEG signal occurring during recording was estimated at 1-5.6 Hz level. The signal to noise ratio as a percent change was obtained by dividing the VEP amplitude by the peak to peak amplitude of the whole particular time recording (raw signal; %). This analysis was performed to check whether the VEP amplitude during a few hours of training will separate more from the noise generated by spontaneous activity of the brain. The VEP area for each averaged VEP was calculated as a sum of an absolute value of a voltage from each averaged evoked potential in a time range of 0-0.5 s. For statistical analysis, two recording channels for each layer from a given structure were taken in each animal. The VEPs amplitudes and signal to noise ratio factor of pre-and post-training recordings were compared using two-tailed paired t-tests. To compare mean collicular VEP amplitude between the two experimental paradigms: V1 activated and inactivated, the two-tailed unpaired t-tests with Welch correction was used. One-way repeated-measures ANOVA with greenhouse Geisser correction was used with the Dunnett's post hoc test to investigate changes in the mean VEPs responses and signal to noise ratio for three h of visual training. The statistical comparison between the average difference of pre and post-training responses for SC and V1 was computed by the Mann-Whitney test. Results are presented as a mean % ± SEM.
Histology
Electrodes coated with DiI (1,I'-dioctadecyl-3.3,3' ,3' tetramethylindocarbocyanine perchlorate; Sigma-Aldrich, Germany) were used to facilitate electrode tract reconstruction (DiCarlo et al., 1996). At the end of the experiments, rats were given an overdose of Nembutal (150 mg/kg) and transcardially perfused with 4% paraformaldehyde in 0.1 M PBS (Sigma-Aldrich, Germany). The brains were removed, postfixed for 24 h in 4% paraformaldehyde, stored successively in 10, 20, and then in 30% sucrose in 0.1 M PBS before sectioning. Brains were cut into 40 µm slices and stained with cresyl violet. The data from the experiments with incorrect electrode placements were excluded from further data analysis.
RESULTS
Analysis of VEP amplitude for the V1 included the granular layer (400-800 µm), with a typical reversal of potential polarity and infragranular layers (1-1.8 mm), identified by negative components. We considered layer V and VI as infragranular layers. We also analyzed the two retino-recipient layers of the SC:SGS (2.8-3.4 mm) and SO (3.6-4 mm), which were characterized by a biphasic oscillatory pattern of the recorded signal.
Effect of Visual Training on the Magnitude of VEPs Amplitude
To test the effect of repetitive visual training, we compared preand post-training responses to light flashes. Examples of cortical and collicular VEPs from a single rat and averaged for all animals are shown in Figures 2A-H. The comparison of pre-and posttraining VEPs showed a larger response magnitude after 3 h training session (see section "Materials and Methods" for more details), both in the cortex and the SC (Figures 2I-L). In the SGS layers of the SC, the amplitude of the post-training response was significantly greater (180 ± 9 %, n = 10, p < 0.001) than the pre-training amplitude. The increase in the SO layers was even stronger (208 ± 17 %, n = 10, p = 0.004). Results for both layers in the V1 revealed a significant increase of the VEP amplitude after visual training. The potentiated response in the granular layer after visual training was higher (179 ± 18 %, n = 12, p = 0.005) than the increase seen in the infragranular layers (144 ± 10 %, n = 14, p = 0.003). The potentiation following 3 h training was also indicated by an increase of the signal to noise ratio for post-training control compared to pre-training both in the V1 and the SC (Figures 2M-P). Analysis of VEP area difference both for collicular and cortical pre vs. post-training response revealed statistical significance for SC SGS (p = 0.007) and SC SO (p = 0.004, Figures 2Q-T). Figure 3 presents VEPs obtained for every hour of visual training in the V1 and the SC, including division into layers. Examples of cortical and collicular VEPs from a single rat and averaged for all animals are shown in Figures 3A-H. The largest increase of VEP amplitudes was observed after three hours of stimulation in all studied cases. Response in granular layer ( Figure 3I; F(3,11) = 4.76, p = 0.009) was significantly greater after second hour (183 ± 17 %, p = 0.04) and thrid hour of training (215 ± 33 %, p = 0.04) in comparison to control. In the infragranular layers [ Figure 3J; F(3,13) = 4.76, p = 0.0006] the greatest amplitude was during third hour (163 ± 8 %, p = 0.0005) as well as second hour of training (138 ± 6 %, p = 0.009) than in control.
Effect of V1 Inactivation on Visual Training Efficiency in the SC
The potentiation in the SC following visual training may be a result of two phenomena: (1) as a result of changes to retinotectal synapses independently of the V1; (2) or enhancement in the V1 that modulates response in the SC. To resolve this problem, we selectively inactivated the V1 through the application of xylocaine solution on the surface of the V1 for the duration of visual training. Chemical inactivation of the V1 caused strong silencing of cortical responses during visual training (Figures 4A,C), leaving only the incoming volley (the incoming thalamic information). This phenomenon was also observed in a study with the barrel cortex inactivated through the cooling cortical surface (Kublik et al., 2001). The effect of xylocaine (sodium channel blocker) application was also visible as a multi-unit activity drop shown in Figure 4B in the form of a raster plot and comparison of two histograms from the same recording site. The chemical inactivation prevented cortex from response enhancement as presented in Figure 4D and revealed no difference in amplitude before and after training (83 ± 4%, n = 14, p = 0.25). This result confirms an effective deactivation of the V1 during training and blocking the learning effect in this structure. The comparison between pre-and post-training recordings in the SC again, showed significant enhancement of the response as a result of the visual training while V1 deactivation. In the SGS layer, the amplitude of the visual response was significantly greater in post-training control compared to pre-training recording (Figure 4D; 212 ± 26%, n = 10, p = 0.0025). Moreover, the amplitude was higher than changes evoked in the SO layers ( Figure 4C; 161 ± 23 %, n = 10, p = 0.03). We observed an increase of the collicular VEP amplitudes during visual training even when cortical activity was silenced [for SGS; F(3,9) = 6.59, p = 0.005, for SO; F(3,9) = 9.28, p = 0.02]. For the SGS layers ( Figure 4E) VEP amplitude was the highest after 3 h of visual training compared to control time (262 ± 36%, p = 0.01), then second hour (222 ± 25 %, p = 0.02) and after first hour (152 ± 10 %, p = 0.08). We found similar tendency in the SO layer ( Figure 4F) where the potentiation of visual response was the greatest after third hour of stimulation (222 ± 31 %, p = 0.04), then second hour (213 ± 36 %, p = 0.1), and after first hour (173 ± 24%, p = 0.1). A comparison of the visual training effects of the two conditions (V1 activated and inactivated) did not show a significant difference in the SC (Figure 4F). The increase of visual response for every hour of training in both SC layers was similar for both V1 conditions. This result indicates that the temporary blocking of V1 did not inhibit VEP plasticity in SC evoked by repeated visual training. We can also conclude that the enhancement of VEP amplitude in the SC occurred mainly through a retinotectal synapse. The stronger enhancement occurred in the SC compared to the V1. The average difference between pre and posttraining responses for SC = 106 ± 17% was higher than for V1 = 17 ± 4 % without xylocaine (Figure 4H; p = 0.002) and even higher with xylocaine application during the training (Figure 4I, p < 0.0001).
DISCUSSION
Our results showed that the 3 h visual training evoked strong enhancement of the VEPs both in the V1 and the SC. Moreover, our paradigm of repetitive visual training evoked a stronger increase in the response in the SC than in the V1 (Figure 2). We confirm that visual training causes enhancement of VEP amplitude in the V1 as it was described before (Sawtell et al., 2003;Teyler et al., 2005;Frenkel et al., 2006;Ross et al., 2008;Bear, 2010, 2012) and extends the knowledge showing positive results of this training in the rat's SC. Specifically, we showed in an electrophysiological study that single training session, lasting several hours, induces plasticity of visual responses.
Based on the literature we can enumerate several welldescribed paradigms of repetitive visual stimulation which differ from each other mainly in the type of visual stimulus, presentation timing, a number of repetitions and the frequency of the stimulus (Furmanski et al., 2004;Clapp et al., 2006a;Frenkel et al., 2006;Kuo and Dringenberg, 2009;Cooke and Bear, 2010;Hager and Dringenberg, 2010). One well-known protocol uses repeated presentations of a specifically oriented visual stimulus through several days that causes stimulus-selective potentiation of a cortical response in awake mice and enhancement of a signal detection power in humans (Seitz et al., 2009). This type of plasticity is well described in layer IV of the V1 (Frenkel et al., 2006;Cooke and Bear, 2012). In our study, we found that the potentiation of the visual response to a flash stimulus also occurs in the infragranular layers. Specific types of repeated visual stimulation (shorter and more intense) are also able to induce the modulation of VEP plasticity and modifications of synaptic connectivity in the mature V1. Studies carried out on humans demonstrated how 10 min presentation of checkerboard reversals (Teyler et al., 2005) resulted in sustained amplitude modulation of early components of subsequent VEPs, whereas a rapid (9 Hz, for 2 min) checkerboard stimulation might induce the enhancement of visual response in the adult rats V1 (Clapp et al., 2006a). Our experimental paradigm provides a form of rapid visual training consisted of a series of flashes repeated every 15 min through 3 h. The frequency of stimulus presentation (0.5 Hz) is lower than was used by Clapp et al. (2006a); nevertheless, it was sufficient to evoke significantly enhanced responses in the VC and SC.
Flashing stimuli compared to drifting sinusoidal gratings or reversal checkerboards are rather strong stimuli, thus can evoke changes faster. It was shown before that in adult mice flashing stimuli evoke robust long-term changes in the V1 neuronal response and increase the broadband power of LFP signal (Funayama et al., 2015(Funayama et al., , 2016. Thus, the flash stimulus might be successfully used for the induction of modulation in the neuronal response (Minamisawa et al., 2017), which we confirmed in our study.
It is considered that the reinforcement of neuronal response occurring following repetitive stimulation in the visual system in awake animals, including humans might be triggered by increasing the number or gain of neurons involved in response to the trained stimulus (Furmanski et al., 2004;Hager and Dringenberg, 2010). In our study, we observed facilitation of visual response to a flash stimulus, although the experiments were carried out on anesthetized rats. Previous studies also confirmed the occurrence of learning processes in the visual system in animals under deep anesthesia, where LTP dependent of NMDA receptors was effectively induced in the V1 through electrical theta-burst stimulation of the visual pathway (Heynen and Bear, 2001;Kuo and Dringenberg, 2009) or via repetitive visual stimulation (Clapp et al., 2006a).
We found that 3 h of repeated visual stimulation evoked enhancement not only in the cortex but also in the SC (Figures 2, 3). We also show for the first time that, stronger response enhancement occurred in the SC than in the V1. So far, little attention has been devoted to the investigation of this effect. Zhang et al. (2000) shown that the repetitive exposure to dimming stimuli effectively induced the LTP of developing retinotectal synapses in Xenopus tadpoles. This effect is attributed mainly to changes in synaptic efficacy at retinotectal projections. However, potentiation of neuronal response at this level of the visual processing can also derive from the cortex due to inputs from layer 5 of the V1 (Waleszczyk et al., 2004;May, 2006). Our results revealed that collicular response potentiation is not changed when V1 is blocked during repetitive visual training. This indicates that the increase of neuronal responses in the SC is most likely due to the enhancement of the retinotectal projection. There are supportive studies performed on rats where ablation of the V1 caused facilitation of the LTP formation in the SC, indicating a suppressive influence of the V1 inputs into the SC (Shibata et al., 1990;Okada, 1993). In our studies, we observed potentiation of the SC response during visual training regardless of whether the V1 was activated or inactivated (Figures 3, 4).
In summary, the data presented here show a new form of plasticity occurring after 3 h of repeated visual training in the primary VC and SC. We demonstrated that the enhancement of neuronal responses in the SC following our paradigm of visual stimulation occurred independently of the V1, most likely through retinotectal projection. Further research will be needed to better understand the mechanisms responsible for this phenomenon.
DATA AVAILABILITY STATEMENT
The datasets presented in this article are not readily available because the datasets can be available only in collaboration with authors. Requests to access the datasets should be directed to KK<EMAIL_ADDRESS>
ETHICS STATEMENT
The animal study was reviewed and approved by the First Warsaw Local Ethical Commission for Animal Experimentation (521/2018). | 5,275.2 | 2020-03-20T00:00:00.000 | [
"Biology",
"Psychology"
] |
The Formal, Financial and Fraught Route to Global Digital Identity Governance
How can we understand the progressive, piecemeal emergence of global digital identity governance? Examining the activities of the Financial Action Task Force (FATF) - an intergovernmental organization at the center of global anti-money laundering and counter-the-financing of terrorism governance-this paper advances a two-fold argument. First, the FATF shapes how, where and who is involved in developing key standards of acceptability underpinning digital identity governance in blockchain activities. While not itself directly involved in the actual coding of blockchain protocols, the FATF influences the location and type of centralized modes of control over digital identity governance. Drawing on the notion of protocological control from media studies, we illustrate how centralized control emerging in global digital identity governance emanates from the global governance of financial flows long considered by international organizations like the FATF. Second, we suggest that governance by blockchains persistently shapes the ability of the FATF to stem illicit international financial flows. In highlighting both the influence of FATF on blockchain governance and blockchain governance on the FATF, we draw together two strands of literature that have been considered separately in an analysis of the formal, financial and fraught route to global digital identity governance.
INTRODUCTION
How can we understand the on-going emergence of global digital identity governance? The seemingly ever progressing digitalization of human activities, accelerated by the Covid-19 pandemic, is not a smooth, linear and all-encompassing affair. Rather, it remains patchy and tension-filled. While activities like digital payments flourish (Boakye-Adjei, 2020; Frazier, 2020), others remain marked by longstanding conflicts. The progressive and piecemeal digitalization of identities exemplifies these broad tensions including, amongst others, user privacy and the informational needs of regulators charged with prevent exploitation, abuse and illicit activities. Blockchains and other novel technologies are continually emerging to square the circle of privacy and surveillance. Yet, their applications often merely shift the location and form of such tensions, rather than resolving them.
Analysis of emerging blockchain-based attempts to resolve these and other longstanding tensions in contemporary governance generally considers governance by and of blockchain systems (Campbell-Verduyn, 2018b;Atzori, 2017;de Filipi, 2018;Herian, 2018;Hooper and Holtbrügge, 2020;Jones, 2019;Reijers et al., 2016). 1 The former stress how blockchain applications themselves govern an organization or process while the latter emphasize how blockchains are governed by a range of state and non-state organizations. While generating increasingly nuanced understanding, this growing literature has granted surprisingly little attention to the interplay between governance of and by blockchains. In particular, little attention has been granted to relations between informal and formal forms of blockchain governance beyond passing mentions to the likes of the International Monetary Fund (IMF) and Organization for Economic Cooperation and Development (OECD).
This article contributes to the filling of this gap by examining relations between evolving forms of governance by blockchains and the governance of blockchain emanating from the Financial Action Task Force (FATF). Attracting more industry attention than in academic studies of blockchain (Table 1), 2 this Parisbased intergovernmental organization is responsible for setting global standards for anti-money laundering and counter-thefinancing of terrorism governance (AML/CFT). In tracing both 1) the underappreciated role of this formal organization in shaping the emergence of global digital identity governance and 2) the implications of blockchain activities on its attempts to stem illicit financial flows, this paper draws together of analysis of governance by and of blockchain. To do so, we harness and extend the notion of protocological control. Developed by media studies scholar Alexander Galloway (2004): 6-7, who built on insights from French philosophers Michel Foucault and Gilles Deleuze, the notion of protocological control helps illustrate how the embedding of specific standards of behaviour into computing protocols provide the key "standards governing the implementation of specific technologies." We show how protocols serve as key forms of governance themselves while also drawing out how the location and form of protocological control is itself shaped. In other words, we clarify the who, where and how of protocological control by pointing to the influence of the FATF on the location and form of protocological control in blockchain-based activities. In doing so, we show that, despite claiming to distribute power across the nodes in novel digital networks, applications of blockchains instead frequently shift the location and type of actors exercising centralized control.
Two central contributions are made in this article. First, we illustrate how the FATF shapes how, where and who is involved in developing key standards of acceptability underpinning digital identities. While not itself directly involved in the actual coding of protocols, the FATF influences the location and type of centralized modes of control. We stress how protocological control emerging in global digital identity governance emanates from the global governance of financial flows long considered by intergovernmental organizations like the FATF. In elaborating the role of this organization, we extend studies identifying the financial roots of digital identity governance beyond informal interactions between the public sector and financial technology industry at the national level (Eaton et al., 2018;Faria, 2021). Second, we suggest that governance by blockchains persistently shapes the ability of the FATF to stem illicit financial flows. In highlighting tensions between both the influence of FATF on blockchain governance and blockchain governance on the FATF, we draw two strands literature together in identifying both the formal and financial, as well as the fraught route to global digital identity governance.
We elaborate these arguments over three further sections drawing on primary documents, including guidance and reports of the FATF, 3 as well as secondary documents from blockchain industry news sites. The following section analyzes how the FATF shapes the exercise of protocological control in regards to governance of blockchains generally and digital identity governance specifically. A third section highlights how forms of governance by blockchains shaped by the FATF paradoxically undermine the objectives of this organization of reducing illicit international financial flows. A final section summarizes and offers directions for future research. Assessing risks and applying a risk-based approach 2 National cooperation and coordination B-MONEY LAUNDERING AND CONFISCATION 3 Money laundering offence 4 Confiscation and provisional measure C-TERRORIST FINANCING AND FINANCING OF PROLIFERATION 5 Terrorist financing offence 6 Targeted financial sanctions related to terrorism and terrorist financing 7 Targeted financial sanctions related to proliferation 8 Non-profit organisations D-PREVENTIVE MEASURES 9 Financial institution secrecy laws Customer due diligence and record keeping 10 Customer due diligence 11 Record keeping Additional measures for specific customers and activities 12 Politically The FATF was established in 1989 as part of inter-state efforts by the Group of 7 (G7) countries to stem global money laundering. It promulgated an initial 40 recommendations for supporting global anti-money laundering efforts (AML) that were supplemented with 9 special counter-the-financing-of-terrorism (CFT) recommendations following the 11 September 2001 attacks ( Table 2). The task force issues official reports and guidance on the implementation of these 40 + 9 recommendations for countering the financing of the proliferation of weapons of mass destruction (FATF, 2018) and illicit wildlife trade (FATF, 2020c), as well as extending its recommendations to virtual currencies (FATF, 2015), virtual asset service providers (FATF, 2019) and digital identities (FATF, 2020e). Scholarly literature has long debated the origins and impacts of FATF's activities (Gutterman and Roberge, 2019: 462;Tsingou, 2010;Hülsse, 2008;Hülsse and Kerwer, 2007;Truman and Reuter, 2004). On the one hand, are critiques of its symbolic "security theatre" as providing weak attempts to show member states that it is "doing something" about international money laundering and the financing of terrorism. On the other hand, FATF activities are regarded as successfully motivating a range of state and non-state actors to prioritize AML/CFT efforts while setting the requirements for the proper monitoring of identity systems in finance. These latter accounts stress how enforcement of the FATF's non-binding, voluntary standards relies on periodic monitoring of compliance with its 40 + 9 recommendations. When what the FATF calls "strategic deficiencies in their regimes to counter money laundering, terrorist financing, and financing of proliferation" is identified, the Task Force enhances its monitoring. 4 However, it lacks direct enforcement mechanisms itself. Instead, the FATF issues warnings to exercise caution to its global network of 39 official state members, as well as non-members in its wider network of some 170 associate and observer members, and related regional bodies around the world. These warnings caution state and non-state actors globally about interacting with "Jurisdictions under Increased Monitoring" (the FATF's unofficial "grey list") 5 and countries on its unofficial FATF "blacklists" 6 The effectiveness of the FATF ultimately relies on peer pressure for its members and non-members alike to sanction jurisdictions on its unofficial lists. The FATF's power is thus indirect: it is a standardsetter and monitor rather than an enforcer. It shapes global regulatory responses, but it relies on others to develop and enforce them, including its member states, who have in turn tended to "deputize" banks and other financial market actors as AML/CFT enforcers to develop and undertake Know Your Customer (KYC) procedures (Amicelle, 2011; see more generally; Avant, 2005). Such enforcement-by-proxy entails a chain of enforcement in which the FATF relies on member states who in turn rely on market actors in their jurisdictions to implement the intergovernmental organization's guidance.
In this section, we build on and extend insights into the FATF's exercise of indirect power. We show how this IO shapes the location of protocological control by tracing the financial and formal lineage of global digital identity governance. The FATF, we argue, shapes the hard code of the computer protocols underpinning blockchain-based activities through soft international law issuance of guidance and recommendations. A first sub-section considers the impacts of the FATF, 2015 guidance on virtual currencies before a second examines the 2019 guidance on virtual asset service providers. Both of these "risk-based" guidances, we argue, shaped the market-based location of protocological control over blockchain technology. The FATF enabled private actors to take charge of monitoring the flow of blockchain transactions and the identities of the entities undertaking them. It has done so by guiding member-states towards letting market actors exercise protocological control in the emerging governance of digital identities. Although becoming more explicit, this steering towards private-led governance is in line with this IO's longerstanding risk-based approach that, as we elaborate, attempts to weigh the costs and benefits of greater public involvement in rapidly evolving technological changes. It is also in line with the wider approach towards innovation and the knowledge economy promoted by leading international organizations like the OECD, at whose Paris headquarters the FATF's secretariat is housed (Hasselbalch, 2018;Campbell-Verduyn and Hütten, 2019).
Guiding Protocological Control by Market Forces
While always a consideration in AML/CFT discussions (see for instance FATF, 2013), technology came to the forefront of FATF activities in the past half decade as financial technologies ("FinTech") and regulatory technologies ("RegTech") gained attention globally. The FATF launched a FinTech/RegTech Forum in 2017 for stimulating more effective monitoring and compliance with its 40 + 9 recommendations. The FATF's engagement with blockchain applications began earlier, with a 2014 report weighing the potential benefits and risks from virtual currencies, which included cryptocurrencies based on blockchain technology. The report specifically highlighted identity topics. On the one hand, the FATF (2014) flagged concerns about the anonymity provided by the technology, the limited possibilities for identification and verification of network participants, as well as a lack of clarity for formal regulatory responsibilities. On the other hand, the FATF (2014) also identified legitimate benefits such as lower transaction costs and possibilities for enhancing financial inclusion. Based on this initial risk assessment formal guidance on how its members should apply its 40 + 9 AML/CFT recommendations to virtual currencies was issued in 2015. 7 The 2015 FATF guidance shaped the location and form of protocological control across emerging blockchain-based activities in two interrelated ways. First, it downplayed the growing calls for public authorities to apply direct control. Instead, it recommended letting market actors develop appropriate protocols for ensuring AML/CFT controls. This recommendation emerged at a time when formal laws were emerging to restrict blockchain-based activities in member countries like Russia and China. As outright bans on the leading application of the technology to cryptocurrencies were being imposed in prominent jurisdictions, the FATF called for looser, "light touch" regulations. It waded off calls for strong "hands on" public control by not recommending formal regulation of blockchain applications. This was despite the growing gulf between AML/CFT identity requirements and the quasi-anonymity of many cryptocurrencies. The 2015 FATF guidance did call for close monitoring of cryptocurrency exchanges by member states. Yet FATF members and nonmembers alike were also encouraged to avoid formal bans and other actions that could lead blockchain activities shifting to less regulated jurisdictions. The guidance instead called for members to "take into account, among other things, the impact a prohibition would have on the local and global level of ML/TF risks, including whether prohibiting VC [virtual currency] payments activities could drive them underground, where they will continue to operate without AML/CFT controls or oversight" (FATF, 2015: 8-9).
In toning down the growing chorus of calls for "stricter" state regulation of blockchain-based activities, the 2015 FATF guidance on virtual currencies re-enforced the longstanding roles of private actors as enforcers of AML/CFT specifically, and the market-based location of protocological control generally. The 2015 FATF guidance extended the private-led development of standards of information communication in and between blockchain activities. Rather than state authorities, a range of competing start-up technology firms like Mastercoin, Counterparty and Interledger proposed manners of connecting together various protocols building on the Bitcoin protocol. Protocological control was equally left to market actors in governing the kinds of "forks" from the Bitcoin blockchain. The market-based competition led to what was dubbed a civil war in the 2017 "hard fork" of the original computer protocol that became Bitcoin Core (BTC) and Bitcoin Cash (BCH) (Coin Idol, 2019). The latter maintained the features and transaction history of the former protocol, while also introducing a fundamental change in acceptable standards of behavior: the ability to spin-off new protocols or "forks" from an existing protocol. The new BCH protocol then itself split in two as debates over the appropriate block size for recording verified transactions on the shared ledger led to the creation of both Bitcoin Cash Satoshi Version (SV) and Bitcoin Cash Adjustable Blocksize Cap (ABC) in late 2018. Whereas the development of these multiple, overlapping protocols was left to market forces, protocological control in the Ethereum blockchain was centralized in its Foundation and founder, Vitalik Buterin. A major flaw in the protocol of The DAO, a utopian experiment with automatic management of crowdsourced funds, led to a hack and withdrawal of the equivalent to $120 million raised in 2016 before informal centralized control was exercised to repair the underlying code (Hütten, 2019). A year later, the centralized group of "core" Ethereum developers formally adopted a previously informal set of rules in standardizing interactions between the disparate applications on this blockchain (Buntix, 2017). The adoption of what is still known as the "Ethereum Request for Comment" (ERC) number 22 further illustrated how protocological control was left to be exercised by non-state actors. This episode also highlighted the relevance of the identities of the programmers shaping these protocols. Departing from the substantial efforts Satoshi Nakamoto took to remain anonymous, developers behind blockchain protocols became increasingly public figures exercising protocological control over quasi-anonymous payment systems.
What the 2015 FATF guidance contributed to then was a taming of growing worldwide expectations that direct state control could, should and would be exercised over blockchain protocols in quarrels over questions of identity requirements. Key actors at the intersection of cryptocurrencies and fiat currency exchange became increasingly monitored. Yet, protocological control remained exercised by non-state actors. In the Bitcoin protocol debates of 2017 those users most able to harness computing power-the large "pools" of miners-exercised decision-making power in "forking" the original protocol. Similarly, the 2016 hack of The DAO saw a distributed community of users rally around the creator of Ethereum, then 24-year Russian-Canadian Vitalik Buterin, who undertook centralized amendment of this protocol. Both of these instances revealed the degree to which power and control remained market-based and how the FATF guidance did not alter non-state control but extended it, just as it would do again 4 years later.
Extending Protocological Control to New Markets
The 2019 FATF Guidance assembled fiat-to-cryptocurrency exchanges together with other actors linking real-world identities with the quasi-anonymous payments facilitated by blockchain protocols into a category called "virtual asset service providers" (VASPs). The FATF's guidance on extending its 40 + 9 recommendations to VASPs contained a controversial amendment to its 16th recommendation stipulating that financial institutions should collect and share customer information amongst one another. 8 The FATF specified that by June 2020 VASPs also implement the "travel rules" on customer information adhered to by other financial actors, like banks. The guidance specified that the following identity attributes should "travel" along the chain of transactions exceeding $1,000: -"(i) originator's name (i.e., sending customer) -(ii) originator's account number where such an account is used to process the transaction (e.g., the Virtual Asset wallet) -(iii) originator's physical (geographical) address, or national identity number, or customer identification number (i.e., not a transaction number) that uniquely identifies the originator to the ordering institution, or date and place of birth.
-(iv) beneficiary's name -(v) beneficiary account number where such an account is used to process the transaction (e.g., the Virtual Asset wallet)" (FATF, 2019: 29) The collection and transfer of such identity attributes stood in tension with the quasi-anonymous identity standards underpinning digital transactions in and across many blockchain protocols. As one article in the leading cryptocurrency news website CoinDesk put it, the extension of the Travel Rule to VASPs "goes against the grain to shoehorn an identity layer onto a technology specifically designed to be pseudonymous" (Allison, 2020a). The FATF guidance and its recommendation to extend the Travel Rule specifically was perceived by many industry actors as "excessively onerous to manage" and decried for the possibility that it "could drive the entire ecosystem back into the dark ages" (Weinberg in Hochstein et al., 2019).
Contrary to these views and critiques of the FATF's exercise of "draconian" power (Hamacher, 2019), however, the task force once again left protocological control to markets rather than state-controlled bodies. Notably, the FATF did not call for public entities to develop or enforce any set of uniform standards for identity information sharing between VASPs. Instead, the 2019 FATF guidance spurred an intense "race" between market players seeking to develop key standards for behavior underpinning digital identity systems that could enable VASPs comply with the Travel Rule and AML/CFT recommendations (De, 2019). Moreover, prior to the publication of the 2019 guidance, the FATF had engaged in a multi-year formal regulatory dialogue with industry actors. It gave dozens of so-called "identity startup" firms opportunities to develop and test protocols for squaring the circle of, on the one hand, enabling VASPs to collect and transfer data on users, while on the other hand ensuring that user anonymity would remain protected (Henry et al., 2018). What we call a protocol dialogue involved industry-FATF deliberations on how protocols can and should be developed and applied by blockchain start-ups and other technology companies. The development of the FATF, 2019 guidance was summed up by the FATF Secretariat in an interview stating how "[w]e didn't want FATF to sit down and tell technical details of exactly how companies should comply with it because that would quickly become out of date." (Oki, 2019). Once again, the FATF steered protocological control towards the market rather than calling on member states themselves to develop key standards. The FATF governance of blockchain relied closely on governance by blockchain developers. Table 3 provides an indication of how the aggregate reception of the 2019 FATF guidance grew more positive as fears of its "draconian" actions subsided. 9 Note: Outcomes in boxplots differentiated by year illustrate general sentiments by blockchain industry actors. Sentiments are averaged of 184 articles from Coindesk and 142 articles collected from Cointelegraph. The scale ranges from -1 (most negative) to +1 (most positive) with 0 being neutral. 9 We used automatized webscrapping with Python to collect articles mentioning the FATF from the leading blockchain media platforms CoinDesk and Cointelegraph utilizing the search feature of the respective sites. Documents were coded manually using the qualitative data analysis software NVivo 12 Plus treating the coding itself as part of the analysis (Basit, 2003). This means that we treated coding as a heuristic, more akin to an exploratory problem-solving technique (Saldana, 2009), starting with an in vivo approach that coded sections with a word or short phrase taken from each document. The sentiment analysis of the total 326 articles using Python TextBlob module to compare change over the three years containing the most attention to the FATF activities, 2018-2020. TextBlob assigns polarity values between −1 and 1 to certain words and word combinations in each article indicating if a sentence is more positive or more negative terms. Scores per word or word combination are predefined. For example, the word "great" receives a score of 0.8, the word "bad" scores −0.7, but a negation like "not bad" scores a 0.35. TextBlob then averages the all together for longer text and returns a total polarity value for each article (Schumacher, 2015). While a machine learning approach may yield better results, we used the data predominantly in an explorative fashion, limiting our approach to simple text processing.
A trio of caveats are necessary to clarify our central argument thus far regarding how FATF governance of blockchain activity through its formal guidance on virtual currency and virtual asset service providers impacted the location of protocological control. First, the FATF itself did not exercise protocological control but rather shaped the location where such control would be exercised. The task force did so by avoiding recommending public approaches encouraging top-down implementation of its AML/CFT recommendations. Instead, the task force sought to ensure that protocol development, implementation and control remained a more bottom-up affair with "identity" start-ups competing with one another. Second, guidance towards market-rather than government-led development of key standards of behavior to novel blockchain activities is an extension of the FATF's longstanding risk-based approach. The approach attempts to weigh the challenges and opportunities involved with implementing the 40 + 9 AML/CFT recommendations, recognizing that "harsher" clamp downs and even bans on certain activities may merely send illicit activities to other jurisdictions while undermining possible benefits of technological innovation. In the context of blockchains, the risk-based approach is one that weighs the risks of illicit activities with cryptocurrencies with the promises of financial surveillance offered by its underlying distributed ledger technology. Third, public actors and official policymakers were not absent, but actively encouraged privatesector standard-setting for squaring the circle of privacy and surveillance in blockchain activities. At the so called Virtual 20 (V20) event, held in parallel to the 2019 Group of 20 meeting in Japan, policymakers including ex-FATF President Roger Wilkins, Japanese Congressman Naokazu Takemoto and Taiwanese Congressman Jason Hsu were present in the signing of the national VASP industry agreement to co-develop standards for digital identities (Zmudzinski, 2019). Representatives from the United States Department of Homeland Security and Treasury Department's Financial Crimes Enforcement Network (FinCEN) were all present at the November 2019 Travel Rule Compliance Conference and Hackathon in San Francisco, California, where the Travel Rule Information Sharing Alliance (TRISA)-a private sector grouping consisting of some 50 blockchain firms and nonprofits-pledged to develop "key technical solutions that include a directory of validated VASPs as well as a Certificate Authority (CA) model to ensure the public key cryptography". 10 In summary, formal FATF guidance influenced the location where key standards of information communication between VASPs are developed: in the market rather than (international) state bureaucracies. The FAFT did not encourage either top-down or draconian enforcement of its legally non-binding standards. Rather its official guidance has recommended that key protocols and identity standards be persistently set by bottom-up market activities. The persistent stress on protocological control by market actors is in line with the wider spate of FATF activities and the organization's longstanding openness to private sector influence (Favarel-Garrigues et al., 2009;Amicelle, 2011;Liss and Sharman, 2015). Indeed, it has been argued that "the private sector-in particular the financial services industry and its high-level representatives-is becoming a "non-great power influencer in FATF" (Nance, 2018: 118). At the same time, former FATF personnel have joined efforts to develop "Travel Rule solutions", such as those offered by the Barbados-based Shyft Network (Allison, 2020b). What we identify as "protocol dialogue" was both present in the development of the 2019 guidance and its on-going implementation. Limits on the effective form of protocological control that the FATF helped steer in turn shape the intergovernmental organization's goal of preventing money laundering and the financing of terrorism.
FORM OVER FUNCTION: THE FRAUGHT EXERCISE OF PROTOCOLOGICAL CONTROL
In this section we highlight tensions between governance of blockchains and governance by blockchain. Specifically, we illustrate how the market-based form of protocological control the FATF has promoted fails to overcome the "pitfalls of private governance of identity" (Goanta, 2020; see more generally; Ronit and Schneider, 1999) and undermines the objectives reducing illicit finance in blockchain-based activities. While this argument can only be confirmed through analysis of events unfold over the coming years, we mobilize initial support for our position across two subsections. First, we point to the growing divide between standards of behavior in two spheres of blockchain-based activity, noting the development of dualling identity protocols. Second, we examine the 2020 FATF guidance on digital identities where we note a doubling down on the existing form of market-led protocological control. These trends, we contend, contribute to the fraught route towards global digital identity governance, one in which the reducing illicit activities appears increasingly unattainable.
Dualling Identity Protocols
Protocological control by market actors in blockchain activities has taken on a dual form undermining rather than addressing the goals of the FATF of reducing international illicit financial flows. Highly fragmented and split standards of behavior emerged for blockchain activities governed by market forces. On the one hand are protocols integrating the identity requirements of the Travel Rule. On the other hand, are protocols disregarding FATF recommendations and seeking to maintain the anonymity of their users. While both sets of protocols pledge to maintain user privacy, only the former incorporate blockchain-based activities into the identity requirements of the existing global AML/CFT regime. The latter protocols, meanwhile, push blockchain-based activities further out of the reach of formal remit of AML/CFT enforcement. This leads the very illicit activity the FATF is charged with reducing and stamping out to be progressively driven further into, rather than out of, the shadows of the "dark net". In elaborating this argument, we first detail the "dualling identity protocols" before situating their importance in the emergence of global digital identity governance. Protocological control is exercised by some market actors in ways that closely accord with FATF guidance. Here user identity information is collected and exchanged between and beyond VASPs in ways that closely resemble the more established forms of centralized governance that blockchains originally arose to bypass and counter. Centralized messaging platforms for "VASPs to share encrypted transmittal information with each other securely and privately" are provided by firms like Taiwanbased Sygna. 11 Other start-ups such as Coinfirm, Netki, Shyft and KYC Chain all provide similar "solutions" and are based on private or permissioned blockchain protocols with centralized gatekeepers akin to those of traditional digital systems. Even purportedly "decentralized" solutions offered by blockchain alliances and associations take-on degrees of centralized control. A prominent example is the Travel Rule Information Sharing Alliance, an association of more than 50 entities "focused on security and interoperability between the travel rule standards and protocols". 12 In December 2019, this alliance developed the Intervasp Messaging Standard 101 (IVMS-101) standard (Allison, 2020d), described as "a universal common language for communication of required originator and beneficiary information between VASP" 13 . In May 2020, InterVASP was launched as a technical standard providing a common language for communication between originator and beneficiary VASPs. 14 Such private sector-self governance closely emulates longstanding types of global associations of highly centralized financial exchanges like the World Federation of Exchanges (McKeen-Edwards, 2010).
Further steps towards "decentralized" peer-to-peer solutions being developed also contain persistent elements of centralization. For example, certificates holding transacting users' Personally Identifiable Information are maintained by centralized authorities. TransactID is overseen by California-based Netki, while the free open-source peer-to-peer VASP Address Confirmation Protocol is developed by California-based CipherTrace, 15 which sells the above type of "forensic tool" for the United States Department of Homeland Security. 16 These "blockchain forensics tools" developed to extend CFT/AML standards clearly recentralize in collaborating not only with traditional financial intermediaries, but also with governments (Nelson, 2020). The degree of such collaboration became apparent in a leaked 2019 report provided to the United States Financial Crimes Enforcement Network 17 and other financial regulators by the Cryptocurrency Indicators of Suspicion (CIOS) Working Group, a network of blockchain intelligence firms, exchanges and big banks that detailed dozens of illicit patterns of transactions on blockchain along with a "road map" for tackling them (del Castillo, 2019). Given these connections, it is not inconceivable that these firms enable the sharing of customer information not only between VASPs, but also with law enforcement and intelligence agencies, many of whom are their clients or prospective future clients. Sharing of such information would replicate the kinds of longstanding relationships between such agencies and banks (Amicelle, 2011), the latter of whom are also developing protocols such as Travel Rule Protocol developed between Dutch bank ING, British bank Standard Chartered and United States brokerage firm Fidelity (Allison, 2020e).
A parallel form of protocological control is exercised by market actors eschewing customer identification and information sharing requirements and pushing blockchain activity further from official regulatory remit. So-called "privacy protocols" Cashshuffle/ Cashfusion, 18 Enigma, 19 Lelantus, MimbleWimble, OpenBazaar and others being tested like Lelantus (Powers, 2020a) provide enhanced standards of anonymity that do not attempt to maintain compliance with either AML/CFT or the Travel Rule customer identification and information exchange requirements. While some protocols here aim for compliance with FATF recommendations and are incorporating blockchain-based activities into formal global AML/CFT governance, 20 most protocols push blockchain-based activities further out of the reach of formal remit of AML/CFT enforcement. The FATF, 2019 guidance has affected what we label the protocol selection of VASPs undertaking selective, ad hoc compliance with AML/CFT rules. For example, fit-to-cryptocurrency exchanges have delisted cryptocurrencies whose protocols facilitate high standards of anonymity. OKEx Korea confirmed in 2019 that it would halt trading of privacy-coins Monero (XMR), Dash (DASH), Zcash (ZEC), Horizen (ZEN) and Super Bitcoin (SBTC), citing grounds of conflicts with FATF guidelines (Suberg, 2019). Nonetheless, around a third of the top 120 exchanges themselves were found in a survey to have little in the way of AML/CFT controls themselves (Palmer, 2019).
Protocol selection leads to patterns of illicit activity the FATF is charged with reducing and stamping out to be driven deeper into the shadows of the "dark net." Blockchain intelligence firm CipherTrace, for example, reported in 2020 that some 90% of suspicious transactions in cryptocurrencies were being missed by financial institutions (Haig, 2020). The FATF itself lamented these trends in a September 2020 report on "Virtual Assets Red Flag Indicators of Money Laundering and Terrorist Financing." This report was based on more than one hundred 11 https://www.sygna.io/blog/types-of-fatf-r16-crypto-travel-rule-solutions/. 12 https://trisa.io/. 13 https://intervasp.org/. 14 https://trisa.io/. 15 https://ciphertrace.com/travel-rule-info-sharing-architecture/. 16 https://ciphertrace.com/ciphertrace-announces-worlds-first-monero-tracingcapabilities. 17 Which issued and began immediately enforcing a version of the Travel Rule for United States-based exchanges in 2019. 18 https://github.com/cashshuffle/spec/blob/master/CASHFUSION.md. 19 https://enigma.co/. 20 For example the "trust framework" released by Norwegian start-up Notabene in June 2020 reportedly provides a know-your-customer (KY) through "elements of decentralized identity management to link blockchain addresses to verified profiles" (Allison, 2020c). Switzerland-based OpenVASP, to which Notabene is a member, coordinates the development of a protocol based on Ethereum that "puts privacy of transferred data at the center of its design". It suggests the use of a peer-to-peer messaging system called Whisper which "employs so-called dark routing to obscure message content and sender and receiver details to observers, a bit like anonymous web browsing Tor" (Allison, 2020a). Here identity management is undertaken by a smart contract-based "blockchain public key directory for the VASP and an IBAN-like numbering format: the virtual asset account number" (ibid). case studies of what it noted are "indications of suspicious activities or possible attempts to evade law enforcement detection" (FATF, 2020b: 5). Meanwhile, FATF's 1 year progress survey of the status of Travel Rule extension to VASPs reported that despite "progress in the development of technological standards for use by different travel rule solutions," there was less implementation of the Travel Rule than other AML/CFT standards (FATF, 2020d: 11). The uneven outcome was blamed on a lack of "sufficient holistic technological solutions for global travel rule implementation that have been established and widely adopted." (FATF, 2020d: 12). Recognizing the "decentralisation ethos that underpins virtual assets, there appears to be a general desire for multiple potential solutions, rather than one centralised travel rule solution." (ibid.). The FATF stressed how the "usage of common standards will assist in ensuring different solutions are interoperable" (ibid.) and called upon "the VASP sector to redouble its efforts towards the swift development of holistic technological solutions encompassing all aspects of the travel rule" driving technology convergence (ibid.). Given the widely reported "struggle to implement" the non-binding "rule" around the world, a 1-year "sunrise period" review extension was granted to VASPs (Bryanov, 2020). By mid-2020, it was reported that authorities in 35 of 54 jurisdictions had implemented Travel Rule standards into domestic legislation and that another 19 had not yet done so (FATF, 2020d). The FATF doubled down on the roles of market actors and emphasized the need for "quick development of technology solutions" (FATF, 2020c: 12). Governance of blockchain by the FATF shaped the location of protocological control in ways that allow for the persistent obscuring of identities in blockchain-based activities such as quasi-anonymous payments. Wasabi Wallet, for example, was launched in 2018 to scramble transactions and is based on "secret contracts." In contrast to the smart contracts in Ethereum, secret contracts have nodes capable of calculating data without ever "seeing" them (EC3 Cyber Intelligence Team, 2020). The Secret Network launched a bridge between its privacy-focused blockchain and Ethereum in late 2020 (Powers, 2020b). Europol cited such "privacy-enhanced wallet services" as a "top threat" in its 2020 Internet Organized Crime report. 21 Meanwhile, so-called "decentralized exchanges" (DEXs), developed largely on Ethereum protocols, expanded as increasingly important forums for users to meet and build some semblance of trust in arriving at peer-to-peer agreements to directly exchange cryptocurrency without the use of a formal intermediary or verifying of identities. While still representing a small percentage of overall cryptocurrency trading at the time of writing (around one per cent), the aggregate monthly volumes on DEXs hit records in 2020. British defense and security think-tank RUSI warned that DEXs "have the potential to weaken the role of centralized VASPs and so blunt the effect of governmental regulation" (Moiseienko and Izenman, 2019: viii). The largest DEX by value exchanged, Uniswap, had digital tokens equivalent to more than $1 billion trade in September 2020, yet neither listing rules nor KYC verification procedures (Madeira, 2020). DEXs thus stood at the same crossroad of dualling identity standards as the FATF (2021) proposed in draft guidance published in March 2021to consider them "high-risk" VASPs if they did not implement the Travel Rule standards. The draft guidance also highlighted a number of new "elements of risk," including "[e]xposure to Internet Protocol (IP) anonymizers such as The Onion Router (TOR), the Invisible Internet Project (I2P) and other darknets, capable of further obfuscating transactions or activities and inhibiting a VASP's ability to know its customers and implement effective AML/CFT measures" (FATF, 2021: 15).
In sum, the key risk is that "harsher," "hands-on," state-led restrictions on blockchain activities have the potential to merely shift rather than reduce illicit activities has emerged in part due to the FATF's shaping of private-sector led exercise of protocological control. While risks pertain to any and all forms of governance, the risks of bottom-up governance strategies are well known now more than a half decade into the FATF's governance of blockchain. Calls began to emanate in 2020 for "developing entirely new approaches to manage money laundering and terrorist financing risks" by key industry players (Sian Jones quoted in Allison, 2020f). Without tabling a completely new approach, the FATF (2021) did nevertheless propose some substantial changes in draft guidance published in 2021 that suggested self-regulatory bodies were insufficient for VASP supervision and only "competent authorities" (FATF, 2021: 5) could act as supervisors. The draft guidance proposed in March 2021 also suggested new Principles of Information-Sharing and Co-operation Amongst VASP Supervisor that "[g] iven the pseudonymous, fast-paced, cross-border nature of VAs [Virtual Assets], international co-operation is all the more critical between VASP supervisors." It called for a more "proactive" roles for supervisory authorities rather than self-regulatory industry organizations (FATF, 2021: 94). Even though the six principles the FATF outlined were general and the proposed guidance emphasised in bold that they are non-binding, the draft guidance proposed in March 2021 marked a shift in the FATF's emphasis on closer international cooperation between public supervisors. This shift was emphasized by its guidance, issued just a year earlier, for digital identity (DID) systems. A May 2020 report on how "effective authentication of customer identity for authorizing account access" can enhance "certain elements of customer due diligence (CDD) under FATF Recommendation 10" (FATF, 2020a: 5-7), had still largely called for market-based exercise of protocological control. It recommended member states to leave standard setting to non-state actors, even when using the standards for their own government backed DIDs. 22 21 https://www.europol.europa.eu/activities-services/main-reports/internet-organisedcrime-threat-assessment-iocta-2020. 22 Government authorities should be "supporting the development and implementation of reliable, independent digital ID systems by auditing and certifying them against transparent digital ID assurance frameworks and technical standards, or by approving expert bodies to perform these functions. Where authorities do not audit or provide certification for IDSPs themselves, they are encouraged to support assurance testing and certification by appropriate expert bodies so that trustworthy certification is available in the jurisdiction" (FATF, 2019: 6). Government authorities were also recommended to remain "flexible" and merely monitor "the rapid evolution of digital ID technology" in order to "help promote responsible innovation and future-proof the regulatory requirements," as well as to support "the development and implementation of reliable, independent digital ID system" along with "assurance testing and certification by appropriate expert bodies." 23 The thrust of the May 2020 guidance persistently focused on ensuring "multi-stakeholder" solutions through constructs such as regulatory "sandboxes" where government authorities monitor private sector trials rather than lead them in any meaningful way. The March 2021 proposed guidance suggesting greater public supervisory cooperation marked a departure from the longstanding emphasis on market-based governance. Future research will have to determine whether the former proposals were mere blips in the longer trend emphasizing the latter.
CONCLUSION: PERSISTENT FORM AND UNACHIEVABLE OUTCOMES?
How can we understand the progressive, piecemeal emergence of global digital identity governance? This paper advanced a twopronged argument that highlighted the need to consider interactions between governance of and by blockchains. First, formal governance by the FATF has shaped the "financial route" to global digital identities. Building on its governance of financial flows, the FATF has extended its risk-based approach to digital identity. Second, this model of leaving the reins of governance to blockchain developers and start-up firms is fraught with problems. The persistent encouragement of a reliance on market actors in developing blockchain protocols has led to the development of what we identified as dualling identity protocols, or the situation in which some activities are underpinned by standards of activity adhering to AML/CFT rules while others are not at all in accordance with such standards. The persistence of the latter, we argued, undermines FATF goals of reducing rather than just shifting illicit international financial flows. Tensions thus exist and persist between governance by and of blockchains. Blockchain studies, and emerging literatures on digital identity governance, need to consider the interplay between both forms of governance and how they interact in (un)predictable manners in order to come to a clearer understanding of the roots and evolving forms of digital identity governance.
Future studies should maintain a critical focus on the activities of the FATF and other international organizations, particularly those that have become increasingly vocal about using blockchain to "fight fire with fire," (Lagarde in Wilmoth, 2018) as the former IMF Managing Director Christine Lagarde put it in a 2018 speech. The shaping of protocological control by formal standard-setting organizations is essential to investigate in relation to informal modes of control in developing more nuanced understandings of global digital identity governance. The 2020 "Global Standards Mapping Initiative" of the World Economic Forum and Global Blockchain Business Council, for instance, flagged digital identity as one of the five main areas where overlapping standards have led to gaps in other places (World Economic Forum, 2020). The formal activities of IOs like the International Standards Organization (ISO) require much further attention going forward, especially regarding its various blockchain working groups. 24 Further scholarship would identify whether and how these IOs influence the location and forms of protocological control. They should provide normative assessments of the shifting forms, impacts and limits such forms of control have on actually stemming illicit activity, as well as on socioeconomic development more widely. Finally, the extent to which the forms of protocological control shaped by the FATF and other global north rich country clubs can also be effectively contested and challenged by actors in the global south deserves further investigation. In sum, there are promising and pressing research pathways for future studies to explore at the intersection of governance by and of blockchains.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
AUTHOR CONTRIBUTIONS
All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication. | 9,821 | 2021-05-20T00:00:00.000 | [
"Computer Science",
"Political Science",
"Law"
] |
CNTNAP2 variants affect early language development in the general population
Early language development is known to be under genetic influence, but the genes affecting normal variation in the general population remain largely elusive. Recent studies of disorder reported that variants of the CNTNAP2 gene are associated both with language deficits in specific language impairment (SLI) and with language delays in autism. We tested the hypothesis that these CNTNAP2 variants affect communicative behavior, measured at 2 years of age in a large epidemiological sample, the Western Australian Pregnancy Cohort (Raine) Study. Singlepoint analyses of 1149 children (606 males and 543 females) revealed patterns of association which were strikingly reminiscent of those observed in previous investigations of impaired language, centered on the same genetic markers and with a consistent direction of effect (rs2710102, P = 0.0239; rs759178, P = 0.0248). On the basis of these findings, we performed analyses of four-marker haplotypes of rs2710102–rs759178–rs17236239–rs2538976 and identified significant association (haplotype TTAA, P = 0.049; haplotype GCAG, P = .0014). Our study suggests that common variants in the exon 13–15 region of CNTNAP2 influence early language acquisition, as assessed at age 2, in the general population. We propose that these CNTNAP2 variants increase susceptibility to SLI or autism when they occur together with other risk factors.
Although nearly all children learn to talk, there is substantial variation in the timing of language development. Around 10% of children can talk in sentences at 18 months of age, whereas the slowest 10% produce at most a handful of single words at this age (Neligan & Prudham 1969). Many late-talkers are actually 'late bloomers', catching up with their peers by the time they are 3 or 4 years old (Thal & Katich 1997). Nevertheless, in some children late talking is the first indication of persistent language impairment (Haynes & Naidoo 1991) and in a minority of these it may be a symptom of autistic disorder (Hagberg et al. 2010).
It is often assumed that the age at which a child develops language is largely dependent on the language input he or she receives. However, a recent epidemiological study found that family history of delayed language development predicted late talking in 24-month-olds, while other factors, such as maternal education, birth risks and maternal depression, did not have significant influence (Zubrick et al. 2007). Data from twin studies indicate that inherited factors make substantial contributions to early language development (Dale et al. 1998) and affect levels of performance on components of language in the normal range of abilities (Kovas et al. 2005). Still, at this point very little is known regarding the specific genetic variants that are associated with language development in toddlers from the general population. Here, we address this issue through analyses of early communicative behavior in a large epidemiological sample.
Our investigations were tightly constrained by prior evidence from molecular studies of neurodevelopmental disorders, which have converged on CNTNAP2 as a gene with relevance to language learning. One notable study reported associations between markers in CNTNAP2 and parental report of 'age at first word' in probands with autism (Alarcón et al. 2008). Independent analyses of children with specific language impairment (SLI), but not autism, identified association of CNTNAP2 variants with reduced performance on quantitative indices of language ability (Vernes et al. 2008). Intriguingly, these separate investigations of distinct language-related disorders (Whitehouse et al. 2007) highlighted the same markers and alleles within CNTNAP2 as risk factors. CNTNAP2 encodes a member of the neurexin superfamily -neuronal transmembrane proteins involved in cell adhesion -and shows enriched expression in languagerelated circuits of the brain (Abrahams et al. 2007). Moreover, this gene is directly regulated by FOXP2, a transcription factor mutated in rare monogenic forms of speech and language disorder (Fisher & Scharff 2009).
Thus, in the current investigation, we carried out a hypothesis-driven study of links between common CNTNAP2 variants and early language proficiency, assessed at 24 months of age, in an epidemiological sample of over a thousand children (the Raine sample). We specifically targeted the same single-nucleotide polymorphisms (SNPs) across the CNTNAP2 gene as those previously investigated in SLI by Vernes et al. (2008). Our hypothesis was that the particular CNTNAP2 markers implicated in language impairments of SLI and delayed language in autism would extend their influence beyond disorder, to show association with early language acquisition in the general population.
Participants
The Western Australian Pregnancy Cohort (Raine) Study is a longitudinal investigation of 2900 pregnant women and their offspring consecutively recruited from maternity units between 1989 and 1991 (Newnham et al. 1993). The inclusion criteria were (1) English language skills sufficient to understand the study demands, (2) an expectation to deliver at King Edward Memorial Hospital (KEMH) and (3) an intention to remain in Western Australia to enable future follow-up of their child. Ninety percent of eligible women agreed to participate in the study.
From the original cohort, 2868 children have been followed over two decades. Participant recruitment and all follow-ups of their families were approved by the Human Ethics Committee at King Edward Memorial Hospital and/or Princess Margaret Hospital for Children in Perth. The Raine sample is representative of the larger Australian population (88% Caucasian); only those children with both biological parents of White European origin were included in the current analyses. DNA and phenotypic data were available for 1149 children (606 males and 543 females).
Phenotypic measure
Our study specifically concerned early indicators of language acquisition in toddlers, where direct assessment of ability can be challenging. For phenotyping at such young ages, parental report has been shown to provide a robust alternative to direct testing (Johnson et al. 2008). The Communication subscale of the Infant Monitoring Questionnaire (IMQ) (Bricker & Squires 1989) was administered when the child was 2 years old. This parent-completed checklist contains seven items assessing early communicative behavior, such as protoimperative actions (e.g. looking or pointing at an item to request it), the following of simple commands (e.g. 'come here', 'sit down'), and the use of two-or three-word strings (e.g. 'go, car', 'shut door'). Parents indicate whether their child shows this behavior always (2 points), sometimes (1 point) or never (zero points), yielding an overall score ranging from 0 to 14. The validity and reliability of the IMQ range from 0.85 to 0.9 (Bricker et al. 1988). Questionnaires with one missing item (n = 155) were prorated to yield a score out of 14. Scores were transformed from centile equivalents to z-scores to give a normally distributed variable.
Genetic data
For the Raine study, DNA samples have been collected using standardized procedures at 14 or 16 years of age, followed by genotyping on an Illumina 660 Quad Array (San Diego, CA, USA). SNPs that did not meet quality control criteria (call rate ≥95%; minor allele frequency >0.05; Hardy-Weinberg disequilibrium P value >0.000001) were discarded. It is important to emphasize that, although genomewide SNP data have been collected for this sample, we did not perform a hypothesis-free genome-wide association scan for our measure of interest. Instead, this study was a tightly constrained hypothesis-driven candidate gene approach, based on prior literature, which considered a set of 30 SNPs from the CNTNAP2 gene [matching those from Vernes et al. (2008)]. This led us to a focused analysis of the rs2710102-rs759178-rs17236239-rs2538976 multimarker combination. No other markers from elsewhere in the genome were assessed for association with early communicative behavior in this sample.
Data analysis
Our panel of 30 SNPs matching those used to study SLI in previous CNTNAP2 analyses (Vernes et al. 2008) constituted the majority of the 38 SNPs assessed in the prior study. Each biallelic SNP was first tested for association with the quantitative measure of the communication phenotype using an allelic test of association within R (R Development Core Team 2009). On the basis of the previous findings by Vernes et al. (2008), our model assumed that the risk allele of the SNP had a dominant mode of action. Consideration of the singlepoint SNP findings, and their convergence with earlier studies, led us to test the four-marker haplotypes of rs2710102-rs759178-rs17236239-rs2538976, analyzing the three common alleles using R. Our analysis of each such multimarker allele involved two factors: (1) comparison between harboring two copies and one copy of the haplotype and (2) comparison between harboring two copies and no copies of the haplotype -allowing us to separately assess the modes of action of each of the three alleles. To minimize multiple testing, we did not analyze any further marker configurations. Linkage disequilibrium (LD) among CNTNAP2 SNPs was determined with Haploview version 4.2 (http://www.broadinstitute.org/haploview/haploview) (Barrett et al. 2005). Haplotypes were inferred using SimHap version 1.0.2, and the most-likely haplotypes of each individual used as inputs for the R analyses described above.
Principal components analysis of genome-wide SNP data with Eigenstrat (Price et al. 2006) has revealed evidence of population stratification in the Raine sample, and so the first two principal components were included as cofactors in all analyses. This procedure has been used previously in genetic analyses of the Raine cohort (Paracchini et al. 2011).
Results
We assessed the same panel of markers across CNTNAP2 as Vernes et al. (2008), but focusing instead on a quantitative measure of early language in a general population cohort. This panel included most of the key SNPs that were significantly associated in that study, as well as the flanking markers from elsewhere in the gene that had not shown association. Our hypothesis was that a similarly localized subset of SNPs within the panel would show evidence of association in our sample, against a background of nonsignificant results. The pattern of single SNP associations in our general population sample (Table 1) was strikingly reminiscent of that observed by Vernes et al. (2008) in their SLI families, highlighting an almost identical subset of markers, located in the exon 13-15 region of CNTNAP2. Two neighboring SNPs -rs2710102 and rs759178 -showed nominal significance (P = 0.0239 and 0.0248) and another three markers in their vicinity -rs17236239, rs2538976 and rs2710117 -displayed suggestive trends (P values between 0.05 and 0.085). These markers corresponded to those showing strongest associations in the Vernes et al. (2008) study of SLI and overlapped with the most significant findings from the Alarcón et al. (2008) investigation of language delay in autistic probands. The effects observed were consistently in the same direction as prior studies; the alleles that correlated with reduced language performance in the Raine sample (Table 2) were the same as those identified as putative susceptibility alleles in studies of disorder [c.f. Table S3 in Vernes et al. (2008) and Table S1 in Alarcón et al. (2008)]. For example, risk alleles in SLI and autism were C for marker rs2710102 (C/T polymorphism) and G for marker rs759178 (G/T polymorphism); these same alleles were associated with lower early language scores in our general population sample (Table 2). In the main cluster of associated SNPs -rs2710102, rs759178, rs17236239, rs2538976 -the markers were in strong LD, with D values of 1 for all pairwise comparisons ( Figure S1, Supporting information). Notably, these four SNPs were central to a nine-marker risk haplotype previously studied by Vernes et al. (2008). We therefore constructed multimarker haplotypes using these four neighboring SNPs and identified three common combinations (TTAA, CGGG and CGAG), representing 98% of individuals (Table 3). As expected from the direction of effects observed in the singlepoint results (Table 2) and consistent with prior published results (Vernes et al. 2008), the TTAA multimarker allele was associated with higher scores on the measure of early language, whereas the CGGG and CGAG alleles were associated with reduced scores. TTAA showed nominal significance (P = 0.0488) and CGGG displayed a suggestive trend (P = 0.0627), but the strongest association was for CGAG (P = 0.0014); this remains significant after accounting for the number of tests that we performed in the study (30 singlepoint tests and 3 haplotypic analyses). Children carrying two copies of this haplotype obtained substantially lower scores (mean = −0.355, SE = 0.169) than those with one copy (mean 0.313, SE = 0.055) or no copies (mean = 0.223, SE = 0.033).
Discussion
Our results suggest that variants in the exon 13-15 region of CNTNAP2 previously associated with deficits in SLI (Vernes et al. 2008) and delayed language in autism (Alarcón et al. 0.15 0.0014 2 * Alleles are given with respect to the forward strand of chromosome 7. † Frequency of haplotype within the Raine sample. ‡ Analysis in R assessed two factors: 1 = comparison between harboring two copies and one copy of the haplotype; 2 = comparison between harboring two copies and no copies of the haplotype. This column indicates which factor yielded the most significant result, as reported in the preceding column. 2008; Poot et al. 2010) also affect the early stages of language development in children from the general population. This was a targeted hypothesis-driven study of a single gene, focusing on specific markers that have been strongly implicated in multiple prior reports of language-related disorder, rather than a genome-wide search for new variants.
The consistencies in findings across multiple investigations are noteworthy given several key differences in the natures of these studies. Alarcón et al. (2008) studied probands with autism in an American sample, employing a parental report of language delay. Vernes et al. (2008) assessed a UK sample, examined language test scores in older children and focused on families selected for SLI. In this study, we investigated an Australian sample, used a parental report measure assessing language development at age 2, and tested for association across the normal range. Despite the obvious differences in sample ascertainment and phenotypic characterization, there was agreement not only regarding the pattern of SNPs that were associated but also in the direction of allelic effects.
In our study, we constructed a single set of haplotypes using four neighboring markers in high LD which, based on the singlepoint pattern of results, appeared to form a core site of association. Although we did not genotype every associated marker from the Vernes et al. (2008) study, these four markers were central to the nine-marker haplotypes that they previously assessed in SLI. Thus, our haplotypic alleles would be expected to capture much of the relevant variation from the earlier investigation. Indeed, haplotypic analyses from the two studies are generally concordant -both investigations found that the TTAA multimarker allele of rs2710102-rs759178-rs17236239-rs2538976 is associated with higher scores, whereas the alternative CGGG/CGAG alleles are associated with reduced performance (c.f. Table S4 of Vernes et al. 2008). However, although the CGGG allele showed the strongest association in the SLI study, our analyses of the Raine sample identified much more significant effects for the rare CGAG combination, which here had particularly dramatic effects on language scores. These differences in haplotypic background could relate to the distinct population history of the samples. Regardless, the data suggest that in the vicinity of rs2710102-rs759178-rs17236239-rs2538976 there lie specific functional risk variants (as yet unidentified) with particular relevance to early language acquisition. Of note, the CNTNAP2 gene locus is one of the largest in the genome and could potentially contain multiple additional sites with functional relevance to neurodevelopmental phenotypes, to be clarified in future with high-density SNP screening and sequence-based strategies.
A methodological conclusion from our study is that a simple parental questionnaire focused on early language development can provide valuable phenotypic information for molecular genetic analyses, which may be particularly pertinent given the difficulties in directly assessing a child's performance in the earliest years of life. This is consistent with the core findings of Alarcón et al. (2008), who reported that rs2710102 and neighboring variants were associated with just a single item from the Autism Diagnostic Inventory -Revised (Lord et al. 1994),'age at first word', in autistic probands. In addition, in a recent study of multiple traits contributing to the autistic spectrum, Steer et al. (2010) reported a nominal association between rs17236239 and a factor they termed 'language acquisition', which primarily loaded on parental report measures of early language development. Our conclusion is also in line with the findings of Johnson et al. (2008), who showed good agreement between parent report and direct assessment of children's abilities at 2 years of age.
In terms of theoretical implications, it is clear that these common CNTNAP2 variants are not sufficient by themselves to account for language and communication disorders in children. This conclusion is in line with the current consensus that both SLI and autism are complex disorders resulting from the combined effect of multiple influences (Geschwind 2008). We hypothesize that CNTNAP2 variants which usually yield only a small boost or lag in language acquisition will have more marked consequences when they occur in concert with other genetic or environmental risk factors. Bishop (2010) suggests that autism may result from epistatic rather than additive interactions between genes. From this perspective, it would be of considerable interest to see whether there are additive or interactive effects of CNTNAP2 with genetic variants affecting social cognition, such as a recently described locus on chromosome 5p14 (St Pourcain et al. 2010).
Supporting Information
Additional Supporting Information may be found in the online version of this article: Figure S1: Location and linkage disequilibrium of 30 SNPs on the CNTNAP2 gene. The top of the figure provides an indication of the genomic location of each SNP on chromosome 7q. In total, 30 SNPs were analyzed across a 2000-kb interval. Black lines indicate the position of each SNP within CNTNAP2. Inter-SNP linkage disequilibrium was generated with Haploview. The upper panel reports D values within cells. Empty red cells represent full LD and empty blue cells represent lack of LD. The lower panel reports r 2 values within cells. Empty white cells represent lack of LD and darker shading represent increasingly stronger LD. Haploview identified five LD blocks (black solid lines) using the confidence interval method (Gabriel et al. 2002).
As a service to our authors and readers, this journal provides supporting information supplied by the authors. Such materials are peer-reviewed and may be re-organized for online delivery, but are not copy-edited or typeset. Technical support issues arising from supporting information (other than missing files) should be addressed to the authors. | 4,216.8 | 2011-06-01T00:00:00.000 | [
"Medicine",
"Linguistics"
] |
Financial Liberalisation, Political Stability, and Economic Determinants of Real Economic Growth in Kenya
This study aimed to analyse financial liberalisation, political stability, and economic determinants of Kenya’s real economic growth using time series data over the period of 1970–2016. The authors specified quadratic and interactive models to be estimated by employing a quantile regression analysis. The traditional and quantile unit root test was used in testing the stationarity issue. The co-integration findings indicated that the capital account openness and financial development impede on real economic growth; and the political stability also had potential influence on the real economic growth of Kenya. Interestingly, there is a nonlinear U-shape link between financial development and real economic growth that undermined the real economic growth at its onset, but as it advanced, it enhanced the growth of the country in the long run. The policymakers should ensure that the capital account is more liberalised so that it will continue to stimulate the financial development. In the same way, the liberalisation of the domestic financial market should be taken in earnest to overcome the negative effects of financial repression in totality, while maintaining the stable political atmosphere.
Introduction
It is theoretically and empirically established that financial development has a significant role in the growth of an economy. Bhattacharyya [1] made efforts to show the various ways in which financial market development enhances economic growth. However, they emphasise that an advanced domestic financial system is essential for the substantial growth of foreign capital and trade that enhanced economic growth. To avoid the undesirable impact on the economy, developing countries did move to improve their financial system in order to meet the required standards for integration into the global economy for the flow of foreign direct investment (FDI) and equity. The benefits of financial liberalisation on output growth in the developing economies are in three dimensions, according to Ibrahim [2]. First, the enhancement of specialisation in production, opportunities to share risks, and hedge portfolio of investors from shocks that can hinder their participation in high return projects. Second, financial liberalisation enhances the global flows of capital resources to fund investment in less developed countries, given their marginal productivity. Third, it paves a way for the penetration of more foreign banks that are efficient to diversify domestic opportunities, which reduces risk and smooths consumption.
This study attempts to explore the linkage between capital account openness, financial development, trade openness, government expenditure, and political stability; as well as its impact on real Gross Domestic Product (GDP) per capita using the availability of the time series data for 1970 until 2016 for the economy of Kenya. The country has made a deliberate and steady important financial market reform program as far back as the early 1980s [3]. This is to pave their way into the global capital market to improve trade and investment flows. The reforms include interest rate liberalisation, bank denationalisation, liquidation, restructuring, privatisation, etc. Similarly, in 2008, the Kenya Vision 2030 aimed to achieve a 10% GDP growth rate per annum over 25 years, which was the fashion to improve the domestic financial market [4]. These changes are hoped to improve and liberalise the domestic financial system and attract more foreign capital flows. The available statistical data shows that FDI inflows into the Kenya economy has increased recently, and there is an emergence of portfolio equity. Capital flows have grown over the years at US 9578 million in 1991 to US 40,029 million in 2014 [5]. Indeed, this is interesting for the economy as there is a lag between savings and the investment rate. Table 1 shows that the GDP per capita growth in annual % declined from a high point of 3.265 to −0.822 between the period 1970-1979 and 1990-1999, respectively, then started increasing in the period 200-2009 and 2010-2016 from 0.774 to 3.271, respectively. This could be due to the sustained increase in domestic credit as a % of GDP and FDI, which indicated to be at a high point, especially in the period 2010-2016. However, for gross domestic savings as a % of GDP, it depicts a shortfall to the gross capital formation as a % of GDP in all the periods, indicating that savings are augmented by FDI and other sources in the financing investment in Kenya. Total trade as a % GDP did not reveal an increasing trend, it swung, and remained at a relatively high level, meaning it could have an impact on the economy.
Literature Review
In the financial liberalisation policies, trade openness should precede capital account liberalisation [6]. When the economy is fully liberalised, economic growth is feasible because of the bridge created in the savings and the foreign exchange gap. Furthermore, the liberalisation of capital account allows for an international portfolio diversification, which domestic market agents enjoy in diversifying country-specific risks that cannot be diversified under capital account restriction. Omoruyi [7] has viewed capital account openness as the process of removing restrictions from international transactions related to the movement of capital. With financial liberalisation, the level of competition that causes convergence prices increases, as well as an increase in the firm's output and the allocation of capital to a lucrative investment [8]. However, despite the theoretical claims of the impact of financial liberalisation, empirically, there is no firm conclusion. The lack of that claim in developing countries is because of an 'allocative puzzle' [9].
Gehringer [10] examined the impact of financial openness on economic growth on two channels: The manufacturing and service industry in eight European countries. The results indicated that the impact on manufacturing is more than the service industry on economic growth. Bekaert et al. [11] contributed that the capital account openness spurs economic growth, but the impact relies on the quality of the institution. In another perspective, Ibrahim [2] stated that although the capital account openness impacts economic growth, it depends on the level of economic development. This, in a nutshell, indicates that the more the economy develops, the better the impact of capital account openness on economic growth. However, Hye and Wizarat [12] do not find a significant impact between capital account openness and economic growth. This is a similar finding of Ahmed and Mmolainyane [13] who reported that the capital account openness impedes economic growth, but it is positive and significantly correlated with financial development. This means that the capital account openness can enhance economic growth through financial development. Other researchers, such as Berthelemy and Demurger [14], revealed that the capital account openness promotes the transfer of technology and managerial skills, which increase production output.
On the contrary, other studies mainly were not able to find a definite relationship between financial liberalisation and growth in developing economies. They indicated that the growth side of the lagging economies could not be identified significantly. Gourinchas and Jeanne [9], Alfaro et al. [15], and Prasad et al. [16] studied the impact of financial openness fields. The results of those studies disagreed with the assertion that it stabilises the fluctuations in consumption in developing economies. In addition, Bussiere and Fratzscher [17] do not find a link connecting capital account openness to economic growth. However, the findings of that study concluded that the capital account openness merely causes excess borrowing in the short term that creates booming and, in the midterm, brings recession. However, the importance of capital account openness in developing economies cannot be ignored by Rajan and Zingales [18] on the simultaneously opened hypothesis of capital and trade effect on financial development.
Meanwhile, Onanuga [19] demonstrated that the simultaneous opening of the capital account and the trade spurred the sophistication of the domestic financial market in Nigeria. The results of that study argue that if either capital account is opened and trade is closed, the impact will be detrimental to economic growth. Mubi [20] found that the interaction of capital inflow with trade openness spurs economic growth through financial development using the ARDL estimates in Nigeria. However, Gossel and Biekpe [21] found that the capital inflow hindered economic growth in South Africa over the period 1995 to 2011. This is evidence that more studies are needed in African countries, especially Kenya, to ascertain the effect of capital openness on economic growth as there is a dearth of empirical studies on that.
On the other hand, McKinnon [22] and Shaw [23] postulated that countries that adopted financial liberalisation policies might become more equipped and matured as savings and investment will increase to overcome future growth challenges. This is an indication of experiencing the U-shape curve relationship between financial development and growth. Shen et al. [24], however, found the contrary, as there was an inverted U-shape between the banking sector development and economic growth. This was supported by Yang and Liu [25] and Ibrahim and Alagidede [26] that found a U-shape relationship between financial development and economic growth. Similarly, Adeniyi and Oyinlola [27] contributed that financial development negatively impacts economic growth, which reverses itself accounting for threshold-type effects, but the impact is infinitesimal.
A recent study by Ashraf [28] suggested that the higher trade openness is vital for financial development as it increases the volume and reduces the cost and risk of bank credit because of the increase in demand for finance, the liberalisation in the domestic financial sector reforms, and the diversification of lending opportunities created through the higher trade openness. Notwithstanding, Redmond and Nasir [29] found that trade openness and financial development had significant negative impacts of economic development. Atil et al. and Mercado [30] opined that the abundance of natural resources and financial development are positively correlated, and economic growth stimulates financial development, but on the contrary, economic globalisation is a bane of financial development. This economic globalisation can be the causal factor that the financial development undermines economic growth as it causes inflows of capital in some economies that do not stay long to impact the economies, which Gourinchas and Jeanne [9] argued as the detrimental factor to the economy.
Political institutions are critical to the growth of an economy in a financial reform policy, and this is because it provides the conducive environment for financial development to promote growth [31]. In the same vein, Bhattacharyya [32] postulated that the democratic institution is an incremental process, not a one-off phenomenon. Countries with a more democratic system have a more market-based financial system. The quality of political institutions when setting at the optimal threshold level minimised political risk, and by that, economic growth can be achieved through financial development [33]. Thus, the economy gains from financial development when the political system is above a threshold, and it is relatively stable. Nevertheless, if there is political instability, it affects the inflows and sustainability of foreign investors, as any shock and sudden changes in the political arena mar all their investment outlets. This causes 'stops' of an extreme state of lower capital inflows in an economy, which is the opposite of 'surges' [34].
The remainder of this paper is structured. Thus, Section 2 is the literature review and Section 3 captures the data and model specification. Section 4 presents the empirical results, while Section 5 gives the concluding remarks. Table 2 presents the variable description and the sources of data utilized in the study.
FD
Represents the financial development index measured using the Principle Component Analysis (PCA) that includes the broad money, domestic credit to the private sector, domestic credit to the private sector by banks, and domestic credit provided by the financial sector (all as a % of GDP).
World Development Indicators (WDI)
TOP It is indicating the total countries volume of export and import as measured in US currency (% of GDP).
World Development Indicators (WDI)
GEX Represents the total government expenditure on final goods and services, excluding military expenditure (% of GDP).
World Development Indicators (WDI)
PST Represents the stability of political institutions of the country measured using a specific formulation.
Centre for Systemic Peace In Figure 1, we show the estimated series of plots within the estimation period of this study. We begin our analysis from the 1970s, and some of the series appears with a robust structural break in the middle of the 1990s which may well be due to changes in Kenya's macroeconomics fundamentals, such as the FDI, financial development, and inflation. Hence, to avoid bias estimation, this plot urgently needs a structural break analysis to identify the structural break effects of each series which contains useful information for policymakers. The relationship between the variables was tested using the following equations: where all the variables stated in Table 2 are gathered from [5,[35][36][37], while FD 2 represents the squared value of financial development. The variable RGDP per capita is the proxy of economic growth as used by [27,31,37,38] in their studies shown clearly in Equation (1). While Equation (3) concentrates on the interactive effects between the capital account and trade openness with the financial development series: Notably, most of the time series data always faced a unit root problem, such as the random walk, cycle, and trend effects, as shown in Figure 1. This causes spurious regression estimation among the series in the study. Thus, the unit root issue needed to be addressed in the series. Prior to the empirical analysis, all the series are transformed into a natural logarithm. The unit root tests employed are the traditional unit root tests, such as the [39,40] as a traditional unit root and Perron's test for unknown break determination [41].
Energies 2020, 13, x FOR PEER REVIEW 5 of 16 where all the variables stated in Table 2 are gathered from [5,[35][36][37], while FD 2 represents the squared value of financial development. The variable RGDP per capita is the proxy of economic growth as used by [27,31,37,38] in their studies shown clearly in Equation (1). While Equation (3) concentrates on the interactive effects between the capital account and trade openness with the financial development series: Notably, most of the time series data always faced a unit root problem, such as the random walk, cycle, and trend effects, as shown in Figure 1. This causes spurious regression estimation among the series in the study. Thus, the unit root issue needed to be addressed in the series. Prior to the empirical analysis, all the series are transformed into a natural logarithm. The unit root tests employed are the traditional unit root tests, such as the [39,40] as a traditional unit root and Perron's test for unknown break determination [41]. To capture the long-run co-integration, we used the long-run equation by emphasising the Dynamic Ordinary Least Square (DOLS) and Fully Modified Ordinary Least Square (FMOLS) estimations proposed by the [42,43] test, respectively. The DOLS estimation will address the asymptotic bias of the estimation series, which contains the normal OLS estimation by including the leads and the lags of the first difference of the series. After confirming the long-run co-integration relationship, a robust test of combined co-integration developed by Bayer and Hanck [44] was employed. This is better than the Johansen [45] on the Johansen Maximum-Eigenvalue test that allows more than one co-integration relationship. An improved joint test-statistics by Bayer and Hanck [44] gives room for the combinations of others to provide conclusive findings in a multidimensional manner. Thus, it was applied in the study. The combination of the individually computed p-value following Fisher's formula is as follows: where the p sign indicates the probability value of each individual test of co-integration through the Bayer and Hanck's combined co-integration estimation. The criteria for the rejection of the null hypothesis of co-integration is for the critical values to be lower than the Fisher's statistics. This type of co-integration test has an advantage, according to Polat et al. [46], Rafindadi [47], and Bekun [48] who employed it in some African studies. This technique is the combination of the Engle and Granger [49] and the Johansen [50], the error correction F-test of Boswijk [51], and the error correction t-test of Banerjee et al. [52]. Moreover, the BH relies on the order of one level of integration of the series, and one-off determination of the stated techniques. The EG-JOH in Equation (4) represents the combination of the Engle and Granger and Johansen probability value. While the EG-JOH-BO-BDM in Equation (5) is the combination of the Engle and Granger, Johansen, Boswijk, and the Banerjee probability value of those co-integration tests. The Koenker and Xiao [53] quantile unit root test stresses on the different quantile effects not minding that the deterministic trend effect was employed in this study. This was because of its accuracy, flexibility, and the reduction of estimation uncertainty. Furthermore, Bolat et al. [54] stressed the advantages of the quantile unit root test over the traditional unit root test when the shock displays a heavy-tailed behaviour. This technique explores the speed of mean reversion in a series under different magnitudes and signs of shock. To put it succinctly, it captures possibly different mean-reverting patterns by explicitly testing for a unit root at different quantiles. The quantile unit root test depends on the conditional quantile auto-regression (AR) model for the RGDP as follows: where Q RGDP (τ|F t ) is a conditional distribution function of the RGDP series to a level r ∈ (0, 1), and the F t series stands for the accumulated information up to the time frame of t. The null hypothesis is H 0 : a 1 (τ) = 1 for the given quantile (τ). The authors of [53] introduce the Kolmogorov-Smirnov test to indicate the unit root properties of a range of quantiles not only dwelling on some quantiles as follows: where τ n (τ) stands for the t ratio and is computed at τ i ∈ Γ; therefore, the QKS test is formed using the maximum over Γ. The distribution of the τ n (τ) is limited. The QKS test depends on the parameter, which is a nuisance. The next step is the quantile unit root test and the quantile regression. As the dependent variable is the RGDP and the independent variable is the COP. The expression of the conditional quantile regression function τ th is stated as follows: where F RGDP (COP) is the function of the conditional distribution of RGDP and COP series, while β(τ) is the dependence relationship of the regressed series of the specified quantiles (τ). The authors of [55] estimated that β(τ) for each quantile is calculated through the minimisation of the weighted deviation between the series estimated as follows: The coefficient can vary across quantile (τ) and it can be extended through calculating β (τ) K when τ equals 0.10, 0.25, 0.50, 0.75, and 0.90; and K represents the number of parameters including the intercept. The aim is to indicate the different effects on the dependent variable by the independent variables across the quantile. Therefore, the quantile regression was formulated as follows:
Empirical Results
In Table 3, the descriptive statistics are presented. The Jarque-Bera (JB) statistics show that the RGDP per capita and TOP are not normally distributed, and the financial development and political stability series have the highest spread. The most volatile series in the financial development is with 1.9 standard deviations, while the capital account openness is the least with 0.246 standard deviations. The ADF, PP, and Perron's tests of unit root results are reported in Table 4, which indicates that all series are stationary at the first difference I(1). These reject the null hypothesis of no stationarity. Moreover, the single break date of the RGDP per capita, according to Perron's [41] unit root test with unknown breaks is in 2004. While in 1992 and 1993, the RGDP growth of Kenya falls massively as a result of the decline in manufacturing activities. Therefore, we found that there is an unknown structural break faced by TOP and GEX in 1992 and 1990, respectively. The fiscal indiscipline and macroeconomic imbalances, as well as the imposition of import licenses and foreign exchange control, caused the slow growth in export. This necessitated the adoption of financial reform policies in the early 1990s in Kenya [4].
The results of the quantile unit root test were depicted in Table 5. These include the constant term value α(τ 0 ), the coefficient of autoregressive value α(τ 1 ), the value of the QKS test, and the probability of the values. To start with, the test of QKS gives a mean-reverting behaviour of each variable. It illustrates evidence in favour of mean-reverting in political stability at the 10% level of significance. Subsequently, the behaviour of the variables in a specified quantile was examined through the estimated value of the intercept α(τ 0 ) and the coefficient of autoregressive α(τ 1 ). The intercept α(τ 0 ) gives the size of the shock observed within the quantiles that affect the series. In the case of RGDP, it has a shock of significant extent in the lower quantile, while in COP and TOP the shock is at the lower and upper quantiles. As for FD, the shock is very high at the upper quantiles between the range of 0.539 to 0.981. For GEX, the shock is the lowest at the lower and upper quantile, while the shock in PST is the most concentrated running from 0.115 to 0.287 across all the quantiles. These results show that FD experienced the highest shock level to the point that it moves far away from its long-run equilibrium level at about 0.981 units. The α(τ 1 ) value estimated is less than one (1) in all variables, but the probability value that rejects the non-stationarity of COP is at the lower quantile, while TOP rejects the unit root null up to a 75% quantile. Indeed, PST rejects the non-stationarity across all quantiles; in other words, it displays a mean reversion.
We have performed the tests of co-integration since all series faced I(1) based on the ADF and PP test of unit root, but for Perron's test of unit root with an unknown break date, only PST is integrated at I(0). The results of the OLS, FMOLS, and DOLS techniques are compared. Both DOLS and FMOLS gave good and significant estimates. The results in Table 6 showed that COP and FD have a negative effect on RGDP, particularly, GEX and PST have a positive impact on the normalised RGDP. The result of the negative effect of capital account openness does not agree with [2], as political stability is statistically significant to economic growth. Interestingly, the FD 2 coefficient is positive and statistically significant, according to the OLS, FMOLS, and DOLS techniques. The positive sign of the FD squared series indicated a U-shape curve between financial development and RGDP per capita. This indicates that there was little advancement of financial development in Kenya towards the real GDP per capita after the attainment of the threshold, but the impact is very small between the range of 0.038 to 0.043. This agreed with the results of Adeniyi et al. [27] in Nigeria that reported a nonlinear threshold type-effect between financial development and economic growth. The TOP series has a positive relationship with economic growth, but not significant, while GEX and PST showed a significant positive relationship with economic growth. For GEX, it gave a high magnitude indicating that the economy is more driven by the public sector than the private sector. On the other hand, the PST caused economic growth in Kenya, depicting that the regime change has no negative impact on the economy. The combined co-integration of the Bayer and Hanck [44] test facilitates the establishment of the multidimensional co-integration link of the study series. In Table 7, the estimated equations reject the null hypothesis of a no combined co-integration condition, significantly at the 1% level. This proves that a long-run relationship appeared between the estimated series, indicating the existence of equilibrium of the real GDP per capita and its determinants. Moreover, Polat et al. [46], Rafindadi [47], and Bekun [48] also equally discovered similar results in their studies. Note: * indicate significance levels at 1% level and the optimal lag length selection is based on the AIC (k = 4).
In Table 8, the bootstrap quantile regression at quantile 0.10, 0.25, 0.50, 0.75, and 0.90 is presented. From the estimated results, COP has a negative effect on the real GDP per capita with coefficients ranging between −0.77 and −2.084. This is in line with the 'allocative puzzle' of [9] that instead of capital inflow into the developing economies, is the outflow of capital from the developing economies. The coefficient of financial development produces a threshold type effect, as it was negative before, and after the threshold, it turned positive and statistically significant at the lower and middle quantile (10th, 25th, and 50th). It means that financial development stimulates growth at the advanced stage. The financial liberalisation policies of 1991 could be the factor that contributed to the square of financial development to spur economic growth. However, due to the global financial crisis between 2007 and 2008, which stifled the Kenya domestic financial market development, the impact of the square financial development impedes economic growth from the 75th to 90th quantiles. TOP hindered RGDP at the 10th quantile only, but at upper quantiles, it has a positive relationship with economic growth, although not significant. Since the COP and FD could not enhance RGDP, it is a sign that the TOP chances to stimulate RGDP were slimmed. GEX is positive and statistically significant from the 10th to 50th quantiles, but not at the higher quantile. This can be a result of the economic imbalance between 2007 and 2008. Mwega [4] stated that apart from the global financial crisis of that period, the Kenya economy was affected by the post-election crisis of 2007/2008. In addition, adverse weather affected agricultural activities, which undermined the implementation of the first and the second Medium Term Plan of the period 2008-2017 of Kenya's Vision 2030 Development Plan. Table 9 illustrates the bootstrap quantile regression under interactive effects. The results presented indicate that financial development is statistically significant to economic growth at the 10th and 25th quantiles. This is contrary to model 1, but because it is an interactive model, FD spurs RGDP. Uddin et al. [56] agreed that financial development influences economic growth in Kenya. It is further validated with the complimentary coefficient of the interaction of FD with COP. This agrees with Berthelemy and Demurger [14] that found that financial development stimulates economic growth only via the capital account openness. The liberalisation of current and capital accounts stated that in 1991 in Kenya, through foreign exchange bearer certificates of deposits. These certificates enable both residents and non-residents to trade in the secondary market freely. This could be an essential factor that causes the complementary relationship of the variables FD and COP towards economic growth in the 25th quantile.
The reason is not far-fetched due to the economic reform policy that came with the crisis, which started in the late 1980s to 1990s. The situation makes the government authority tighten the monetary and fiscal policy. The policy position contributed to strangulating the weak economy by affecting domestic demand. In contrast, financial development, when interacting with trade openness, does not influence economic growth. This implies that trade openness with financial development is not complementary to economic growth as it is indicated at the lower quantile. The third MTP of Vision 2030 of Kenya that commenced in 2018-2023 is hoping to tackle the deficiencies and challenges of the first and second MTP [4]. Figure 2 presents the coefficient of the different quantiles in a trend pattern of the study. In a nutshell, it illustrates how the independent variables affect the RGDP per capita. The COP moved down, as revealed by the 95% confidence interval with the mixed condition at the 70th quantile, while FD went upward with the unstable condition at the 30th quantile. On the coefficient of TOP, it is relatively stable initially at the lower 20th quantile where there was a difficult moment. The coefficient of GEX and the square of financial development had an unstable moment at the 70th quantile, political stability was relatively stable but at the 50th middle quantile, the coefficient of the 95% confidence interval changes at an undesirable condition. The sparsity coefficient at different quantiles as supported by the significant Quasi-LR statistics illustrates the over dispersion and the heterogeneity of the series. The reason is not far-fetched due to the economic reform policy that came with the crisis, which started in the late 1980s to 1990s. The situation makes the government authority tighten the monetary and fiscal policy. The policy position contributed to strangulating the weak economy by affecting domestic demand. In contrast, financial development, when interacting with trade openness, does not influence economic growth. This implies that trade openness with financial development is not complementary to economic growth as it is indicated at the lower quantile. The third MTP of Vision 2030 of Kenya that commenced in 2018-2023 is hoping to tackle the deficiencies and challenges of the first and second MTP [4]. Figure 2 presents the coefficient of the different quantiles in a trend pattern of the study. In a nutshell, it illustrates how the independent variables affect the RGDP per capita. The COP moved down, as revealed by the 95% confidence interval with the mixed condition at the 70th quantile, while FD went upward with the unstable condition at the 30th quantile. On the coefficient of TOP, it is relatively stable initially at the lower 20th quantile where there was a difficult moment. The coefficient of GEX and the square of financial development had an unstable moment at the 70th quantile, political stability was relatively stable but at the 50th middle quantile, the coefficient of the 95% confidence interval changes at an undesirable condition. The sparsity coefficient at different quantiles as supported by the significant Quasi-LR statistics illustrates the over dispersion and the heterogeneity of the series.
Concluding Remarks
The overall results of this study revealed that the capital account openness has not directly impacted the economic growth in Kenya, but political stability indicators enhance economic growth, implying that the economy is matured to support capital inflows. The foreign investors are wary of
Concluding Remarks
The overall results of this study revealed that the capital account openness has not directly impacted the economic growth in Kenya, but political stability indicators enhance economic growth, implying that the economy is matured to support capital inflows. The foreign investors are wary of political tension as it mars whatever efforts they made and intended to do in the economy. The 1991 liberalisation of capital accounts are capable of causing economic growth as the economy can rely on political stability to achieve its target goals and objectives. Companies should continue having foreign currency accounts, both at home and abroad, while banks should develop innovative financial products and be allowed to carry transactions in foreign exchange directly as it started in 1991. The removal of all constraints on the purchase of shares and government securities by non-residents in 1995 and the earlier full interest rate liberalisation in 1991 should be strengthened. The economic crisis of the late 1980s to 1990s undermined the economic reform policy. This creates macroeconomic imbalances in the period that exposed the economy to the vulnerability of capital outflow. It could be the primary reason why government expenditures remain a very key driver of the economy of Kenya.
Thus, the policymakers can improve economic growth by checking the outflows of capital. To avoid the chances in which financial development will hang back in allocating credits to private enterprises, the reserve ratio should not be raised high. Similarly, the credit ceiling should not be imposed unnecessarily as that can strangulate the flows of credits to enterprises. Furthermore, of paramount importance is the macroeconomic environment; it should be stable. This is necessary because any macroeconomic instability can cause the real interest rate to be more than the return on investment, and that can cause capital flight. Another critical policy of interest is ongoing in Kenya's Vision 2030. The first and second Medium-Term Plan was over. The third that commenced in 2018 should have emphasised more on the improvement of the financial system. The proper supervision and regulation of the domestic financial sector can ensure the fair use of capital inflows that will influence economic activities.
The ease of doing business is a vital policy direction that should be strengthened as it could create jobs and improve productivity. This is better in Kenya now as it has increased from 2015 from 58.01 to 73.22 in 2019, which is better compared to other African countries. Thus, efforts should be a channel to sustain it. The current realisation of oil is a clear opportunity to stimulate economic growth, as it can attract FDI with much multiplier effects, the government should not slack in making policy that will ensure maximum benefits accrue to the economy through it. With prudent use, it can add to the speedy achievement of Kenya's Vision 2030 goals. In the tourism sector, it is the third-largest foreign exchange inflow to Kenya, which is traditionally centered on the national parks and the coastal areas, and the strategical location of Kenya is making it a hub for regional and international conferences. Nevertheless, the only limitation is the security concern in Kenya, that policymakers should endeavour to improve as it is capable of undermining all opportunities that can avail the economy. | 7,782.6 | 2020-07-03T00:00:00.000 | [
"Economics",
"Political Science"
] |
The Spin1 interactor, Spindoc, is dispensable for meiotic division, but essential for haploid spermatid development in mice
In mammals, germline development undergoes dramatic morphological and molecular changes and is epigenetically subject to intricate yet exquisite regulation. Which epigenetic players and how they participate in the germline developmental process are not fully characterized. Spin1 is a multifunctional epigenetic protein reader that has been shown to recognize H3 “K4me3-R8me2a” histone marks, and more recently the non-canonical bivalent H3 “K4me3-K9me3/2” marks as well. As a robust Spin1-interacting cofactor, Spindoc has been identified to enhance the binding of Spin1 to its substrate histone marks, thereby modulating the downstream signaling; However, the physiological role of Spindoc in germline development is unknown. We generated two Spindoc knockout mouse models through CRISPR/Cas9 strategy, which revealed that Spindoc is specifically required for haploid spermatid development, but not essential for meiotic divisions in spermatocytes. This study unveiled a new epigenetic player that participates in haploid germline development.
Introduction
In mammals, the production of functionally competent sperm is a lengthy and complex biological process, which is in general divided into three successive stages -the proliferation and differentiation of spermatogonia derived from neonatal gonocytes, one time of DNA replication followed by two times of cell divisions, termed meiosis, and subsequent haploid germline development, also named spermiogenesis [1]. To achieve this end, this whole developmental process necessitates the expression of abundant germline-specific or -predominant genes in testis, including those genes encoding the structural components of acrosomes and tails of sperm cells [2]. On the other hand, the germline undergoes a drastic and sophisticated process of epigenetic programming, such as the dynamic chromatin remodeling and histone modifications, which requires a rich set of testis-preferential epigenetic modifiers [1]. These enable the deposition (writer), recognition (reader), and removal (eraser) of specific histone post-translational modifications (PTMs) primarily residing in N-terminal histone tails. These combinatorial modifications render the transition of local chromatin between a closed, transcriptionally inert state and an open, transcriptionpermissive state, thereby facilitating the fine-tuning of gene expression in response to developmental cues. Spindlin1 (Spin1) is a transcriptional coactivator that comprises three Spin/Ssty motifs, which are individually folded resembling a Tudor-like β-barrel conformation [3][4][5]. Early biochemical studies revealed that Spin1 is a multifunctional histone reader with the second Tudor module recognizing the H3K4me3/ H4K20me3 while the first one binding H3R8me2a, which stimulates gene expression involved in ribosomal DNA (rDNA) transcription, Wnt/β-catenin signaling and MAZ (Myc-associated zinc finger protein) target gene activation [6,7]. More recently, structural studies have shown that Spin1 recognizes a bivalent histone methylation signature, H3" K4me3-K9me3/2″, with a four-fold higher binding affinity, than H3 "K4me3-R8me2a" signature [8,9]. H3K4me3 was traditionally regarded as an active histone mark for transcriptional activation, whereas H3K9me3/2 function as canonical repressive histone marks; However, they are not mutually exclusive, but indeed coexist in coding genomic regions of rDNA with active transcription [10]. In agreement with this, the histone H3K9 demethylase KDM7B (PHF8) harbors PHD and JmjC domains, and has been shown to activate rDNA transcription in nucleoli, through the PHD module recognizing H3K4me3 and JmjC domain promoting the H3K9me3/2 demethylation in cis [11]. Intriguingly, recent studies implicated that both H3K4me3 and H3K9me3 are present in haploid genome and participate in transgenerational epigenetic inheritance in mice and C. elegans [12,13].
Spin1 was initially identified as a maternal protein highly expressed in oocytes and embryos at the 2-cell stage in mice. Subsequent studies have shown that Spin1 is dispensable for folliculogenesis, but is required for meiotic division in female mice [14,15]. In porcine oocytes, Spin1 is localized in both the cytosol and the nucleus, and maintains the MII-arrested state [16]. While the functional mechanism of Spin1 remains not fully characterized, recent studies identified that the in vivo Spin1-interacting protein, C11orf84, also termed Spindoc, modulates the transcriptional coactivity of Spin1 through its interaction with the third Tudor domain of Spin1, raising a possibility that Spindoc might play an important role in germline development [9,17].
Here we generated Spindoc knockout mouse models via CRISPR/CAS9, and reported that mouse Spindoc is not required for meiotic divisions in spermatocytes, but is essential for the haploid spermatid development after meiosis. Spindoc-deficient males displayed subfertility owing to the decreased sperm number and abnormal sperm morphology. This study added a new epigenetic player that exerts pivotal roles in germline development.
Spindoc is predominantly expressed in testis
To study the functional roles of Spindoc in germline development, we firstly examined the multi-tissue expression patterns of Spindoc in humans and mice. GTEx database has revealed the highest levels of Spindoc transcript in the testis, as compared to that in other somatic tissues, in humans ( Fig. 1 A). Consistently, the quantitative PCR (qPCR) assays showed predominant expression of Spindoc mRNA transcript in testis, as compared in other somatic organs, in mice ( Fig. 1 B). At the protein level, the expression levels of Spindoc slightly differed from that of its mRNA abundance among different tissues, with testis being one of the strongest expression organs in mice, suggesting protein translation of Spindoc is subject to post-transcriptional regulation ( Fig. 1 C,D). In mammals, germline development is strictly time-defined, with different stages of germ cells occurring at specific time points. Thus, we next investigated how Spindoc is expressed during postnatal germline development. As shown in Fig. 1 E, while the Spindoc mRNA displayed an increased expression trend during spermatogenesis, its protein expression sustained a relatively high level throughout postnatal testicular development ( Fig. 1 F), suggesting that the mRNA and protein of Spindoc are present from spermatogonia to post-meiotic spermatids. Further [18]. SSC, Spermatogonial Stem cells. (H) Dynamic expression levels of Spindoc mRNA from single cell RNA-seq analyses in RA-synchronized testicular cells [19]. A1, type A1 spermatogonia; In, intermediate spermatogonia; BS, S phase type B spermatogonia; ePL, early preleptotene; mPL, middle preleptotene; lPL, Late preleptotene; L, leptotene; Z, zygotene; eP, early pachytene; mP, middle pachytene; lP, late pachytene; D, diplotene; MI, metaphase I; MII, metaphase II; RS1-2, steps 1-2 spermatids; RS3-4, steps 3-4 spermatids; RS5-6, steps 5-6 spermatids; RS7-8, steps 7-8 spermatids single cell RNA-seq analyses validated that Spindoc displays a highly dynamic mRNA expression pattern at various developmental stages of germ cells, with higher mRNA levels detected in early spermatogonia, late spermatocytes and haploid spermatids in both humans ( Fig. 1 G) and mice ( Fig. 1 H) [18,19]. Together, these evidences suggest that Spindoc might play important roles in germline development.
Generation of Spindoc knockout mouse models
To investigate the in vivo function of Spindoc, we next chose to generate a mouse model deficient in Spindoc through the CRISPR/CAS9 technology. We designed a pair of sgRNAs, which targeted the exon2 of Spindoc gene (Fig. 2 A). Cas9 mRNA and sgRNAs were microinjected into fertilized zygotes. The two-cell embryos were subsequently transferred into surrogate pregnant mothers. After birth, Sanger sequencing validated that we obtained two different founder lines: one line carrying a single nucleotide (T) insertion in exon2 ( The testis of KO was smaller than that of the WT; the epididymis of KO was more transparent than that of the WT. (G) Histogram showing the testis weights in WT and KO adult mice. Data are presented as mean ± SEM, n = 3. p < 0.001 by student t-test premature STOP codon, thus leading to the activation of nonsense-mediated mRNA decay (NMD) pathway. The F0 founder mice were crossed with WT females to achieve the heterozygous offspring, which were further inter-crossed to gain Spindoc-null (KO) pups. Western blot analyses confirmed that we successfully generated Spindoc-null mice (Fig. 2 D,E). Morphological examination demonstrated that Spindoc-null testes exhibited reduced weight as compared to that in WT littermates, indicative of impaired germline development postnatally (Fig. 2 F,G). Given that both Line 1 and Line 2 KO males exhibited the similar phenotype, we henceforth focused on Line1-derived offspring in our subsequent studies.
Spindoc KO caused impaired sperm production leading to male subfertility
To test the effect of Spindoc KO on male fertility, we crossed the adult Spindoc-null males with WT females. Over a 4-month breeding period, we observed reduced litter size in Spindoc-null males as compared to in WT males (Fig. 3A), indicating the male subfertility upon Spindoc KO. Therefore, we next performed the Hematoxylin & Eosin (HE) staining on the cauda epididymis from the WT and Spindoc-null mice. In contrast to WT cauda as shown in Fig. 3B, the KO cauda was filled with a smaller number of condensed sperm. Detailed counting indicated that the total sperm number in Spindoc-null cauda is reduced to one third of that in WT cauda (Fig. 3B). Furthermore, careful examination of the sperm morphology by HE staining showed that Spindoc-null mice produced higher numbers of sperm with defects in the mid-piece and head condensation, suggesting defective haploid spermatid development ( Fig. 3 C,D,E), suggesting that Spindoc KO possibly caused a developmental defect in the haploid spermatids.
Intact meiotic divisions in spermatocytes in Spindoc-null mice
Given that Spin1 has been previously shown to drive the meiotic resumption in oocytes [15,16], we next determined to test if meiosis was impaired in Spindocnull spermatocytes. To this end, we performed the immunofluorescent staining using SYCP3 and γH2AX on chromosome spreads prepared from postnatal day 21 (P21) testes following a standard drying-down preparation protocol [20]. Surprisingly, the progression of meiotic prophase I in Spindoc-null spermatocytes appeared to be normal, resembling those observed in the WT spermatocytes without any discernable morphological abnormality (Fig. 4 A), which is not consistent with the reported function of Spin1 in meiotic division of oocytes previously [15]. The percentage of Spindoc-null spermatocytes at various stages was comparable to that in WT testis (Fig. 4 B). We further executed the double staining with SYCP1 and SYCP3, which labelled the central and lateral synaptonemal complex filaments, respectively. These further corroborated that the paring and synapsis between homologous chromosomes in Spindoc-null spermatocytes were indistinguishable from those in WT spermatocytes (Fig. 4 C). In accord with this finding, Hematoxylin and Eosin (HE) staining demonstrated the normal meiotic progression in Spindoc-null testes, as evidenced by the presence of round spermatids in the seminiferous tubules (Fig. 4 D) in P21 testes. Together, these data suggest that Spindoc is not necessary for meiotic divisions in spermatocytes in mice.
Defective transition from round spermatids to elongated spermatids
As described above, Spindoc-null spermatocytes appeared to progress normally during meiosis as observed in WT testes; However, the declined sperm count and aberrant sperm morphology suggested that there must be post-meiotic defects occurring during the haploid spermatid development. To test this hypothesis, we first examined the cell number ratio of the germ cells to Sertoli cells through the co-staining with respective markers, GCNA and SOX9, in 5-month testes. Not surprisingly, we didn't observe noticeable changes in the ratio of germ cells to Sertoli cells (Fig. 5A). In mice, spermatogenesis takes place in successive waves along the epithelia of seminiferous tubules [2]. In any cross-section of the tubules, there is a total of 12 stages (I ~ XII stage) consisting of germ cells at various developmental steps lining up from the basal membrane to the lumen of the tubules [21]. Therefore, the comparison of the staging on the basis of the cellular morphology and associations between WT and KO testes was commonly adopted to pinpoint the specific developmental defects during spermatogenesis. We thus subsequently performed the HE staining on the paraffin-embedded testicular sections, and compared the spermatogenic stages between WT and KO testes from 2-monthand 5-month-old mice side by side (Fig. 5 B,C). At 2 months, there appeared to be similar number of round spermatids (RS) between Stages I ~ VIII, albeit with more aberrant elongated spermatids (ES) and condensed spermatozoa (CS) in KO seminiferous tubules as compared to in WT testes (Fig. 5 B). Intriguingly, when spermatogenesis moved progress beyond stage VIII, the elongating or elongated spermatids were present between Stage IX ~ XII in WT testes, whereas abundant round spermatids-like (RSL) germ cells were observed in KO testes (Fig. 5 B). This finding indicated a developmental arrest occurring during the transition from round spermatids to elongated spermatids upon Spindoc KO.
The similar developmental arrest was further recapitulated in testes from 5-month-old mice (Fig. 5 C). Taken together, these evidences suggest that the development of haploid spermatids was disrupted in the absence of Spindoc in male mice.
Discussion
Chromatin modifiers, including the epigenetic writers, readers, and erasers for specific histone post-translational modifications (PTMs), have been discovered to play significant roles during germline development. Spin1 was originally identified as a highly abundant maternal protein in mouse oocytes, and was shown to resume meiotic divisions of fully grown oocytes at the GV stage in the juvenile female mice [15]. It is almost solely comprised of three tandem Tudor-like domains, of which the first two Tudor domains engage in the recognition of cis-tail H3 "K4me3-R8me2a" histone marks. Until recently, the third Tudor module was identified to strongly bind a cofactor, namely Spindoc, in vivo [17,22]. However, how Spin1 and its cofactor, Spindoc, function during male germline development is largely unknown. In somatic cells, it has been shown that Spin1 is localized to the nucleoli, where it is highly enriched in the rDNA repeats with active transcription, thereby promoting the expression of rRNA genes [6]. In addition, ectopic overexpression of Spin1 led to the transformation of NIH3T3 cells showing disruption of the cell cycles and chromosomal instability [23]. Furthermore, Spin1 expression was elevated in some types of cancers, including the seminoma [24]. Our comprehensive examinations indicate that Spindoc is preferentially expressed in testes as compared to in other somatic tissues. Interestingly, the mRNA expression levels of Spindoc exhibit a highly dynamic pattern ranging from spermatogonia to late stage of spermatids (Fig. 1). In line with the Spindoc expression pattern, the Spin1 mRNAs were also highly detected in the spermatogonia as well as in the spermatocytes (data not shown). These evidences suggest that Spin1/Spindoc complex might coordinate and execute pivotal roles in germ cells, in particular at the stage of meiosis, as seen in the female oocytes [14]. However, to our surprise, in the Spindoc KO mouse models, we observed only the post-meiotic defects during spermiogenesis but not the meiotic anomaly. Haploid spermatids were arrested during the transition from round spermatids to elongating and elongated spermatids. Additionally, the condensed spermatozoa that "escaped" the developmental arrest exhibited aberrant morphological defects, including the misshapen heads and abnormal midpiece of the sperm tails, reminiscent of the disrupted nuclear condensation of the spermatozoa head in the absence of Spindoc. Taken together, our study revealed an epigenetic factor, Spindoc, is essential for post-meiotic haploid germ cell development in mammals.
Generation of the Spindoc knockout mouse model
The Spindoc knockout mouse model with the premature STOP codon was generated via the CRISPR/Cas9 technology through zygotic microinjection of the Cas9 mRNA and sgRNAs mixture. One pair of single-guide RNAs (sgRNAs) were designed against exon 2 of Spindoc (NM_001033139.3). Sanger sequencing was used to identify the genotypes of the offspring. Two founder mouse lines were validated by Sanger sequencing. One line carried one base pair (T) insertion of exon2, the other harboring combinatorial deletions as indicated (∆15 bp and ∆7 bp occurring at the sites targeted by the two sgRNAs, respectively); The sequences of the sgRNAs and the genotyping primers are listed in Table 1. All Mice were from the C57BL/6 J genetic background, and were bred in a specific pathogen-free (SPF) facility with a 12 h light/dark cycle and free access to food and water. All animal experiments have been approved by the Animal Care and Use Committee of University of science and technology in China.
RNA extraction, reverse transcription and RT-qPCR
Total RNA was isolated from different tissues in mice with TRIzol Reagent following the manufacturer's instructions as described previously [25]. Freshly collected or frozen tissues were homogenized in 1 ml TRIzol reagent per 50 mg tissue. The quantity and quality of RNA were determined by measurement using NanoPhotometer ® N50 (Implen, Germany). The RNA samples with OD values of 260/280 ≥ 1.9 were selected for downstream analyses. To compare the Spindoc mRNA levels across tissues, equal amount of total RNAs was loaded to synthesize cDNAs using the RevertAid First Strand cDNA Synthesis kit (K1622, Thermo). Quantitative PCR (qPCR) was performed using ChamQ Universal SYBR qPCR Master Mix (Q711-02, Vazyme) with the Real-Time PCR machine (Roche). Primers for the qPCR were listed in Table 1.
Western blot
Samples were freshly collected from different tissues in mice at different ages. Protein lysates were prepared in RIPA lysis buffer [100 mM Tris-HCl (PH7.4), 1% Triton X-100, 1% Sodium deoxycholate, 0.1% SDS, 0.15 M NaCl, supplemented with Protease inhibitor cocktail]. Protein concentrations were determined using the BCA protein assay kit. All protein samples were run in denatured 10 ~ 12% sodium dodecyl sulfate polyacrylamide (SDS-PAGE) gel, followed by the wet-transfer to PVDF membranes. Membranes were blocked in 5% non-fat milk for 1 h at room temperature, and next were incubated with primary antibody overnight at 4 °C. After washing for three times in 1XPBS buffer, membranes were incubated with secondary antibody for 1 h at room temperature. The primary antibodies for immunoblotting included: rabbit anti-Spindoc (1:1000; PA5-65609; Invitrogen), rabbit anti-GAPDH (1:5000; 21,612; SAB).
Sperm counting
Sperm were released from the cauda epididymis of adult mice by puncture using sharp forceps, and were incubated in HTF medium for 30 min at 37 °C. For KO mice, the cauda was slightly squeezed to push out sperm using a tweezer in order to minimize the retention of sperm inside the cauda owing to the declined motility of sperm in the KO mice. Sperm number was calculated using a hemacytometer. Smear slides of sperm were stained by standard Hematoxylin & Eosin to distinguish the morphological abnormality between WT and KO groups.
Statistical analysis
Statistical analysis was performed using Student's t test unless stated elsewhere. The value of p < 0.05 was deemed significant statistically. Statistical data was processed by GraphPad Prism 6. | 4,165 | 2021-09-15T00:00:00.000 | [
"Biology"
] |
Use of Web 2.0 Technologies in the Teaching/Learning of Business Education in Nigerian Universities
This paper assessed the use of web 2.0 technologies in the teaching/learning of business education courses in Nigeria Universities. The paper sought answers to the research question "what are the web 2.0 technologies use by lecturers and students of business education in Nigerian Universities? The study adopted both qualitative and quantitative approaches. The descriptive survey was the design used for the quantitative method while content analysis for the qualitative method. A sample of 38 lecturers and 113 students were used for the survey. A total of 151 copies of the questionnaire were administered altogether, and all copies were retrieved and used for the study. A semi-structured interview was the instrument used to gather the qualitative data. Mean, standard deviation and ranks were used to analyze the quantitative data collected. Independent samples t-test statistic was used to test the null hypothesis at the 0.05 level of significance. The qualitative data were analyzed using two themes. The findings of the study revealed that web 2.0 technologies are not used in the teaching/learning of business education. It was also found that lack of technical expertise and uneasiness with openness and public discourse and interactions are some of the reasons why web tools are not used in teaching and learning. Based on the findings, it was concluded that graduates of business education would not be able to get the required skills and competencies to be capable of operating effectively in the 21st world of employment. Based on the findings of the study, the study recommends among others that; there is the need for business education teachers and students to be given technical support to help them to divert the use of web 2.0 technologies from entertainment to educational uses.
Introduction
The widespread usage of technology is shifting the way we work, play, learn and communicate, even the way we carry out away our regular day-to-day activities. In higher education, technology has induced a striking impact on teaching and learning. Higher educational institutions are well-positioned to take advantage of rapid changes in the field of education and business education in particular. Most importantly, among these technologies, the webbased technologies are at the forefront now and have grabbed the attention of educators. This situation is because these web-based technologies in education offer new opportunities for teachers and students. In particular, the availability of large data sets and real-life cases have the potential to enhance higher education teaching (New Media Consortium, n.d). The proliferation of the Web is now making teachers at all levels of education to reconsider what of teaching, learning, and schooling are. It has been argued that the Web can remove the physical boundaries of classrooms and time restraints of class schedules, with the implication that teaching and learning can take place even on the go. Traditional lectures and demonstrations, according to Edda (2012) can become Web-based multimedia learning experiences for students. Learning resources of universities can be augmented by learning resources of the world via the Web (Conole, 2010). Moreover, the Web can help us re-focus our institutions from teaching to learning, from teacher to student and the students made responsible for their learning. This will go a long way making the students active learners instead of passive listeners in the classroom. full range of interactive and collaborative web-based technologies and services which have educational value. The application of these tools to the teaching of business education will of great benefit to students and teachers alike. Business Education according to Ubulom (2003) is an aspect of the educational programme which prepares students for careers in business. It is the education that is needed to enlighten people about business, education needed to train people to conduct their personal affairs in order to be productive citizens of society. Some schools of thought believed that business education is a programme of study to produce teachers for secondary and postsecondary schools (Obi, 2012). However, from the foregoing, this paper shares the same view with the Ubulom (2003).
Teachers at all levels of education especially at the university level are always encouraged to look for innovative ways for their students to learn with the use of social media or Web 2.0 tools. Many pedagogical rationales have been advanced to support this change in learning activities. Web 2.0 tools provide a wide range of opportunities for incorporation into higher education activities. These tools can doubtlessly change higher education by helping in the preparation of lesson and actual presentations, assessment of the progress made by the learners, time management, planning the timetable and the calendar of activities, developing projects in collaboration, digital storytelling and students' eportfolios (Conole, 2010). This implied that there are three paradigm shifts: "a shift from a focus on information to communication, a shift from a passive to more interactive engagement, and a shift from a focus on individual learners to more socially situated learning" (Conole, 2010). The areas which web 2.0 might promote new ways of learning include inquiry-based and exploratory learning; new forms of communication and collaboration; new forms of creativity, co-creation, and production; and richer contextualization of learning (Conole and Alevizou, 2010). These tools are very attractive to teaching/learning because learners cannot be passive in these set of technologies in which the learner contributes rather than passively consuming content (as with television). They are also generally easy to apply and affordable (Strawbridge, 2010).
Today's students entering universities and colleges use Web 2.0 applications like wikis, blogs, RSS, podcasting and social networking in their daily lives (Lenhart & Madden 2005, 2007. Researchers believe that Web 2.0 technologies should be integrated into higher education because today's learners expect to learn with new technologies and for the fact higher education should prepare students for the workplace of the future (Alexander 2006, Prensky 2001, Roberts, Foehr & Rideout 2005, Strom & Strom 2007. Researchers have identified several benefits of Web 2.0 technologies to learners in higher education (Alexander 2006, Elgort, Smith & Toland 2008, Lamb 2004. Multiple studies have focused on one tool, for example, blogs, within a certain discipline. Ellison andWu (2008), Farmer, Yue andBrooks (2008), Hall and Davison (2007), Williams and Jacobs (2004) and Xie, Ke and Sharma (2008) reported that blogs encourage students to read and provide peer feedback and enhance reflection and higher-order learning skills. Wikis have been found to not only improve students' writing skills but engage students and facilitate collaborative learning in various disciplines (Luce-Kapler 2007;Parker & Chao, 2007).
The different types of web 2.0 tools available for teaching/learning according to Dhamdhere, (2012) include the following among others: Learning Management System: Moodle (Course management system) is a Course Management System (CMS), and known as a Learning Management System (LMS) or a Virtual Learning Environment (VLE). These systems are used for creating and deliver training/education through an organized delivery system. Blackboard (Course Management System): This platform helps to engage more students in exciting new ways, reaching the students on the level with which they desire with the use of their devices and connecting more effectively, keeping them updated, involved by way of effective collaboration. Mashups: This platform is used to create and integrate information in a very active and user-friendly interface. Files/Information Sharing: The tools in this platform include: drop.io, myfiles.bgsu.edu, furl.net, del.icio.us, Scribd (Document sharing tool) among others. RSS: This platform allows users to seek out, updates to contents of RSS enabled websites, blogs, podcasts without having to go and visit the site, but information from the site is collected within a 'feed' and 'piped' to users in a process known as syndication. Wikis: A webpage or a set of web pages can easily be edited by anyone who is allowed to access. It is a collaborative tool which helps in the production of group work, hyperlinking for linking pages. Library wiki as a service can enable social interaction among librarians and patrons, essentially moving the study group room online. Podcasting: Podcasts are usually included within the Web 2.0 galaxy as another example of user-generated content. Within academic publishing, podcasts are becoming an increasingly common adjunct to online journals and are reported to be very popular.
Discussion forum/Internet forum: This is an online discussion site where professionals, readers, and others can effectively hold conversations in the form of posted messages. This site is different from chat rooms in that messages are at least temporarily archived. Also, depending on the access level of a user or the forum set-up, a posted message might need to be approved by a moderator before it becomes visible. e.g., mlosc google group, linked in groups, among others. Flicker (Photos): This is a picture database and an experimental information architecture algorithm that can help the students to find images not by metadata, but by the data itself. Users can search for images by sketching images themselves. It has the features of share, comment, add notes to photos and images which can be used in the classroom/environment. Video Sharing: One of the video sharing platforms is YouTube which is a video sharing website where users can upload, view and share video clips. Google Video is another tool. Tokbox is used for video chat and video message amongst the study group. Social Networking: Facebook is a social utility that connects people with friends and others who work, study and live around them. Linked In is a professional social network gives the learner the keys to controlling their online identity and connects them to their trusted contacts and helps the learner exchange knowledge, ideas, and opportunities with a broader network of professionals/users. My space, Twitter, Ning, myspace also shares you with others. For classroom announcements, creating classroom community and learning both inside and outside classroom these tools are used. Classroom 2.0 is a social networking site for collaborative technologies in education. Slideshare (Presentation sharing): This platform enables users to upload slides to share with others, rate, and comment on the slideshows of others. Blogs: This is a simple web page consisting of information or links called posts. This platform allows the user to add comments. Communication process through the posting is what is called blogging. Edublog which referred to as Educational Blogging allows users to create and manage student, teacher and library blogs, quickly customize designs and include videos, photos, and podcasts. Collaborative Authoring: This is also known as collaborative note-taking. In e-learning environment for peer review work, group projects/documents, track changes, collaborative note-taking following tools are used Google Doc, Wikipedia, pbwiki, Wikispaces, stu.dicio.us, among others (Dhamdhere, 2012).
These technologies have come to stay and have found their ways in our everyday life. They are available to make the way we conduct our activities easier, especially teaching and learning. Today's university students are always using web 2.0 technologies. Arguably, the 21 st -century students have their lives soaked in these technologies, which implied that they are always using them almost all the time. The implication for teachers here is that, if today's students are taught with these technologies, a better result may be gotten. This situation is the reason why this study is conducted to assess the use of web 2.0 technologies in teaching and learning business education in Nigeria universities. The present study is different from previous surveys in several aspects. This paper looks at web 2.0 tools together with teaching/learning and Business Education as the subject matter. It is also asserted that very little has been written about web 2.0 technologies and no or little literature on web 2.0 technologies is available in Nigeria. The researcher, therefore, believes that this study contributes to the existing research literature and at the same time provides teachers, students, curriculum planners University communities and other stakeholders relevant information on the use of web 2.0 tools in teaching and learning. The study sought answers to the following questions: Malete, Tai Solarin University of Education, Ijagun and Ekiti State University, Ado Ekiti. The researcher believes that the lecturers and students of the Universities are in a better position to know the types of web technologies that are in use in Business Education programme of the Universities. The sampling procedure adopted for the study was proportionate stratified random sampling technique. The lecturers and students are the identified strata. A sample of 30% of each stratum totaling 38 and 113 of lecturers and students respectively, were selected for the study. This selection was in line with Uzuagulu (1998) who stated that when the population of the study is in a few hundred, the researcher should select 30% as the sample.
A structured questionnaire tagged Use of Web 2.0 in Business Education (UWBE) designed by the researcher with a split-half reliability coefficient of 0.78 was used to gather the quantitative data for the study. The questionnaire was made up of a total of 24 carefully designed items after an extensive review of the literature. The items are placed on a four-point rating scale of Highly Used (MU), Fairly Used (FU) Not Used (NU), Never Heard Of (NHO) for research question one and Strongly Agreed (SA), Agreed (A), Disagreed (D) and Strongly Disagreed (SD) for research question two. The scales were scored as Highly Used 3.10 -4.0, Fairly Used 2.50 -3.09, Not Used, 1.25 -2.49 and Not Heard Of 0 -1.24. One hundred and fifty-one copies of the questionnaire were administered and collected. The hypothesis was tested using independent t-test statistic at 0.05 level of significance. For the qualitative aspect of the study, five lecturers and 15 students were used as the sample.
A semi-structured interview was conducted to gather qualitative data. The interview guide was designed by the researcher and made up of just one question. The open-ended question was meant to probe into the reasons why lecturers and students that are not using for teaching and learning web 2.0 tools do so. The interviews were conducted at the point of retrieving the copies of the questionnaire administered. This was to make sure that the participants to be interviewed indicated that they are not using web 2.0 tools for teaching and learning. English Language was the medium of communication for the interview because all participants were fluent in English language speaking. Ethical considerations such as informed consent, voluntary participation, and confidentiality were considered. The data was analyzed using content analysis where two emerging themes on the reasons why lecturers and students are not using web tools for teaching and learning were developed.
Results and Discussion of Findings
Research Question One: What are the web 2.0 tools used by lecturers and students of Business Education in Nigeria Universities? Table 2 reveals that the respondents disagreed with almost all the construct in the table except for items 8 and 10 which they agreed. This is because the two items (8 and 10) had mean scores of 3.21 and 2.67 respectively which were above the fixed decision value of 2.50. This indicated that lecturers and students that use web 2.0 technologies use them for entertainment while some of them agreed that they have not heard of web 2.0 technologies. The standard deviation scores showed that the variability in the responses of the respondents was low. The average mean score showed that respondents disagreed that they use web 2.0 for teaching-learning of Business Education.
Research Question three: What could be the reasons why web 2.0 tools might not be used for teaching and learning by lecturers and students of business education in Nigeria universities?
4.1 Technical difficulties This theme described the participants' perception of the technology related issues why web.2.0 tools are used in teaching and learning. The participants stated that the chief among the reasons why web 2.0 tools are not used in teaching and learning is because they lack the expertise to operate the technologies. Some of the participants said that they do not know how to create groups in the WhatsApp, Facebook and Imo platforms. Some of the participants, especially the teacher participants stated uneasiness when operating web 2.0 tools as the reason for not utilizing them for teaching. Technical difficulties related to students' lack of awareness of new web tools, glitches due to the in-progress nature of many Web 2.0 tools were mentioned by the participants as the reasons for not using web 2.0 tools.
Confidentiality and timing issues
This theme described issues related to the security of the content and planning time in using web tools for teaching and learning. Some of the participants reported uneasiness with openness and public discourse and interactions. The participants seem to express fears on the security of the post and how the conversation could be controlled during the teaching and learning process. More than 50% of the participants said that planning to use web 2.0 tools for teaching will waste much time. They used words such as boring, excessively timing wasting, complicated, unreliable, uncontrollable, and not practicable to describe the usage of web 2.0 tools for teaching and learning.
H01: There is no significant difference between the mean responses of lecturers and students regarding the web 2.0 tools mostly used by lecturers and students in the teaching and learning of Business Education in Nigeria Universities. The result in Table 3 reveals that the t-calculated of 0.47 with the observed p-value of 0.073 which is higher than the fixed p-value of 0.05. This indicated that the null hypothesis which states that there is no significant difference between the mean responses of lecturers and students regarding the web 2.0 tools used by lecturers and students in the teaching and learning of Business Education in Nigeria Universities was, therefore, not rejected (t149 = 0.47, P=0.073). This implied that teachers and students agreed that the construct listed in Table 1 are not used in the teaching/learning of Business Education.
Discussion of Major Findings
This study determined the perception of lecturers and students of the use of web 2.0 technologies in the teaching and learning of business education courses in Nigerian Universities. The study found that Facebook, WhatsApp, YouTube are the most used web 2.0 technologies by Business Education lecturers and students. This finding supports the earlier finding of Tyagi (2012) who found that Facebook is at the forefront of web tools used in these days. These findings imply that these web tools have come to stay and their usage would continue to increase. The study also found that the respondents are not heard of three out of 12 web tools used in this study. One of the reasons why web 2.0 tools are not used may because they are recent and most lecturers and students are not aware of the technologies. This finding goes in line with Kelly (2008) who opined that the newness has resulted into fragmented adoption and implementation of these technologies in higher education institutions due to the lack of institutional policies and inadequate knowledge and skills on web 2.0 which have contributed to lack of clear framework on the effective use of these technologies pedagogically or for support. There is no surprise when Grosseck (2009) opined that although there is a consensus on the positive aspects of Web 2.0 in teaching, there is still ignorance of educators as far as its adoption is concerned.
Contrary to this, Laronde (2010) found that faculty members having achieved mastery over technology; educators use it effortlessly as a tool to accomplish a variety of instructional and management goals. Also, in support of this finding, Yuen, Yaoyuneyong, & Yuen (2011) found that social networking sites were the Web 2.0 technology most commonly used by teachers, followed by social video tools. They found that most teachers did not utilize other Web 2.0 services (blogs, collaborative writing tools, podcasts, social bookmarking or tagging tools, social photo tools, thinking tools, virtual worlds, or wikis) which similar the finding of this study. Buttressing this finding, Edda, (2012) stated that web 2.0 is very new and that it has only been around in anything like its present form for about three years, yet it is already having an impact on higher education. In support of this, Christopher, Pritchett, Pritchett, & Wohleb (2012) found that social networking and video sharing are the web 2.0 application most often used, though the percentage usage was not encouraging. No wonder, Sawant (2012) found that many faculty members reported that they never use any web 2.0 technologies because they have not heard about them talk more about using them.
The study also found that among the lecturers and students that use web 2.0 technologies use them for entertainment purposes. They are probably unaware of their benefits in application to teaching and learning. This finding coincides with Conole, (2010) who observed that students use web 2.0 technologies, but they do not know to use them to benefit themselves. They use only for entertainment and fun such as for chatting, pinging, poking unnecessary and trivial issues, among others. Also, supporting this finding, Crook et al. (2008) reported that more than a third (37.4%) of teachers believe that adopting Web 2.0 resources in the classroom would be timeconsuming for them and teachers find that student use of the internet in class can be hard for them to manage. This implied that web 2.0 applications are not utilized for teaching and learning of Business education in Nigerian Universities. The study also found that respondents agreed that they never use web 2.0, to talk about using them for any purpose. This supports the prior finding by Sawant (2012) who reported that some faculties reported that they never use web 2.0 applications for anything. The study also revealed that lack of technical expertise, planning time, uneasiness with openness and public discourse and interactions are some of the reasons why web tools are not used in teaching and learning. This finding is in line with Mamman and Nwabufo (2013) who found that lack of time for planning, adequate technical support and limitation of ICT skills are barriers to the integration of web 2.0 tools. The implication of these findings is that Nigeria University lecturers and students are not updated regarding the integration of technologies in the teaching and learning process.
Conclusion
This study was an attempt to assess the use of web 2.0 technologies in the teaching and learning of business education in Nigerian Universities. The findings of the study indicated that web 2.0 technologies are not used in the teaching and learning of Business education. Even though traditional teaching and learning paradigms have been shaken by the impact of the integration of new technologies into our educational practices, it looks as if business education appears to be immune against this technological virus that has infected virtually all facets of human life. This implied that the teaching and learning of business education are still carried out under the traditional mode where students are told what to learn, as well as when, where and how. Instead, that knowledge should be actively constructed, and students should be made responsible for their learning. This type of traditional environment does not prepare the learner for the contemporary work world that exists today. Since this is the case, it means that business education is not ready for the 21 st century because its graduates would not be able to acquire 21 st -century skills to meet up with the demands of the present world of work. This means that the graduates would half-baked, the consequences of producing half-baked graduate is better imagined.
7.
Recommendations Based on the findings of this study, the following recommendations are made: 1. This study recommends that university management should provide several training courses in web 2.0 technologies which staff should be exposed to in order to respond to the training needs. The respective departments should conduct the training needs assessment.
2. Business education teachers even as digital immigrants should try to stay updated regarding the emerging technologies that can be applied in the teaching/learning process and try to incorporate them into the classroom.
3. There is the need for Business educators to revised and develop Business Education curriculum to deploy ICT application effectively, and they should be specifically designed to fit the emerging Web technologies since they promote and facilitate interactive and collaborative learning.
4. There is a need for university management to provide technical support for staff and students to help them to divert the use of web 2.0 technologies from entertainment to educational uses. | 5,665.8 | 2019-03-01T00:00:00.000 | [
"Business",
"Computer Science",
"Education"
] |
Miniaturized high-NA focusing-mirror multiple optical tweezers
An array of high numerical aperture parabolic micromirrors (NA = 0.96) is used to generate multiple optical tweezers and to trap micron-sized dielectric particles in three dimensions within a fluidic device. The array of micromirrors allows generating arbitrarily large numbers of 3D traps, since the whole trapping area is not restricted by the field-of-view of the high-NA microscope objectives used in traditional tweezers arrangements. Trapping efficiencies of Qmax r 0.22, comparable to those of conventional tweezers, have been measured. Moreover, individual fluorescence light from all the trapped particles can be collected simultaneously with the high-NA of the micromirrors. This is demonstrated experimentally by capturing more than 100 fluorescent micro-beads in a fluidic environment. Micromirrors may easily be integrated in microfluidic devices, offering a simple and very efficient solution for miniaturized optical traps in lab-on-a-chip devices. © 2007 Optical Society of America OCIS codes: (140.7010) Trapping; (170.4520) Optical confinement and manipulation; (350.3950) Micro-optics; (170.4520) Cell analysis; (040.1240) Arrays; (999.9999) Microfluidics; Lab-on-a-chip References and links 1. A. Ashkin, “Acceleration and Trapping of Particles by Radiation Pressure,” Phys. Rev. Lett. 24, 156 (1970). 2. A. Ashkin, J. M. Dziedzic, J. E. Bjorkholm, and S. Chu, “Observation of a Single-Beam Gradient Force Optical Trap for Dielectric Particles,” Opt. Lett. 11, 288–290 (1986). 3. D. R. Reyes, D. Iossifidis, P. A. Auroux, and A. Manz, “Micro total analysis systems.
Introduction
In 1970, Arthur Ashkin demonstrated how milliwatts of laser radiation can be used to accelerate end even trap micron-sized particles suspended in liquid and gas [1] and in 1986 he demonstrated the single-beam gradient force optical trap, commonly referred to as optical tweezers [2].Today, much interest is given to explore possibilities for combining optical forces with microfluidic systems (lab-on-a-chip or miniaturized analysis systems [3]).Optical forces have been proposed for trapping and manipulating [4,5], sorting [6,7] or guiding [8] micron-sized artificial as well as biological particles within microfluidic devices, demonstrating the potential of this micro-manipulation technique for future miniaturized analysis systems.
Integrating large matrices of optical traps in microfluidic devices may allow performing par-allel and well controlled bio-chemical reactions on arrays of mesoscopic objects, including living cells, for the assessment of statistical data, drug screening, or for recovery of rare primary cells.Several multiple optical trapping schemes have already been proposed relying on very different techniques, including diffractive elements [9,10], interfering beams [11,12], VCSEL arrays [13], microlens arrays [14] or optical fiber-bundles [15].Certain optical trapping schemes even allow generating multiple traps that are computer-reconfigurable using laser scanning [16] or spatial light modulators [17,18].However, the miniaturization has essentially been restricted to the microfluidic side.Today's optical trapping schemes mostly rely on macroscopic optical components and on rather complex, cumbersome optical set-ups, commonly arranged around fluorescence microscopes.Also, the very limited field-of-view of high numerical aperture objective lenses commonly employed for optical trapping realistically restricts the number of particles that can be trapped simultaneously, especially if such particles have relatively large dimensions, as it is the case for living cells.The miniaturization of the optical components needed for optical trapping could lead to innovative optical trapping and analysis systems, partially or completely integrating optics and microfluidics within the same analysis biochip.A miniaturized version of the counterpropagating two-beam trap was achieved in the 90's using two facing optical fibers [19].This trapping configuration was recently demonstrated in a completely miniaturized device embedding both the trapping laser sources and the microfluidics within the same monolithic semiconductor [20].Miniaturizing the single-beam gradient force optical trap would require high numerical aperture (NA) micro-optical components, which is hardly attainable.The only successful example so far has taken advantage of a special tapered optical fiber [21].
In this article we demonstrate that miniaturized focusing mirrors can provide the high-NA necessary for generating single-beam optical traps with micro-optical components.Furthermore, arrays of such micromirrors provide a highly scalable approach for generating a large number of optical traps, and may be directly integrated into microfluidic devices.
Parabolic micromirrors as high-NA micro-optics
When operating in a single-beam configuration, optical traps rely on highly convergent light beams (at least NA > 0.7, but typically NA > 1) capable of trapping micrometer-sized dielectric particles in three dimensions.Typically, objective lenses are employed to perform such a tight focusing task.Although single aspheric air-immersed lenses with NAs as high as 0.7 are commercially available, such a high NA can hardly be reached with microlenses [22].Simple calculations show that the sides of a single-sided aspherical microlens should be very steep relative to the substrate if standard optical glass (n 1.56) is used.High-index materials, such as silicon, are not employable in the visible and near-infrared ranges due to their poor optical transmission at these wavelengths.Besides the technical issues related to the fabrication of high-aspect ratio aspherical microlenses, their effective numerical aperture is limited because the high incidence angles strongly restrict the fraction of light which is effectively refracted at the higher NAs.Graded-index (GRIN) lens arrays might also be considered, but their NA is usually limited to 0.5, which is insufficient to generate single-beam optical traps.Special GRIN fiber bundles with NA as high as 1.0 have been used for multiple optical trapping [15], but for some unspecified reason 3D optical trapping could not be achieved.Hybrid approaches, e.g.plano-convex microlenses featuring a refractive index increase towards the side of the lens may provide an opportunity to reach higher NAs, but do not seem to be technically feasible at this time.
Instead, a parabolic mirror directly allows for high-NA light focusing, and it is also well suited for miniaturization.The reflection on the mirror takes place within a solid media of refractive index n solid , allowing a higher NA to be generated (c) Numerical aperture achievable with parabolic mirrors, both considering reflection in air (n = 1) or in a higher refractive index solid (n = 1.56), compared to that of a single plano-convex lens (lower straight line, paraxial approximaton), as a function of the diameter d to radius-ofcurvature R ratio.The star ( ) indicates the aperture achieved in the present work.
along the mirror optical axis, is focused to one point without aberrations in the geometrical approximation.The numerical aperture of a parabolic mirror (PM) is given by where n is the refractive index of the media immediately adjacent to the mirror's reflecting surface.As it will be described in the next section, we have produced focusing parabolic micromirrors by negative replication of an array of plano-convex microlenses.A comparison between the NAs of the master microlenses with that achievable with the molded micromirrors is implemented in Fig. 1(c).Both the lenses and the mirrors are characterized by the same diameter to radius-of-curvature ratio d/R.For the plano-convex lens, we assume a paraxial approximation and consider that the lens is composed of conventional optical glass with index of refraction n lens = 1.56.This paraxial approximation (straight line ending in dots) is reasonable at least for plano-convex lenses characterized by apertures up to NA L 0.2.In the very low-NA limit (d/R << 1), a paraxial approximation may also be considered for the mirrors (NA PM nd/R).Within this limit, the NA of an air-immersed (n = 1) parabolic mirror is more than three times higher than the one of a single plano-convex lens having the same diameter and radius of curvature The factor n in Eq. ( 3) appears because the angles θ at which rays are redirected by the mirror are independent of the adjacent media's refractive index, conversely to the refraction at a lens curved interface.Therefore, if the media on the reflection side of the mirror is characterized by an index of refraction n higher than unity, the NA of the mirror is even further increased by a factor n. The example reported in Fig. 1(c) assumes that the mirror is immersed in a dielectric media characterized by the same refractive index as the one of the lens (n = n lens = 1.56).This corresponds to a ratio NA PM /NA L of 5.57 in the paraxial limit.In the non-paraxial regime, this ratio is somewhat reduced due to the non-linearity of Eq. ( 1), but still is close to five in practical cases.
The particular physical configuration of the parabolic mirrors used in the present experiments is illustrated in Fig. 1(b).The volume on the concave side of the mirror, where reflection takes place, is filled by a solid characterized by a high index of refraction n solid .The focus of the mirror is located within the adjacent fluid containing particles to be trapped (typically water) which has an index of refraction n f luid , lower than the one of the embedding media n solid .The NA gain factor with respect to a mirror that would be filled by the fluid still is of n solid /n f luid .Some spherical aberration is introduced in the system (similarly as with oil-immersion microscope objectives) but a resulting reduction in the trapping efficiency is expected to be limited provided that the distance h between the interface and the foci is kept small [23].
As shown in Fig. 1(c), micromirrors are easily overcoming the NA requirements for 3D trapping (NA > 0.7, horizontal dashed line).This limit is indicative, since it depends on the characteristics of the object to be trapped.The star ( ) indicates the aperture achieved by the micromirrors produced in the framework of this work.Their fabrication is described in the next section.
Micro-mirror array fabrication
An array of parabolic micro-mirrors was successfully produced by molding in UV-curing resist a commercially available array of micro-lenses (Süss MicroOptics, Neuchâtel, Switzerland).These fused silica microlenses (NA = 0.15) have a diameter of 240μm, a radius of curvature of 350μm.Their crucial characteristic for this project is the aspherical cross-sectional profile characterized by a conic constant of K = −1, corresponding to a parabola [24].The microlenses are arranged on a hexagonal array with a pitch of 250μm, the 5mm×5mm array containing more than 400 microlenses.
As illustrated in Fig. 2, a thin gold layer (60nm) is evaporated onto the microlens array prior to the replication of the surface relief into a UV-curing resist (Norland optical adhesive 81, n = 1.56) on a 1mm thick microscope slide.After polymerization and removal of excessive resist, the microlens array is detached from the microscope slide.The low adhesion of gold to the silica of the microlenses -as compared to its adhesion to the hardened resist -ensures that the gold composing the reflective surface of the micromirrors is transferred to the resist side.A second layer of the same UV-curing resist is applied onto the micro-mirrors, and a 80μm thick cover-glass (Menzel #00) is deposited on top prior to a second curing step.
The very thin gold layer ensures that the mirrors are partially transparent to visible light, but highly reflective to the near infrared trapping wavelength (98.6% reflectivity calculated at 1064nm, the rest being essentially absorbed in the gold layer).In addition to the gain in NA with respect to a water-immersed mirror, there are several other reasons why the micromirror array is merged in resist and covered by a thin glass.The first, most important objective, is allowing the foci to be located a few microns above the cover-glass, which will constitute the bottom of a fluidic channel.This ensures that the particles in the channel will flow in the vicinity of the foci and will be captured efficiently.Second, since the refractive index is the same on both sides of the mirrors, the micromirror array does not act as a diverging microlens array when observing in transmission using visible light, allowing undisturbed imaging of the trapping area.Finally, the delicate resist structure composing the micromirrors is mechanically stabilized and the gold layer is well protected; this permits easy cleaning and re-use of the device.
The microlenses that were used as a master mold for the mirrors are made of fused silica, which has a relatively low index refractive index (n lens = 1.45 at 1064nm).As a consequence, they are characterized by a relatively low NA of 0.15, and in this particular case the ratio NA PM /NA L is as high as six.The resist-immersed parabolic mirrors reach an aperture of NA = 0.96.
Fluidic device
The fluidic device developed for the purpose of testing the micro-mirror traps is illustrated in Fig. 3. Its bottom is composed of the microscope slide of Fig. 2(c rors.Two holes are drilled into a second 1mm thick microscope slide, on top of which two PDMS-elastomer pieces providing support for the fluid access tubings are bonded by surface activation in a mild oxygen-plasma discharge.A 100μm thick two-sided adhesive tape, cut-up in its center to form the main fluidic channel, bonds the two microscope slides and provides a seal for the solution to be flown in the system.Two 1ml microcentrifuge tubes are used as reservoirs and connected by plastic tubes to the fluidic system (omitted in Fig. 3).The input and output reservoirs are positioned respectively a few centimeters below and above the fluidic chamber.Smooth fluid flow is generated with little air pressure (fraction of a mbar) applied to the input reservoir and controlled by a manual pressure regulator.Reducing the pressure generates backward flow thanks to a communicating vessel mechanism.The traps in the channel are simply generated by directing a collimated laser beam onto the fluidic device (the micromirrors being embedded in the device).
Laser sources, observation and fluorescence detection
The optical set-up employed in the present experiments is schematically illustrated in Fig. 4.
The trapping laser source is an Ytterbium fiber laser (IPG Photonics) emitting in a linearly polarized T EM 00 mode at a wavelength of 1064nm and delivering up to 10W adjustable optical power.Two fluorescence excitation He-Ne laser beams (Polytech GmBh 543nm/0.5mW,633nm/2mW) are expanded to a diameter close to that of the trapping laser (roughly 5mm for a 1/e 2 irradiance drop) and coupled in the trapping laser path using a lowpass filter.The three laser beams strike upon the micromirror array at perpendicular incidence and are focused confocally by the micromirrors.ror array, being partially transparent to visible light, with different magnifications (L1).
µm
In addition to providing the high-NA necessary for 3D optical trapping, the micromirrors are also used to collect fluorescence light emitted by the particles.Indeed, since particles are trapped at the focus of the mirrors, emitted fluorescence light is collected with high efficiency by the mirrors and quasi-collimated beams are subsequently relayed on a color camera (CCD2, PCO Pixelfly) through a 4 f relay telescope system (0.8×) composed of lenses L2 and L3.F1 and F2 are custom designed filters (Chroma) being highly reflective at the wavelengths of the trapping laser and the fluorescence excitation lasers, but highly in passbands for fluorescence emission wavelengths.
3D trapping
Several solutions of polystyrene beads, with diameters ranging from 2.5 to 15 μm, were introduced in the fluidic system to test optical trapping with the micromirrors.All sizes could successfully be trapped in three dimensions.Figure 5 illustrates a transmission image of four 9.33μm polystyrene beads (Polysciences, Inc.) trapped at the focus of the parabolic micromirrors.Several arguments demonstrate that 3D trapping is achieved in the present experiments.By subsequently imaging the trapped particles and particles deposited at the bottom of the fluidic channel, the trapping plane was estimated to lie about 20μm inside the channel.Since the total depth of the channel approximates 100μm, particles certainly are not pushed against the ceiling of the channel.Another evidence demonstrating that the particles are trapped far away from the surfaces can be found by observing the particles speeds.The velocity of particles being released from the traps (e.g.turning off the trapping laser) is much higher than the one of particles flowing at the bottom of the channel.This indicates that the traps are located closer to the intermediate plane in the channel, where the parabolic flow velocity profile generates higher flow speeds.
Trapping efficiency
The maximal transverse trapping force F max r achievable with the micromirror tweezers was measured by the conventional viscous drag-force method relying on the Stokes formula.The force is reported using the normalized efficiency factor Q max r [25] where n f luid and η are respectively the fluid refractive index and dynamic viscosity, a is the particle radius, P trap is the optical power available at the trap, and v max is the maximal flow velocity that the trapped beads can sustain.Practically, the flow velocity in the fluidic channel was gently increased until the trapped particle escaped the trap, and the particle speed v max after the escape was measured by video microscopy.Since the trapping efficiency measurements were performed close to the center of the micromirror array, the laser power at the trap is approximated by the peak irradiance I 0 of the trapping gaussian laser beam (of half width w and total power P tot ) incident onto the micromirror array, multiplied by the micromirror crosssection A (d is the diameter of the micromirror) The factor α takes into account power losses, which are assumed to be restricted to the limited reflection at the golden mirrors (98.6%) and to residual reflections at the air-glass and the glasswater interfaces (α = 0.93).Using a total laser power of P tot = 8W , the central trap in the array receives a power of P trap = 34mW .An escape velocity of v max = 385 ± 62μm/s (N = 20) was measured for the 9.33μm polystyrene beads, corresponding to a transverse trapping efficiency of Q max r = 0.22 ± 0.03.The escape velocity measurements were performed on 20 different micromirrors, half of the measurements in a reverse flow direction to exclude asymmetry effects related to an eventual slight misalignment.These measurements were realized with an array of micromirrors whose focal plane was located relatively deep in the fluidic channel, h 30 − 40μm.This ensured that the particles were trapped far enough from the surface to limit proximity hydrodynamic force effects to less than 10% [26].Also, the axial flow velocity gradient related to the parabolic flow velocity profile, which is not considered by the Stokes formula for the viscous force, is less pronounced closer to the central plane in the channel.
Multiple trapping associated with fluorescence light collection through the micromirrors
Due to the relatively large pitch of the present micromirror array (250μm) and the limited field-of-view of the microscope objective only a fraction of the trapping area could be viewed directly in the transmission mode with sufficient resolution for observing individual particles.Nevertheless, direct observation of the trapping area is not the only possibility, and might not even be necessary.The high-NA of the micromirrors allows surveying all particles in the array in a reflection mode, overcoming this field-of-view restriction.In order to demonstrate this possibility, the micromirrors were tested as fluorescence light collectors.A solution containing a mixture of fluorescent beads, 6μm in diameter (Molecular Probes AlignFlow), was let flow into the fluidic system.As particles are trapped at the focus of the micromirrors, they are simultaneously illuminated by the He-Ne fluorescence excitation lasers and emit fluorescent light.The latter is efficiently collected at high-NA by the micromirrors and sent through a 4-f system to the color camera (CCD2).The sequence in Fig. 6 illustrates the array of micromirrors "turning- on" during the filling of the array, revealing the particle's individual fluorescence light color.Because of the gaussian profile of the trapping beam, the traps at the center of the array dispose of more optical power, thus particles can be trapped at higher flow speeds.No fluorescence signals could be observed from non-trapped particles at the CCD integration times (50ms) used in the present experiments.
Discussion
The physical configuration of the optical tweezers generated by the miniaturized parabolic mirrors is similar to that generated using a high-NA microscope objective, as both are single-beam gradient force optical traps.An advantageous difference consists in that high-NA parabolic mirrors produce convergent beams having proportionally more energy in the high spatial frequency components, due to their different apodization factor [27].Peripheral rays in the converging cone of light are known to be of fundamental importance for the axial trap stability [28,29], and also play an important role in the trap stability in the transverse direction [30].Therefore, at equal NA, a parabolic mirror may allow generating more efficient traps than a high-NA objective lens.The transverse trapping efficiency of Q max r 0.22 obtained so far with the micromirrors is somewhat lower than typical reported values of Q max 0.25 − 0.35 (for "large" ∼ 10μm polystyrene beads) using high-NA objective lenses [31,23].Such a relatively limited trapping efficiency may partially be explained by the micromirrors lower numerical aperture (NA = 0.96 with respect to commonly employed oil-immersion objective lenses performing NA = 1.3).Also, the micromirror surface quality has not been investigated yet, and may be inferior to that of the master microlens array due to non-conformities associated with the replication process.Finally, the spherical aberration caused by the refractive index interface at a trapping depth of h 30 − 40μm may also be involved in this reduced efficiency [23].Still, micromirrors could be designed to reach the same NAs as objective lenses, and their cross-sectional profile may be adapted to minimize aberration at a particular trapping depth.
A well-known issue related to the use of focusing mirrors, particularly if characterized by high-NA, is that slight deviations of the incident beam from the optical axis give rise to important levels of coma.Alignment accuracy should be better than 0.006 o to ensure undistorted focusing [27].Experimentally, it was observed that trapping was not as sensitive as this to the accurate alignment of the laser beam onto the micromirror array.Such a low sensitivity to alignment accuracy may be related to the miniature size of the mirrors, as the wavefront aberration scales with the size of the mirrors.Also, the sensitivity of the trapping efficiency to alignment was less important when handling relatively large particles (9.33μm beads) than with the smaller 2.5μm particles closer in size to the wavelength.
The trap depth h above the cover-glass turned out to be a critical parameter for the efficient capture of particles in the fluidic device.As particles flow through the fluidic channel, they sediment and already flow in vicinity of the bottom of the channel when arriving at the trapping area.Therefore, the best catching yields were achieved with traps positioned 15 to 25μm above the bottom of the channel, ensuring that the particles were flowing in vicinity of the plane of the foci.Employing micromirror arrays with h > 30μm, particles could not be captured efficiently from the flowing solution unless operating at reduced flow speeds, allowing them enough time to raise to the level of the traps under the effect of the levitating radiation pressure.
The micromirror arrays used in these first experiments have a relatively large pitch of 250μm, consequently limiting the trap density to the order of 20 traps/mm 2 .Such large micromirrors were used only because microlens arrays having a smaller lens diameter, a parabolic profile and a sufficiently high NA were not commercially available.Employing smaller micromirrors would increase the trapping density inversely proportional to the square of the mirror size.Reasonably, micromirrors with a cross-sectional diameter smaller than 50μm may be used for trapping living cells, thus the trap density could reach 500 traps/mm 2 .This is still by far below the highest optical trapping densities that have been achieved.However, as the trapping area is not restricted by the field of view of a microscope objective lens, the total number of traps may be increased at will (scalability).This advantage is essential when working with larger particles like living cells, having typical diameters in the 10 − 15μm range, as no more than 20-30 may be trapped simultaneously within the field of view of a high-NA objective lens.Trapping with micromirror arrays would allow increasing the total number of trapped cells by several folds.Micromirrors uniquely combine scalability with three-dimensional trapping, the latter certainly being an asset for working with biological particles having a large tendency to stick to the fluidic walls.
Another important advantage of the micromirror traps is their optical power throughput: they work with minimal power losses, when compared to tweezers based on microscope objective lenses wasting almost half of the laser power due to beam clipping or to limited optical transmission.Micromirrors also are achromatic, allowing undisturbed operation with different fluorescence wavelengths.
Obviously, trapping a large number of particles is not a goal by itself.Multiple optical trapping systems are very likely to find their highest potential in spectroscopic techniques, e.g.fluorescence or Raman spectroscopy [32], aiming at analyzing many particles at the same time and over extended periods of time.For such applications the ability to efficiently detect individual light signals from all the trapped particles at the same time is of primary importance.As demonstrated in the first experiments with artificial fluorescent particles reported above, micromirror trap arrays can be used to collect individual light signals at high-NA simultaneously from all trapped particles, giving access to multi-particle high sensitivity levels of detection.Very large assemblies of living cells could be achieved using micro-mirror arrays, opening the way for massively parallel analysis.Statistically relevant data may be collected within a single experiment, avoiding time-consuming repetitive experiments and ensuring that all particles are analyzed in the same experimental conditions.
Micromirror arrays may advantageously be integrated in microfluidic systems.In the present study, a simple fluidic device was developed only for the purpose of demonstrating the mi-cromirror's trapping and fluorescence light collection possibilities.More sophisticated devices combining microoptics and microfluidics have been described [33].Micromirrors may very simply be integrated in similar systems, and are potentially inexpensive and mass producible, e.g. using mold casting techniques.They are ideal candidates for optical traps integrated in lab-on-a-chip type microflow devices.
Conclusions
Multiple 3D optical trapping using parabolic micromirror arrays has been demonstrated.The trapping performances of these optical tweezers are comparable to those of conventional tweezers relying on macroscopic optical components.The micromirror approach allows multiple trapping in a highly scalable approach: every trap possesses its own miniaturized focusing element, thus the total number of traps is not restricted as in schemes relying on high-NA microscope objectives having a very limited field-of-view.Simultaneous yet individual fluorescence detection from all trapped particles is demonstrated using the micromirrors as high-NA light collectors, thus opening the way for multi-particle high-sensitivity levels of detection.Micromirrors could easily be integrated into all kinds of micro-fluidic systems.They represent an ideal solution for miniaturized multiple optical traps in lab-on-a-chip devices.
Fig. 1 (Fig. 1 .
Fig. 1.(a) Mirror parameters and basic focusing geometry.(b) Geometry of the focusing mirrors used in the described experiments.The reflection on the mirror takes place within a solid media of refractive index n solid , allowing a higher NA to be generated (c) Numerical aperture achievable with parabolic mirrors, both considering reflection in air (n = 1) or in a higher refractive index solid (n = 1.56), compared to that of a single plano-convex lens (lower straight line, paraxial approximaton), as a function of the diameter d to radius-ofcurvature R ratio.The star ( ) indicates the aperture achieved in the present work.
Fig. 2 .Fig. 3 .
Fig. 2. Fabrication of the micromirror array: (a) 60 nm of gold are evaporated on an array of parabolic microlenses (b) Negative replication in UV-curing resist forms the focusing micromirror array.The thin gold layer detaches from the microlens array, forming the reflective surface on the hardened resist (c) A 80 μm thick cover-glass is glued on top with additional resist merging the micromirrors.
F2Fig. 4 .
Fig. 4. Optical set-up.Left: lasers for trapping and for fluorescence excitation.Right-above: fluorescence signals detection.The light emitted by the particles is collected at high-NA by the micromirrors and relayed onto CCD2 trough a 4 f system.F1 and F2 are custom designed filters being highly reflective for the trapping laser and fluorescence excitation laser wavelengths, but transmissive for emitted fluorescence.Right-below: observation is performed in transmission through micromirror array partially transparent to visible light.
Fig. 5 .
Fig. 5. (Movie 2.43MB) Transmission image (10×) of four 9.33μm diameter polystyrene beads trapped in three dimensions at the focus of the parabolic micromirrors.The movie shows real time trapping both at 10× and 5× magnifications, and escape velocity measurements.
Fig. 6 .
Fig. 6. (Movie 899KB) Sequence showing fluorescence light detection using the micromirrors.The colored circles are not the particles themselves, but the micromirrors "turning-on" as particles progressively fill the traps.The fluorescence light emitted by the trapped particles is collected with high-NA by the mirrors, and relayed onto the color camera though a 4-f system. | 6,570.2 | 2007-05-14T00:00:00.000 | [
"Physics"
] |
Quantitative Analysis and Efficient Surface Modification of Silica Nanoparticles
Aminofunctional trialkoxysilanes such as aminopropyltrimethoxysilane (APTMS) and (3-trimethoxysilylpropyl)diethylenetriam-ine (DETAS) were employed as a surface modification molecule for generating monolayer modification on the surface of silica (SiO 2 ) nanoparticles. We were able to quantitatively analyze the number of amine functional groups on the modified SiO 2 nanoparticles by acid-base back titration method and determine the e ff ective number of amine functional groups for the successive chemical reaction by absorption measurements after treating with fluorescent rhodamine B isothiocyanate (RITC) molecules. The numbers of amine sites measured by back titration were 2.7 and 7.7 ea/nm 2 for SiO 2 -APTMS and SiO 2 -DETAS, respectively, while the numbers of e ff ective amine sites measured by absorption calibration were about one fifth of the total amine sites, namely, 0.44 and 1.3 ea/nm 2 for SiO 2 -APTMS(RITC) and SiO 2 -DETAS(RITC), respectively. Furthermore, it was confirmed that the reactivity of amino groups on the surface-modified silica nanoparticles could be maintained in ethanol for more than 1.5 months without showing any significant di ff erences in the reactivity.
Introduction
Since Werner Stöber has developed the synthetic method for preparing silica particles, colloidal silica particles have been intensively investigated to understand the reaction mechanism and control the size and size uniformity [1], and they have been used in many areas such as silicon wafer polishing, beverage clarification, and composite materials [2].Recently, silica nanoparticles were represented as one of the most widespread nanomaterials in use because they have several features: (i) ease of preparation through hydrolysiscondensation reaction from relatively low-priced precursor molecules such as tetraethyl orthosilicate (TEOS) in the presence of acid or base catalysts, (ii) possible surface modification with various organosilicon compounds, and (iii) its biocompatibility without showing acute toxicity [3][4][5].
The silanol groups, Si-OH, on the silica surface can be easily modified to various functional groups by treating with organotrialkoxysilane (RSi(OR ) 3 ) compounds or methallylsilanes together with catalyst [6], and surface modification of the silica nanoparticles with biorecognition molecules can make a specific interaction with receptor sites of living systems.Based on this surface modification technique, there has been a great amount of research effort to use the silica nanoparticles as carriers for drug or gene deliveries [7][8][9][10].Due to its stability and good biocompatibility along with easy surface modification, silica was also used as a surface coating material for many nanomaterials [11].Organotrialkoxysilanes produce silanol groups by hydrolysis, which can be condensed with surface Si-OH group to form stable siloxane bond, Si-O-Si, for surface modification.However, they can also engage in self-condensation to form gel or oily oligomers to be precipitated within few hours in the presence of base catalysts.Therefore, there is a dilemma whether to use the large excess amount of organotrialkoxysilane to maximize the coverage of SiO 2 surface or to use minimum amount to prevent the formation of any unwanted self-condensed side products.These self-condensed side products, in practice, are usually removed by repeating centrifugation/redispersion processes because the actual size of self-condensation products are very small compared to the size of SiO 2 nanoparticles.Therefore, there is always a possibility that these selfcondensed products further condense with Si-OH groups of the silica surface, resulting in a thick coating layer instead of generating a monolayer modification.Even though the thick coating layer of organotrialkoxysilane has the advantage, in some cases, of giving more reactive functional groups for reactions, precise control of the number of surface functional groups cannot be achieved in a reproducible and reliable manner.
Among the various organotrialkoxysilane molecules for the modification of silica surfaces, 3-aminopropyltrimethoxysilane (APTMS) and 3-aminopropyltriethoxysilane (APTES) were widely explored by many researchers.The aminofunctional groups are known to enhance the dispersibility and miscibility of silica fibers [12], and to be used as a linkage unit to attach other functional molecules [13].The surface modification of silica has been investigated in bulk surfaces and micron size silica beads by many characterization techniques [14][15][16][17][18][19].A reaction mechanism was suggested as through the interaction between the amino group of APTES and surface Si-OH group in anhydrous condition [14,[20][21][22] or through the self catalytic effect of the amino group of APTES in polar alcoholic condition [23].Interestingly, it was known that aminofunctional trialkoxysilanes such as APTMS and APTES are readily soluble in water to give solutions of unlimited stability at their natural pH in which normal organotrialkoxysilanes cause rapid condensation of Si-OH groups to form insoluble gels, and the internal hydrogen bonding was suggested to explain their lack of reactivity [12].
In this study, we have investigated the characteristic feature of aminofunctional ligand molecules such as APTMS and DETAS in ethanol solution, forming monolayer on the surface of SiO 2 nanoparticles instead of generating a thick coating layer.The total number of amino groups on the surface of SiO 2 nanoparticle was quantitatively analyzed by a simple acid-base back titration method and the effective number of amino groups for the successive chemical reactions was also determined by spectroscopic measurements after treating with fluorescent Rhodamine B isothiocyanate (RITC) molecule.It was also confirmed that the number of amino groups on the surface-modified silica nanoparticles, and their reactivity could be maintained in ethanol at room temperature for more than 1.5 months without showing any significant differences.
Fourier transform infrared (FT-IR) spectra were recorded using a JASCO FT/IR-600 Plus spectrometer for wavelengths ranging from 400 to 4000 cm −1 to study the surface of silica nanoparticles.The absorption spectra of surface modified silica nanoparticles decorated with RITC were measured by UV-visible spectrometer (Sinco, S-3100).
2.1.1.Synthesis of Silica Nanoparticles.To a TEOS (2.5 mL) solution in 115 mL of dried ethanol, 3.75 mL of aqueous ammonium hydroxide solution (14.6 M) and 3.75 mL of water were added while stirring.After 12 h of stirring, silica nanoparticles were isolated by centrifugation at a speed of 15,000 rpm and the supernatant was removed.The isolated products were redispersed in ethanol.The washing process with centrifugation/redispersion was repeated 3 times.Finally, the redispersed nanoparticle solution was centrifuged at a speed of 2,000 rpm to remove any aggregated particles.The purified SiO 2 nanoparticles were homogeneously dispersed in ethanol.The size and shape of the nanoparticles were characterized by TEM, FE-SEM, and DLS.
Surface Modification of Silica
Nanoparticles with Aminofunctional Trimethoxysilanes. 10 mL of SiO 2 nanoparticle solution (5 mg/mL of ethanol) was added into ten different vials.Appropriate amounts of APTMS (or DEATS) were added into each vial to maintain the conditions of surface modification; weight ratios of SiO 2 : APTMS (or DETAS) were varied from 1 : 0.01 to 1 : 0.1.After 12 hr of stirring at room temperature, modified silica nanoparticles were isolated and purified by centrifugation/redispersion processes (for 10 min at 15,000 rpm, 3 times) to remove the excess APTMS (or DETAS).Finally, the purified SiO 2 -APTMS (or SiO 2 -DETAS) nanoparticles were kept dispersed in ethanol.
Quantification of the Number of Amine Sites on the
Modified SiO 2 Nanoparticles by Back Titration.10 mg of modified silica nanoparticles was dispersed in 20 mL of 1.0 mM HCl solution and stirred for 30 min.Nanoparticles was separated by centrifugation at 15,000 rpm for 10 min, and 10 mL of supernatant was collected to be titrated with the standardized 1.0 mM NaOH solution in the presence of phenolphthalein indicator.From the difference of HCl concentration after treating with modified SiO 2 nanoparticles, the molar concentration of amine sites on the modified SiO 2 nanoparticles (10 mg) was calculated.This value was converted into the number of amine sites per unit area (nm 2 ) of the surface of SiO 2 nanoparticle based on the density of bulk silica (2.2 g/mL) and the surface area of 100 nm SiO 2 nanoparticle.
Quantification of the Number of Amine Sites on the Modified SiO 2 Nanoparticles by Absorption Measurement after
Coupling with RITC. 10 mL of the dispersed SiO 2 -APTMS (or SiO 2 -DETAS) nanoparticles in ethanol (5 mg/mL solution), prepared from various ratios of SiO 2 : APTMS (or DETAS), was treated with RITC (34.4 mg, 6.42×10 −2 mmol), and the mixed solution was stirred for 12 hr at RT. SiO 2 -APTMS(RITC) {or SiO 2 -DETAS(RITC)} nanoparticles were isolated by centrifugation at a speed of 15,000 rpm and repeatedly washed by centrifugation and redispersion until no RITC was detected from the supernatant.All the purified SiO 2 -APTMS(RITC) {or SiO 2 -DETAS(RITC)} nanoparticles, prepared from different ratios of SiO 2 : APTMS (or DETAS), were redispersed in 10 mL of ethanol.Small portions of each solution of modified nanoparticles (0.5 mL of SiO 2 -APTMS(RITC) or 0.3 mL of SiO 2 -DETAS(RITC), resp.) were diluted into 5 mL of ethanol for the absorption measurements.The absorptions of each sample were measured by UV-Vis spectrophotometry.A series of diluted solutions of RITC in ethanol was prepared to make the calibration curve and the absorption values were also measured by UV-Vis spectrophotometry.
Reaction of the Surface Amine Sites of SiO 2 -APTMS by a Process
Analogous to Dendrimer Synthesis.50 mg of SiO 2 -APTMS nanoparticles was stirred with methyl acrylate (MA; 1.8 mL) in 10 mL of methanol in a vial at 25 • C, with hydroquinone as an inhibitor.After 48 hr of stirring, the modified silica nanoparticles were purified by centrifugation/redispersion in methanol (for 10 min at 15,000 rpm, 3 times) to remove the excess MA.The purified SiO 2 -MA nanoparticles were dispersed in methanol and treated with ethylenediamine (EDA; 5.01 mL, concentration 7.5 M) at 25 • C.After 48 hr of stirring, the SiO 2 -EDA nanoparticles were purified by centrifugation/redispersion in methanol (for 10 min at 15,000 rpm, 3 times), and the purified SiO 2 -EDA nanoparticles were kept dispersed in methanol.
Long Term Stability Test of Aminofunctional Groups on
SiO 2 -DETAS by Coupling with RITC. 10 mL of the dispersed SiO 2 -DETAS nanoparticles in ethanol (5 mg/mL solution), after storing for 1.5 months, was treated with RITC (34.4 mg, 6.42 × 10 −2 mmol) and the mixed solution was stirred for 12 hr at RT. SiO 2 -DETAS(RITC) nanoparticles were isolated by centrifugation at a speed of 15,000 rpm and repeatedly washed by centrifugation and redispersion until no RITC was detected from the supernatant.The purified SiO 2 -DETAS(RITC) nanoparticles were redispersed in 10 mL of ethanol.0.3 mL of SiO 2 -DETAS(RITC) nanoparticles were diluted into 5 mL of ethanol for the absorption measurements.The absorption of SiO 2 -DETAS(RITC) was measured by UV-Vis spectrophotometry.
Results and Discussion
To check the stability of aminofunctional trialkoxysilanes in aqueous and alcoholic solutions as reported in the literature [12], APTES was dissolved in deuterated methanol (CD 3 OD) and the changes were monitored by 1 H-NMR.As shown in Figure 1, new methylene peaks next to oxygen atom were increased at 3.6 ppm with time, corresponding to free ethanol molecule resulting from either alcohol exchange with methanol solvent or hydrolysis with small amount of water in the system.Even though almost all the ethoxy groups were replaced from APTES after 6 h, the clear solution remained without forming an insoluble gel by condensation reaction.A similar but much faster hydrolysis reaction was observed in D 2 O solution (see Figure S1 in supplementary materials available online at doi:10.1155/2012/593471), and again no gel was formed.When the concentration of APTES was increased as high as 0.1 M where silica nanoparticles were usually prepared in the presence of basic NH 4 OH catalyst, a clear solution was maintained for several weeks.The methanol solution of APTES was also checked by electrospray ionization mass spectrometry (ESI-MS).The very slow alcohol exchange and hydrolysis reaction was confirmed again and no higher molecular ion peaks (higher than 300) were detected after 6 h as shown in Figure S2.
Based on the results of stability tests of APTES in alcoholic solution confirming no self-condensation of APTES in a meaningful speed, the surface modification of silica nanoparticles having an average size of 100 ± 15 nm was carried out with aminopropyltrimethoxysilane (APTMS) and (3-trimethoxysilylpropyl)diethylenetriamine (DETAS) in ethanol solution (Scheme 1(a)).Silica nanoparticles were prepared by the known method after slight modifications [1,12], and dispersed in ethanol (5 mg/mL of EtOH). 10 mL of this solution was treated with different amounts of APTMS in the range of 1 to 10 wt%.After the mixed solution was stirred for 12 h, the modified silica nanoparticle, SiO 2 -APTMS, was isolated and washed by centrifugation 3 times and redispersion in ethanol.To easily check the number of effective amine groups on the surface of silica naoparticle, these modified silica nanoparticles were reacted with a large excess amount (more than 10 3 times excess compared to the amounts of used APTMS) of Rhodamine B isothiocyanate (RITC) to attach fluorescent rhodamine units by forming thiourea linkage, which has become one of the useful bioconjugation methods [24].SiO 2 -APTMS(RITC) nanoparticles were again purified by centrifugation and redispersion in ethanol until no fluorescence from free RITC was detected in the supernatant.The absorption intensities from SiO 2 -APTMS(RITC) nanoparticles clearly showed that the absorption from chemically attached RITC units had increased as a function of the amount of APTMS used for the surface modification and seemed to reach the saturation point at a mixing ratio of around 1 : 0.05 (SiO 2 : APTMS, wt ratio) (Figure 2(a)).A very similar trend was observed from the modification experiments with DETAS, even though the absorption intensities were much higher due to the multiple amine sites (Figure 2(b) and Figure S3).These results showing saturation in absorption intensity seem to imply the formation of monolayer of APTMS and DETAS instead of the generation of a thick coating layer in our modification condition.
To confirm the formation of monolayer modification, silica nanoparticles before and after the surface modifications were precisely characterized by transmission electron microscopy (TEM) and dynamic light scattering (DLS) spectrophotometer.As shown in Figures 3(a)-3(c), TEM images of SiO 2 , SiO 2 -APTMS, and SiO 2 -DETAS did not show any significant size differences after the surface modifications.However, the size measured by DLS clearly showed the increment of the hydrodynamic sizes of SiO 2 nanoparticles in water from 137 to 141, and finally to 169 nm (Figures 3(d)-3(f)); the size from DLS is usually reported to be bigger than that from TEM due to the surrounded water molecules and swelling effect of surface molecules.A comparison of FT-IR spectra of SiO 2 -DETAS with SiO 2 clearly showed that the functional groups from DETAS molecule were observed after the surface modification (Figure 4).Since the surface-to-volume ratio of nanoparticles is dramatically increased compared to bulk materials, the actual number of aminofunctional groups in the modified silica samples would be reasonably high enough to be directly measured by simple acid-base titration.Due to the feature of amino group as a weak base and the possible aggregation and scattering problems during the titration, back titration method was employed for the direct measurement of amino groups on the surface of silica nanoparticles.Back titration is a technique designed to resolve problems when analytes are in nonsoluble solid phase having either too weak or too slow characteristics to give a valid reaction; the unknown concentration of a base sample is determined by reacting it with an excess volume of acid with a known concentration and then titrating the resulting mixture with the standardized base solution to give the differences of the acid concentration which corresponds to the concentration of unknown base sample [25].10 mg of modified silica nanoparticles was dispersed in 20 mL of 0.01 M HCl solution and stirred for 30 min.Nanoparticles were separated by centrifugation at 15,000 rpm for 10 min, and 10 mL of supernatant was collected to be titrated with the standardized 0.01 M NaOH solution in the presence of phenolphthalein indicator (see Table S1 for the summary of titration results and calculations).Table 1 summarizes the number of amine sites measured by the titration as well as the calculated values from the absorption measurement after treating with RITC.It has been reported that the number of surface silanol (≡Si-OH) groups on the silica surface could be measured by various techniques such as IR spectroscopy and inverse reaction chromatography; surface Si-OH concentrations vary in the range 3.3∼7 ea/nm 2 [26][27][28].It can be assumed that one APTMS reacts with three Si-OH sites on the surface to give the minimum values for the number of amine sites in the range 1.2∼2.3ea/nm 2 , which are comparable to the values obtained from the back titration method in our measurements (2.7 ea/nm 2 ).From the saturation value of SiO 2 -APTMS(RITC) nanoparticles in Figure 2(a), the number of effective amine sites reacting with RITC can be also calculated as 0.44 ea/nm 2 using the calibration curve of free RITC in ethanol solution (see Figure S4, based on the assumption that the absorption coefficient of RITC is not very different when it is attached on the surface).The differences from both analyses can be explained that only one out of six amino (-NH 2 and -NH-) groups can react with RITC probably due to the steric hindrance from the bulky and flat RITC molecule.Similar quantitative analyses were carried out for SiO 2 -DETAS and SiO 2 -DETAS(RITC) nanoparticles, which had three amine sites per DETAS ligand molecule.As expected, the number of amine sites measured by back titration was 7.7 ea/nm 2 and that of effective amine sites measured by absorption calibration was 1.3 ea/nm 2 , almost three times larger than SiO 2 -APTMS cases.To check the reproducibility and reliability of our quantitative analysis methods, a further chemical reaction was carried out from the amine sites of SiO 2 -APTMS, and the number of new functional sites was measured.For the quantitative conversion of each generation in the amineterminal dendrimer synthesis, it is known that a terminal primary amine could be successively reacted with methyl acrylate (MA) and ethylenediamine (EDA) to form one tertiary amine and two terminal primary amines (Figure S5) [29].The same reaction was carried out with SiO 2 -APTMS as illustrated in Scheme 1(b), and the number of amine sites in resulting SiO 2 -EDA was again measured by back titration.The number of amine sites per nm 2 was increased from 2.7 for SiO 2 -APTMS to 6.8 for SiO 2 -EDA, as shown in Table 1, corroborating that the conversion reactions proceeded almost quantitatively.
It is well known that the actual number of aminofunctional groups on the modified surfaces is changed with time due to the instability of surface layers.It has been also proposed that some amino group on the surface is bent toward the surface, and the bent amino group is interacted with circumjacent silanol group by hydrogen bonding or acid-base interaction to form ion pairs.Such interactions can reduce the actual reactivity of amine groups [30].In our surface modification method forming monolayer with amino-functionalized trialkoxysilanes, however, these types of interaction of amine groups with surface silanol groups (so-called backbiting interactions) were reduced significantly and the reactivity of amino group could be maintained for a long period of time without showing any differences in the reactivity.After the surface modification, SiO 2 -DETAS nanoparticles were stored in ethanol for 1.5 month.Then, SiO 2 -DETAS was treated with RITC and SiO 2 -DETAS(RITC) nanoparticles were purified by centrifugation/redispersion processes.The absorption intensities from Rhodamine B units from SiO 2 -DETAS(RITC) nanoparticles did not show any significant differences even after 1.5 month storage of SiO 2 -DETAS in ethanol solution, proving the excellent stability of surface aminofunctional groups and the reliable control of surface functionality in our modification method (Figure S6).
Conclusion
It has been demonstrated that a monolayer modification on the surface of SiO 2 nanoparticles can be obtained by using stable ethanol solutions of organofunctional trialkoxysilanes such as APTMS and DETAS.Precise characterizations with transmission electron microscopy (TEM) and dynamic light scattering (DLS) spectrophotometer have confirmed the formation of monolayer modification.The total number of amino groups on the surface of SiO 2 nanoparticle were quantitatively analyzed by a simple acid-base back titration method and the effective number of amino groups for the successive chemical reactions was also determined by spectroscopic measurements after treating with fluorescent Rhodamine B isothiocyanate (RITC) molecule, showing well-matched results with known values from literatures.Furthermore, the reproducibility and reliability of our quantitative analysis methods were confirmed by checking the change of amine sites of SiO 2 -APTMS after the quantitative conversion of the terminal primary amine to one tertiary amine and two terminal primary amines by successive reactions with methyl acrylate (MA) and ethylenediamine (EDA), a process well-developed in dendrimer synthetic methods.We believe that our method to generate monolayer modification and analyze the number of amine sites on nanoparticle surface will be useful in nano-bio research applications, such as sensors, diagnosis, and drug or gene deliveries, where reliable and reproducible modification and quantitative analysis are very critical.
Scheme 1 :
Scheme 1: Illustration of the surface modification of silica nanoparticle by APTMS followed by treatment with (a) RITC to measure the absorption and (b) methyl acrylate and ethylenediamine to increase the number of amine sites.
Figure 2 :
Figure 2: The absorption spectra of (a) SiO 2 -APTMS(RITC) and (c) SiO 2 -DETAS(RITC) nanoparticles prepared with various amounts of surface modification ligands, (b) and (d) relative intensity plots for each case clearly showing the absorption from chemically attached RITC units was increased as a function of the amount of APTMS used for the surface modification and seemed to reach the saturation point at around the mixing ratio of 1 : 0.05.0.5 mL of SiO 2 -APTMS(RITC) or 0.3 mL of SiO 2 -DETAS(RITC) was diluted into 5 mL of ethanol for the absorption measurements.
Table 1 :
Measured numbers of silanol and amine sites on the various silica surfaces from references as well as this work. | 5,024.6 | 2012-01-01T00:00:00.000 | [
"Chemistry"
] |
A dissemination workshop for introducing young Italian students to NLP
We describe and make available the game-based material developed for a laboratory run at several Italian science festivals to popularize NLP among young students.
Introduction
The present paper aims at describing in detail the teaching materials developed and used for a series of interactive dissemination workshops on NLP and computational linguistics 1 . These workshops were designed and delivered by the authors on behalf of the Italian Association for Computational Linguistics (AILC, www.ai-lc.it), with the aim of popularizing Natural Language Processing (NLP) among young Italian students (13+) and the general public. The workshops were run in the context of nationwide popular science festivals and open-day events both onsite (at BergamoScienza, and Scuola Internazionale Superiore di Studi Avanzati [SISSA], Trieste) and online (at Festival della Scienza di Genova, the BRIGHT European Researchers' Night, high school ITS Tullio Buzzi in Prato and the second edition of Science Web Festival, engaging over 700 participants in Center and Northern Italy from 2019 to 2021. 2 The core approach of the workshop remained the same throughout all the events. However, the materials and activities were adapted to a variety of different formats and time-slots, ranging from 30 to 90 minutes. We find that this workshop -1 In this discussion, and throughout the paper, we conflate the terms Natural Language Processing and Computational Linguistics and use them interchangeably.
2 Links to events are in the repository's README file.
thanks to its modular nature -can fit different target audiences and different time slots, depending on the level of interactive engagement required from participants and on the level of granularity of the presentation itself. Other than on the level of engagement expected of participants, time required can also vary depending on the participants' background and metalinguistic awareness. Our interactive workshops took the form of modular games where participants, guided by trained tutors, acted as if they were computers that had to recognize speech and text, as well as to generate written sentences in a mysterious language they knew nothing about.
The present contribution only describes the teaching materials and provide a general outline of the activities composing the workshop. For a detailed discussion and reflection on the workshop genesis and goals and on how it was received by the participants see (Pannitto et al., 2021).
The teaching support consist in an interactive presentation plus hands-on material, either in hardcopy or digital form. We share a sample presentation 3 and an open-access repository 4 containing both printable materials to download and scripts to reproduce them on different input data.
Workshop and materials
The activity contains both more theoretical and hands-on parts, which are cast as games.
Awareness The first part consists of a brief introduction to (computational) linguistics, focusing on some common misconceptions (slides 3-5), and on examples of linguistic questions (slides 6-12). Due to their increasing popularity, we chose vocal assistants as practical examples of NLP technologies accounting for how humans and machines differ in processing speech in particular and language in general (slides 20-39).
Games The core of the activity is inspired by the word salad puzzle (Radev and Pustejovsky, 2013) and is organized as a game revolving around a fundamental problem in NLP: given a set of words, participants are asked to determine the most likely ordering for a sentence containing those words. This is a trivial problem when approached in a known language (i.e., consider reordering the tokens garden, my, is, the, in, dog), but an apparently impossible task when semantics is not accessible, which is the most common situation for simple NLP algorithms.
To make participants deal with language as a computer would do, we asked them to compose sentences using tokens obtained by transliterating and annotating 60 sentences from the well known fairy tale "Snow White" to a set of symbols. We produced two possible versions of the masked materials: either replacing each word with a random sequence of DINGs (e.g. co §¦ for the word morning) or replacing them with a corresponding nonword (for example croto for the word morning). The grammatical structure of each sentence is represented by horizontal lines on top of it representing phrases (such as noun or verb phrases), while the parts of speech are indicated by numbers from 0 to 9 placed as superscripts on each word (Figure 1). Figure 1: The first sentence of the corpus "on a snowy day a queen was sewing by her window" translated using DINGs (above) and using non-words (below) Participants were divided into two teams, one team would be working on masked Italian and the other on masked English. Both teams were given the corpus in A3 format and were told that the texts are written in a fictional language.
Two activities were then run, focusing on two Each rule is made of felt strips for phrases, cards with numbers for parts of speech, and "=" cards. different algorithms for sentence generation. In the first, participants received a deck of cards each equipped with a button loop ( Figure 2) and showing a token from the corpus. Participants had to create new valid sentences by rearranging the cards according to the bigrams' distribution in the corpus. Using the bracelet method (slides 52-61), they could physically thread tokens into sentences.
In the second activity (slides 63-92), the participants extracted grammatical rules from the corpus, and used them to generate new sentences. In order to write the grammar, participants were given felt strips reproducing the colors of the annotation, a deck of cards with numbers (identifying parts of speech) and a deck of "=" symbols ( Figure 3). With a new deck of words (Figure 2), not all present in the corpus, participants had to generate a sentence using the previously composed rules.
Reflection and Outlook By superimposing a plexiglass frame on the A3 corpus pages (Figure 4), the true nature of the corpora was eventually revealed. The participants could see the original texts (in Italian and English) and translate the sentences they had created previously.
The activity ended with a discussion of recent NLP technologies and their commercial applications (slides 93-96), and of what it takes to become a computational linguist today (slides 97-99).
Activity Preparation
The preparation of the activity consists of several steps: (1) creating and tagging the corpora with morpho-syntactic and syntactic categories as described in the repository; (2) choosing the words to include in the card decks: these must be manually selected but scripts are provided to generate possible sentences based on bigram co-occurrences, and to extract all the possible grammar rules present in the annotation; (3) when the produced sentences and grammar are satisfactory, scripts are provided to generate (i) the printable formats of corpora and decks of cards, (ii) a dictionary to support the translation of sentences in the last part of the workshop, and (iii) clear-text corpora; (4) sentences from the clear-text corpora have to be manually cut and glued on a transparent support that can be superimposed on the printed corpora to reveal the sentences; (5) finally, some manual work is necessary: producing strips of felt or any material with the same colors used in the corpus; cutting threads; attaching a button loop to the relevant cards, etc.
Reusability
In the spirit of open science and to encourage the popularization of NLP, the teaching materials and source code are freely available in our repository (see footnotes 2-3 for the links). The print-ready material is released under CC BY-NC; the source code is distributed under the GNU gpl license version 3. All scripts work for Python versions 3.6 or above and the overall process requires python3, lualatex, pdftk and pdfnup as detailed in the README.md file in the repository, which contains all necessary instructions. | 1,806 | 2021-04-26T00:00:00.000 | [
"Computer Science",
"Education"
] |
CLIP-SP:Vision-language Model with Adaptive Prompting for Scene Parsing
We present a novel framework, named CLIP-SP, and design an adaptive prompt method to leverage pre-trained knowledge from CLIP for scene parsing. Our approach addresses the limitations of DenseCLIP, which has shown the superior performance of CLIP pre-trained models over ImageNet pre-trained models in image segmentation, but struggles with the rough pixel-text score maps for complex scene parsing. We argue that owing to containing all textual information on a dataset, the pixel-text score maps, i.e. , dense prompts are inevitably mixed with noise. To overcome this challenge, we propose a two-step method. Firstly, we extract visual and language features and perform multi-label classification to identify the most likely categories in the input images. Secondly, based on the top-k categories and confidence scores, our method generates scene tokens which can be treated as adaptive prompts for implicit modeling of scenes, and incorporates them into the visual features to feed the decoder for segmentation. Our method imposes a constraint on prompts and suppresses the probability of irrelevant categories appearing in the scene parsing results. Our method has achieved competitive performance, limited by the available visual-language pre-trained models. Compared with the DenseCLIP, our CLIP-SP achieves a performance improvement on ADE20K, yielding +1.14 % mIoU with a ResNet-50 backbone.
gions associated with semantic categories, e.g., road, person, sky and so on.Since Long et al. [3] proposed fully convolutional networks (FCNs) as the pioneer, many efforts have proposed various advanced improvements such as contextual representation aggregation [4][5][6]23], multi-scale representation learning [4,8], and vision transformer architecture designs [7,9,10].The large-scale pre-training models further promote the development of semantic segmentation because of their more robust representation and better modeling of intrinsic relationships.Fundamental vision-language pre-training models such as CLIP (Contrastive Language-Image Pretraining) [2] capture both fine visual and linguistic features and have shown excellent generalization ability to various downstream vision tasks.However, directly applying CLIP to scene parsing remains a challenge.With the help of prompt support sets composed of the target image, mask and text, CLIPSeg [11] achieves good performance on zero-shot and one-shot segmentation.However the method is hard to extend to complex scene parsing, because it relies on constructing high quality prompt support sets.DenseCLIP [1] roughly utilizes all categories of the dataset to generate prompts, which combines the visual features of image inputs and the linguistic features of all categories.To fit the segmentation task, the final mode of prompts is the mask , i.e., pixel-text score maps.We argue that utilizing all categories to generate prompts inevitably introduces noise, because redundant prompts are useless and misleading due to the equal treatment.This method partially disregards the information provided by the image inputs, thus the guidance from language approximates to be a retrieval of categories on a certain dataset.
Through statistical observation shown in Figure 1, it is found that 99.7% of images contain less than 25 categories for a single image on the validation set of the challenging ADE20K [12] dataset, and the maximum number of categories in one image is 27, which is much less than the total number of 150.To address the aforementioned issue and take advantage of the above observations, we adopt a two-step approach that starts with an additional branch to narrow the selection range of categories for adaptive generation of high quality prompts based on image inputs.Our approach mimics the way humans recognize objects in a scene at a glance, where the concepts present in the scene are instantly identified and then attributed to objects.To adaptively generate image-specific prompts, in the first stage we introduce a lightweight decoder for multi-label classification that takes advantage of multimodal knowledge of CLIP.Our approach involves a dual graph design, which allows the decoder to effectively incorporate both image and text features.Moreover, we propose a novel selection strategy to boost model performance and accelerate training by utilizing a subset of the ground-truth labels for promoting at the beginning of the training process.
In this paper, we take a step further to explore the applicability of the pre-trained CLIP models for scene parsing.Compared with the state-of-the-art method DenseCLIP, our proposed method exhibits +1.14% mIoU on ADE20K with a ResNet-50 [13] backbone of the CLIP.The main contributions of this work are summarized as follows: (1) A two-stage framework is proposed, which comprises one path of adaptively generating prompts through multilabel image classification, and the other for promptguided semantic segmentation.(2) A lightweight dual graph decoder is proposed for multilabel image classification, which fully utilizes language knowledge from CLIP serving as the adaptor to prompt according to the image inputs.(3) A simple but effective selection strategy is proposed to improve the performance of fine-tuning, which transforms uni-modal inputs into multi-modal by using partial ground-truth labels for prompting.Our trick achieves an improvement in performance and is computationally efficient, introducing no additional computational overhead during inference and almost negligible overhead during training.
Semantic segmentation
Semantic segmentation has long been a major topic in the vision community and is still a challenging task for parsing diverse contexts in different scenes.In this field there exist extensive studies which can be generally divided into pixelbased methods and region-based methods.The pioneer work of FCNs [3] which treats semantic segmentation as pixel classification adopts fully convolution networks to make dense prediction.A number of later works strive to improve the pixel classifier performance via expanding the receptive field [5], constructing more reliable contextual information [23], and fully utilizing multi-scale features [8].The region-based methods split semantic segmentation into mask prediction and mask region classification [24,25].Our method can be regarded as mainly operating in the neck between the encoder and decoder to enhance features.
Transferable Representation Learning
Pretraining is the primary impetus to promote the development of computer vision over the years.The universal approaches to solve various downstream vision tasks are based on the ImageNet pretraining that helps to speed up convergence.
To get the larger scale data and simultaneously free manual annotation, inspired by the success in NLP, some works focus on masked signal modeling and self-supervised learning [26,27], which are friendly to dense prediction tasks in the initial design.Another attempts to utilize the supervision directly from natural language to learn visual representation, e.g., CLIP [2], ALIGN [28].Contrastive learning and largescale image-text pairs make CLIP successful, which has showed impressive performance of zero-shot transfer on several classification tasks.However, image features are considered to be short of fine local information due to the loose supervision from language, which makes downstream dense prediction confusing.The latest work DenseCLIP [1] demonstrates that the reasonable application of text features from CLIP helps widespread visual models achieve better performance.Despite its great success, we try to explain the main causes of why the CLIP visual encoder is not easy to be fine-tuned to the semantic segmentation task, and propose a solution from another perspective.
Multi-label classification
The multi-label classification task aims to identify multiple predefined labels in a given image.Existing studies exploit the label correlations to model the semantic relationships between different categories [33][34][35] and handle the imbalance issue through well-designed loss functions [31,32].A recent state-of-the-art method ADDS [17] extends CLIP to zeroshot multi-label classification has inspired us.We propose a dual graph decoder to exploit the language knowledge and compare it to a simple MLP decoder in our framework to verify effectiveness.Even with the simple MLP decoder, the final performance on semantic segmentation is remarkably improved compared with the method without the multi-label classification.Through experiments, we found that the multilabel classification task forces the model to pay enough attention to minor concepts and small objects.It does help the model gain better local information and achieve better performance in semantic segmentation, as shown in RankSeg [36].
Method
We begin with a brief introduction of CLIP [2] and our failure case in a naive solution as the preliminary.Then we propose an improved solution, followed by presenting the proposed CLIP-SP in detail.
Overview of CLIP
CLIP is a visual-language pre-training method that consists of two encoders, including an image encoder V(•) (ResNet [13] or ViT [9]) and a text encoder T (•) (Transformer [16]).CLIP aligns the embedding spaces of visual and language during pre-training on 400 million image-text pairs through contrastive learning, where original image-text pairs are regarded as positive samples, while mismatched image-text Fig. 3 Results of different pre-training settings on the ADE20K dataset.We report the single-scale mIoU of ResNet50 backbones with different configurations and the same decoder, semantic FPN [8].
pairs are negative ones.Presently, several works [1,11,14,15] have shown that CLIP inherently embedded local image semantics in its features as it learns to associate image content with natural language descriptions during pre-training.However, transferring the pre-trained knowledge of CLIP for the dense downstream task is nontrivial.At the beginning of the simple experiment, we only fine-tuned the image encoder, and the performance of semantic segmentation on the ADE20k dataset is even worse than the same model pretrained on ImageNet as shown in the Figure 3.An interesting discovery is that compared with the same experiment in DenseCLIP, the key point is that in the original model we used the default BatchNorm rather than SyncBatchNorm on 4 RTX 3090 GPUs with a batch size of 16.It shows that there is an obvious internal covariate shift during pretraining on massive image-text pairs, because CLIP adopts a huge minibatch and the BatchNorm layer is sensitive to the batch size during the training.
Framework
The overall framework of our method is illustrated in Figure 2, which consists of one path for denoising through multi-label image classification and utilizing predictions to generate adaptive prompts, and the other path for semantic segmentation.The actual input of our framework is only an image, because we keep the text encoder frozen and the text embeddings of all categories are unchanged during the training and inference for fixing language knowledge from the pretraining.This procedure can be regarded as constructing a codebook S. Because valuable correlations between text embeddings and image embeddings are constructed during the pretraining on a large-scale dataset, and fine-tuning both V(•) and T (•) is more difficult than fine-tuning V(•) with the guidance of fixed text embeddings.We utilize the template "a photo of a [Label]."proposed by CLIP to construct text embeddings and handle how to adaptively select a subset of them containing less noise.Then we use both selected text embeddings and image embeddings to generate prompts, and combine them with the last stage of feature map output by the image encoder to explicitly incorporate language knowledge and local information.Ultimately we feed the features aggregated with prompts to the semantic segmentation decoder.We explain the formulations and details of both multi-label image classification and the method of prompting as follows.
Dual GraphFormer for Multi-label Decoder.The motivation to complete the multi-label classification task in our framework is to eliminate interference from irrelevant categories in the scene and then reduce the misleading dense prompts.We are inspired by the Dual-Modal Decoder on ADDS [17], which uses both frozen V(•) and T (•) and only fine-tunes the decoder to maintain the original alignment between image and text features for zero-shot.To simplify the design and make it more interpretable, we retain the two-way cross-attention module and based on this, we propose our module named Dual GraphFormer.As illustrated in Figure 4, on the one way we use text features as the query and image features as the key and value, and on the other operate in the opposite way.Further, to exploit correlations of both label and the image features of neighborhood, we add graph convolution layers [30] and try to integrate them via cross-attention.
Dual GraphFromer needs two modal inputs.For the image path, input V ∈ R C×(H4W4+1) is the concatenated image features [z, z], where H 4 , W 4 , C are the height, width and the number of channels of the image features from the 4-th stage of the image encoder, and z is the local token and z is the global token in the global average pooling of CLIP.For the text path, input S ∈ R K×C is the text embeddings of all categories, where K, C are the number of total categories and channels of the text embedding which is the same with the image embedding for CLIP.At the beginning of block, the compressed matrix B ∈ R N ×(H4W4+1) is generated by applying a convolutional operation on V , with the goal of reducing the number of nodes from (H 4 W 4 +1) to N , because there is still redundancy after sampling at 32x downsampling.The transpose of the compressed matrix B is used to recover to the origin number for decompression.w v , w s ∈ R 1 are the learnable factors used to weight the short connection.The normal graph convolution and our implementation are formulated as: where H out and H in are the output and input features of nodes in the single layer, σ is the nonlinear activation function, A is related to adjacency matrix and W is weight parameter matrix.We replace the origin weight parameter matrix with the LayerNorm layer with elementwise affine to smooth the relation between different samples while preserving the relation between different features like in the transformer.We treat the A as a learnable parameter matrix through onedimensional convolution with the kernel size of 1. Specially on the text path, we use the co-occurrence matrix of labels on the training set to initialize A with the prior information of statistics.We use 3 blocks to construct the decoder because there is no significant improvement in performance while stacking more than 3 blocks.The logits S l ∈ R K×1 are obtained by passing the final output S ′ of the text path, through a fully-connected layer with sigmoid activation function acting as a classifier.Adaptive Prompting.After obtaining the S l , we can generate adaptive prompts for modeling scenes in input images.Firstly, we sort the logits S l and take the top k indices and value of S l to narrow down the selection range and filter noise.The number of k is related to scene complexity, which can be simply measured by the number of categories in the single image.We obtain the corresponding text embeddings S k from the codebook S based on k indices, which can be treated as a rough scene representation.Considering that different scenes have the same categories and the difference lies in the different main body, we concatenate the corresponding probabilities with S k to enhance the representation ability.After this operation, we ultimately obtain scene tokens T ∈ R k×(C+1) , i.e., adaptive prompts.
Scene Refinement Module
We propose the Scene Refinement Module to fuse the information of adaptive prompts and image features, and then directly feed image features aggregated by scene information to semantic segmentation decoder to predict masks.The detail of our design is illustrated in Figure 5.To make full use of scene tokens T , we further mine the information within it.
Because scene tokens contains fine-grained information of scenes and text embeddings are closely related to image features during the visual-language pretraining, we can shrink it to model the channel activation of scenes based on the image features x 4 ∈ R C4×H4×W4 from the 4-th stage of the image encoder V(•), which is similar to the approach in SENet [37]: where T g ∈ R C4 .The MLP layer is used for modeling channel-associations, and we use the parameterized global pooling layer i.e., the linear project to shrink features.To obtain dense features from adaptive prompts T for aggregating image features, we adopt the non-local approach [38] to calculate cross-attention, and the query is image features x ′ 4 , both the key and value are T .Finally, to completely fuse the information carried in prompts into image features, we concatenate the output of the cross-attention and image features x ′ 4 and leverage 1x1 convolution to fuse features.
Loss Functions
The final loss is the sum of three intermediate losses: one for the multi-label decoder, L ml , one for the segmentation decoder, L seg , and an auxiliary loss L aux for supervising the scene awareness in the scene refinement module.We use λ 1 , and λ 2 to balance the final loss: (5) The multi-label decoder loss function L ml is the asymmetric loss [32], which can effectively handle long tailed distributions and is also used in the recent multi-label classification work [17].The segmentation decoder loss function L seg is the cross-entropy loss, and it is the major loss.The auxiliary loss L aux can be defined as a cross-entropy loss: where m is the attention map in the non-local operation of the scene refinement module, ŷ is the one-hot label of m, y hw is the corresponding label in the (h, w) cell and l i is the label corresponding to the i-th position of top-k results.We treat the attention map as a coarse scene parsing result and the loss can help accurately building connections between prompts and image features.
Selection Strategy
Although the model can converge normally under our framework, we still worry that in the early stages of training, the poor performance of multi-label decoder makes the prompts almost ineffective, which hinders the model from converging to the optimal solution.The similar types of work, crossmodal adaptation [22] shows that it is a better fine-tuning paradigm for multi-modal models such as CLIP to fine-tune by using cross-modal information as training samples than uni-modal information for downstream uni-modal tasks.It inspires us to introduce the ground-truth labels to alleviate the negative impact of inaccurate top-k results at the beginning of training.We proposed a simple but effective strategy called batch drop.We chose a ratio r to mask the batch of top-k labels from S l and replace them with labels sampled from the ground-truth.During training, r decreases exponentially, because the performance of multi-label classification is gradually improved and we also need to reduce the reliance on real samples.
Experiments
We perform experiments on ADE20K [12], a challenging large-scale semantic segmentation dataset covering an ex- We report on mIoU in the validation sets in accordance with the common practice [19,20] as well as the FLOPs and the number of parameters for fair comparisons.Our baseline only contains the pre-trained image encoder of the CLIP as the segmentation backbone, and the Semantic FPN [8] as the decoder.The following subsections describe the details of the experiments and results.
Implementation Details
For a fair comparison, we trained all the models on 4 RTX3090 GPUs with a batch size of 16 for 160k iterations.We use the ResNet-50 pre-trained image encoder of the CLIP for both CLIP-SP and baseline.Our model applies the loss function ASL for multi-label classification.To evaluate the effectiveness of our framework, we use a simple MLP decoder on the path of multi-label classification for comparison, which consists of three linear layers and a fixed classifier, i.e., S. We set the lr multiplier of the image encoder to 0.1 and the initial learning rate to 0.0001 following the schedule of DenseCLIP.We use weighting terms λ 1 = 10 and λ 2 = 0.4, to keep the learning balanced.
Comparison with the state-of-the-art
We compare the proposed method with state-of-the-art algorithms on ADE20K in Table 1.We include the FLOPs, the number of parameters, and the mIoU in single-scale testings.
The experiments results show that for the same backbone, our CLIP-SP with a simple Semantic FPN can outperform the state-of-the-art methods and is +1.14%, +1.15%, and +0.88% higher than DenseCLIP on ResNet-50, ResNet-101 and ViT-B backbones with the same input size and reduce the computational overhead.Besides, only a small number of computational cost and parameters have been increased.
Ablation Studies
To further validate the effects of different components of our CLIP-SP, we perform detailed ablation studies with the ResNet-50 backbone and the results are shown in Table 2.
Our baseline model is Semantic FPN with the backbone ResNet-50 on the CLIP pre-train.Firstly, to evaluate the effectiveness of our framework, we chose a weak multi-label decoder, i.e., MLP decoder without the batch drop.The base experiment achieves 3.62% singlescale mIoU improvement, which shows there is a significant improvement while using the language knowledge of CLIP.Secondly, replacing the MLP decoder with Dual GraphForm decoder, we obtains the 0.92% improvement.After adding the batch drop, we gain the 0.53% improvements.It shows that the performance may hit a bottleneck and these two modules may not contribute too much to the overall performance.Influence of different number of Dual GraphFormer blocks.We study the influence of the number of Dual Graph-Former blocks, as shown in Table 3.According to the results, our method achieves the best performance on ADE20K when we adopt 3 Dual GraphFormer blocks.Increasing the number of blocks, our method even performs worse.
Influence of different top k.We study the influence of the size of the selected label set, i.e., k, as shown in Table 4.According to the results, our method achieves the best performance on ADE20K when k=30.Excessive k value cannot effectively filter, resulting in decreased performance, especially when k=150.
Influence of different ratios in batch drop.We study the influence of different ratios in Batch Drop, as shown in Table 5.The hyper parameter, exponential decay coefficient is 0.9999 in the experiments.r descends from r start to r end and then remain unchanged.According to the results, we find that at the beginning of training, the over reliance on ground-truth label is harmful to performance.And keeping a small ratio r during the middle to late stage of training may play a role similar to the drop out and obtain a better generalization.
Visualization
To better demonstrate the superiority of CLIP-SP, we provide several qualitative results in Figure 6.We compare the segmentation maps of our method with the baseline model and DenseCLIP, and find that CLIP-SP is more effective in reducing the probability of irrelevant categories appearing in the scene parsing output.
Conclusion
In this paper, we have presented a novel framework, CLIP-SP, to reduce the noise in the dense prompt while transferring the language knowledge from the CLIP to scene parsing.The visual features of CLIP contain rich semantics but still need the guidance of local information.We decrease the number of prompts compared with the normal method, and control it within a reasonable range.It shows that more prompts are not better, instead, they will introduce more confusion.Our findings suggest that, by constraining the number of prompts instead of directly constraining classifiers, our method generally results in a lower number of predicted categories compared to other methods.
Limitations & challenges.Although our method has achieved improvement, we find that it is not always beneficial because the tendency leads to ignore objects that is hard to identify, but it is advantageous to segment the main objects in the scene.Also, our design for multi-label classification may not be good enough to make full use of the visual-language pre-trained models.Though we believe the better method of multi-label classification can lead to higher improvements, we need to consider the trade-off between computational cost and accuracy.Besides, owing to our method based on the visual-language pre-trained models, it is nontrivial to expand to other excellent visual pre-trained backbones.We hope our initial attempt can inspire more efforts towards adopting a denoising prompting strategy to exploit the pre-trained vision-language knowledge.
Appendix
We provide more analyses of the influence of different top k in Table 4 and the weighting terms of the loss function in detail.
Details of different top-k value results.According to the characteristics of our method, different k values have different impacts on training.Generally a larger k means more redundancy, and it is not necessarily higher than better in Table 4. Different values of k cause the model to pay different attention to categories during training.Here we find that IoU values for 20% categories of ADE20K increase with the increase of k and show the results of the 10 categories with the most significant changes in Table 6.We see that a low k has a significant negative impact on the performance of the model in certain categories, such as the bar, the lake and the oven.Besides, different k values lead to different performances in different scenes and we provide several qualitative results in Figure 7.We find that the lower k usually performs better due to containing less unrelated categories in simple scenes.When encountering complex scenes the situation becomes more complex.The lower k performs poorly in terms of details in complex scenes for certain categories, as circled in the last row of Figure 7.
Effects of weighting terms in the loss function.Table 7 shows the effects of λ 1 and λ 2 .We find that adjusting the weights to make the three loss scales similar is beneficial to the training results.
Fig. 1
Fig.1Statistical data on the number of categories for each image on the ADE20K[12] validation set.
Fig. 2
Fig.2The overall framework of CLIP-SP.CLIP-SP first extracts the image embeddings and text embeddings of all categories, and then utilize them to obtain multi-label predictions.Trough the selection strategy, we use both the text embeddings and confidence scores of corresponding multi-label predictions to generate scene tokens, i.e., adaptive prompts.In the scene refine module, we combine the image embeddings and scene tokens to obtain refined features for implicit modeling scenes through adaptive prompts.
Fig. 4
Fig.4 The overview of our block design of multi-label decoder, named Dual GraphFormer.FI is the image features and FT is the text features.
Fig. 5
Fig.5 The overview of our scene refinement module.
Fig. 6
Fig.6 Qualitative results on ADE20K.We visualize the segmentation results on ADE20K validation set of our CLIP-SP based on ResNet-50, the baseline model and DenseCLIP.
Fig. 7
Fig. 7 Qualitative results on ADE20K of different k values.We visualize the segmentation results on ADE20K validation set of our CLIP-SP based on ResNet-50.
Table 1
Semantic segmentation results on ADE20K.We compare the performance of CLIP-SP with existing methods when using the same backbone.We report the mIoU of single-scale, the FLOPs and the number of parameters.The FLOPs are measured with 1024 × 1024 input using the fvcore library.The results show that our CLIP-SP outperforms other methods." * " represents our implementation under the same settings.
Table 2
Ablation study on the ADE20K.The MLP decoder and Dual GraphFormer decoder are used for multilabel classification.
Table 3
Influence of different number of Dual GraphFormer blocks.∆ is compared with CLIP-SP with the baseline.
Table 4
Influence of the size of the selected label set, i.e., k. ∆ is compared with the baseline.
Table 5
Influence of different ratios in batch drop.
Table 6
IoU results for different k values and partial categories of our CLIP-SP based on ResNet-50.The displayed subset of categories are quite representative, as their IoU increases noticeably with the increase of the value of k.
Table 7
Influence of different weighting terms. | 6,181 | 2024-08-27T00:00:00.000 | [
"Computer Science"
] |
Surface Assisted Combustion of Hydrogen-Oxygen Mixture in Nanobubbles Produced by Electrolysis
The spontaneous combustion of hydrogen–oxygen mixture observed in nanobubbles at room temperature is a puzzling phenomenon that has no explanation in the standard combustion theory. We suggest that the hydrogen atoms needed to ignite the reaction could be generated on charged sites at the gas–liquid interface. Equations of chemical kinetics augmented by the surface dissociation of hydrogen molecules are solved, keeping the dissociation probability as a parameter. It is predicted that in contrast with the standard combustion, the surface-assisted process can proceed at room temperature, resulting not only in water, but also in a perceptible amount of hydrogen peroxide in the final state. The combustion time for the nanobubbles with a size of about 100 nm is in the range of 1–100 ns, depending on the dissociation probability.
Introduction
Combustion processes are supported by the heat produced by the combustion reactions [1][2][3][4]. For a small volume of the reaction chamber, the surface-to-volume ratio becomes large, and the heat escapes from the volume too quickly to sustain the combustion. Quenching of the reactions is the main obstacle for scaling down internal combustion engines [5,6], which could be used to power different kinds of micro and minidevices [7,8]. Nevertheless, combustion of a stoichiometric mixture of hydrogen and oxygen was recently observed in nanobubbles [9,10] and at special conditions in microbubbles [11,12]. The reaction between gases is ignited spontaneously at room temperature, and cannot be explained by the standard combustion process. The high density of nanobubbles observed in the experiments suggests that the reaction is a surface-assisted process [13], but no specific mechanism has ever been discussed.
Here we propose a mechanism for the combustion of the hydrogen-oxygen mixture in nanobubbles. The mechanism is related to charges existing on the gas-electrolyte interface. These charges provide sites where H 2 (and possibly O 2 ) molecules dissociate, producing H and O atoms in the gas phase. These atoms ignite and support the combustion reactions. The main prediction of the model that can be directly checked experimentally is that the surface-assisted combustion produces an appreciable amount of hydrogen peroxide, in contrast with the normal combustion.
Nanobubbles containing a mixture of gases were produced by the alternating polarity electrolysis when the voltage polarity of an electrode changes with a frequency f ∼ 100 kHz. In this case, a thin layer adjacent to the electrode is highly supersaturated with both gases [9]. Nanobubbles that are formed in the layer do not grow large, and disappear in phase with the electrical pulses. Periodic reduction of the gas concentration was observed in different systems with a vibrometer [9,10]. Direct observation of the nanobubbles was not possible due to the small size (∼100 nm) and short lifetime (∼1 µs) of the objects. However, at some conditions [11,12] the nanobubbles can merge to form a visible short-lived microbubble, which is also ignited spontaneously and disappears with a significant release of energy.
Energy production on the time scale of microseconds provides proof that the observed process is the combustion but not just conversion of the gases. The reaction between H 2 and O 2 gases in nanobubbles produces heat that cannot be explained by the Joule heating of the electrolyte. Because the heat escapes very quickly to the liquid and solid substrates, the temperature rise around the electrodes was expected less than 1 • C. Nevertheless, it was measured by a gold probe located in the vicinity of the electrodes [9]. In an independent study [14], the effect was investigated in detail using a built-in thermal microsensor. Much stronger heating-up to 50 • C-was observed in a microchamber covered with a flexible membrane [10]. A significant heating due to a small thermal mass of the device was measured using the thermal dependence of the current passing through the electrolyte.
At high nanobubble density, they start to coalesce and form short-lived microbubbles. In a closed microchamber, the bubbles with a size of 5-10 µm last for just a few microseconds [11]. Each microbubble is accompanied by a pressure jump in the chamber, so the combustion energy was transformed not only to the heat, but also to the enthalpy of the liquid and to the mechanical work done by the flexible membrane. For an open millimeter-scale system, the microbubbles formed by coalescence have original size of 40 µm, combustion happens in less than 10 µs, and produces a well-audible sound (click) [12]. The bubble inflates during 50 µs to a size of 300 µm, and the main part of the combustion energy is transformed to mechanical work done by the inflating bubble. The mechanism of combustion in microbubbles is related in some way to their origin from merging nanobubbles: the reaction is not ignited in bubbles with a size of 10 µm produced by a microfluidic bubble generator from the premixed gases.
It has been known for a long time that bubbles in water carry a negative charge [15][16][17][18]. A similar effect was observed for oil drops in water [19,20]. The experiment showed that ζ-potential of the bubbles (drops) changes with pH from zero at pH = 2-4 up to −120 mV at pH ≈ 10. The surface density of charges measured for oil drops [20] corresponds roughly n s = (3 − 4) × 10 13 cm −2 at neutral pH. For bubbles in water, the charge density is expected to be in the same range, because the pH dependence of the ζ-potential is similar to that for the drops [21].
Significant ζ-potential of the bubbles and drops is typically associated with the adsorption of hydroxyl ions at the interface. However, not all authors support this point of view. For example, it was proposed in [22] that the charge transfer is related to the anisotropy of water-water hydrogen bonding at the interface. Different points of view on the origin of the charges on the interface are reviewed in [21,22] (see original references therein), but the experimental fact that the negative charges exist on the interface is not disputed.
Model
We assume that the charges on the bubble walls can play the role of special sites for dissociation of H 2 and possibly O 2 molecules. Although there is no direct experimental evidence for this specific dissociation, the generation of OH free radicals was observed in collapsing microbubbles filled with air, oxygen, or ozone in the absence of external dynamic stimuli [23,24]. The microbubbles with a size smaller than 50 µm decrease in size and collapse softly under water after several minutes. Although pressure in the shrinking bubble increases, temperature does not change significantly because the process proceeds slowly in comparison with acoustically driven bubbles [25,26]. The pressure increase alone is not sufficient to produce radicals, and it was proposed [23] that the charges on the bubble surface play a role. We believe that surface-assisted reactions have to be involved and the surface charges provides sites for these reactions.
A molecule impinging on a charged site acquires the electrostatic energy E el ∼ (α 0 /4πε 0 )(q/h 2 ) 2 , where α 0 is the static polarizability of the molecule (for hydrogen α 0 ≈ 0.79 Å 3 ), ε 0 is the vacuum permittivity, q is the electric charge of the site, and h is the distance between the site and the molecule. The equilibrium distance due to the van der Waals attraction and the short-range repulsion is about h = 3 Å. Using the maximal charge on the site to be the charge of an electron, q = e, one finds E el ≈ 0.14 eV, which is much smaller than the dissociation energy (4.5 eV for H 2 ). Therefore, direct interaction of the induced dipole with the charged site can play only a marginal role.
In spite of a large dissociation energy, plasmon-induced dissociation of H 2 was reported on gold nanoparticles [27,28]. The mechanism includes transition of a hot electron on an antibonding state of an approaching molecule with the following dissociation of H − 2 ion. In our case, if the charged site is OH − , the energy of electron with respect to vacuum is −9.2 eV [29]. It is considerably deeper than the antibonding level in a free hydrogen molecule −3.7 eV [30]. Nevertheless, the electron transfer is possible because the external potential φ is applied to the system. The electrochemical potential of OH − ion is thenμ = µ + qφ, where µ = −9.2 eV is the chemical potential of the ions without the field and q the charge of the ions. On the other hand, the hydrogen molecule is neutral and interacts with the field only via the induced dipole moment. The external potential sweeps from −φ 0 to +φ 0 with a frequency f ∼ 100 kHz [9]. For sufficiently large amplitude φ 0 there is always a moment of time when the electron energy in the ion is equal to the energy of the antibonding level of H 2 molecule. Combustion in nanobubbles is observed at a rather high potential φ 0 5 V [13].
We assume that the radicals are formed in the gas phase as is observed for shrinking microbubbles [23,24]. The process-which we call the "surface assisted dissociation"-differs from the catalytic process, where the products of the dissociation are adsorbed on the surface. The difference can be related to two factors: the gas-liquid interface has no fixed positions for molecules as there are for the gas-solid interface, and the surface charges will push the transitional state out of the wall.
Many details of the proposed mechanism are not clear yet. For example, energy levels of hydrogen can change when a molecule approaches the charged site so as the energy of the solvated OH − ion at the bubble surface can differ from −9.2 eV. Moreover, it is even debated that the charged sites are related to OH − ions. Because of the uncertainties, we approach the problem from a different side. It is simply assumed that there are non-zero probabilities for the dissociation of H 2 and O 2 molecules on the charged sites. Using these probabilities as parameters, we solve the chemical kinetic equations inside of a nanobubble to see if the reaction can be ignited spontaneously at room temperature, and what the main products of the reaction are.
The combustion of hydrogen-oxygen mixture is a well-investigated process [1][2][3][4] that is more complex than the one-step overall reaction 2H 2 + O 2 → 2H 2 O. The process is controlled by chain branching reactions in the volume competing with the volume and surface termination reactions. The combustion can be ignited spontaneously (see [31,32] on autoignition limits) at moderate temperature (T > 700 K), but no combustion is possible at lower temperatures. All species taking part in the reaction include: three molecules H 2 , O 2 , and H 2 O; four short-lived radicals H, O, OH, and HO 2 , plus one long-lived radical H 2 O 2 . Since in this work we consider the processes on nano-and microsecond scales, the latter can be considered as a stable molecule.
If the combustion happens in the nanobubble, the gas temperature during the process can be considered as a constant equal to the temperature of the surrounding liquid. This is because the thermalization time is quite short. It is limited by the heat diffusion in the gas phase; the time needed to reach homogeneous temperature in the bubble is estimated as τ h = (r 2 /π 2 χ 0 ) · (P/P 0 ) ∼ 10 −10 s, where r ∼ 50 nm is the bubble radius and χ 0 ≈ 0.9 × 10 −4 m 2 /s is the heat diffusion coefficient in the stoichiometric gas mixture at room temperature and normal pressure P 0 , and P = P 0 + 2γ/r is the pressure in the bubble that includes the Laplace pressure (γ ≈ 0.072 J/m 2 is the surface tension of the electrolyte). Thus, the heat diffusion is faster than most of the elementary steps of the combustion reaction.
Because of low temperature, the main chain branching reactions H + O 2 → O + OH and O + H 2 → H + OH are strongly suppressed, and cannot drive the process. Instead, the H and O radicals can be generated at the bubble surface by the dissociation reactions at the charged centers H s 2 → H + H and O s 2 → O + O. These reactions becomes increasingly important for nanobubbles due to large surface-to-volume ratio S/V = 3/r. Although we do not yet understand the dissociation mechanism, the surface reaction seems to be the only way to explain combustion in a small volume at room temperature.
The reaction constant for the surface processes can be presented in the following way [1]: wherev i is the average thermal velocity of the i-th species. The sign "+" corresponds to the surface dissociation reaction, and "−" is related to the surface termination reactions. The parameter ε ± i can be considered as the probability of the surface reactions. For the radical termination on glass walls, the typical values are in the range ε − i = 10 −5 − 10 −2 [1,2]. For the dissociation reactions, ε + i can be presented in the form ε + i = σ i n s , where σ i is the dissociation cross-section for hydrogen or oxygen molecules, and n s is the concentration of the centers for dissociation on the bubble surface. If we relate the centers to the surface charges, this concentration is estimated as n s ∼ 10 13 cm −2 . For the cross-section σ ∼ 1 Å 2 , one finds the dissociation probability ε + ∼ 10 −3 .
Combustion Kinetics
We solve the equations of chemical kinetics for eight relevant species H, O, OH, HO 2 , H 2 , O 2 , H 2 O, and H 2 O 2 enumerated as i = 1, 2, ..., 8, respectively. It is assumed that the species are distributed homogeneously in the bubble. This is a good approximation for all molecules and radicals, except maybe hydrogen atoms. For H radicals, the diffusion time r 2 /D H ∼10 −10 s-where D H ∼10 −5 m 2 /s is the diffusion coefficient-is comparable with the fastest reaction linear in H. This is the reaction H + O 2 + M → HO 2 + M that proceeds with the participation of a third body M. For this process, the reaction time is also on the level of 10 −10 s, and the diffusion of atoms competes with the reaction. The effect becomes important for bubbles larger than 100 nm in diameter. Here we assume that H atoms are distributed homogeneously in the bubble; therefore, our analysis is applicable for rather small bubbles.
Although the bubbles are small, the gas inside of them can be considered as a continuum medium. This is because the pressure inside the bubble increases with the decrease of the bubble size due to the Laplace pressure. For example, the mean free path λ in a bubble with a radius of r = 50 nm is estimated as 4 nm, and λ scales as r. For this reason, the Knudsen number Kn = λ/r ≈ 0.08 stays constant for bubbles, where the Laplace pressure dominates.
The total list of the elementary reactions was taken from Reference [33], but the detailed information on the reaction rates was collected from a number of papers [31,[34][35][36][37][38][39][40][41]. The kinetic equations are significantly simplified by the absence of temperature variation in the bubble. In this case, all the reaction rates stay constant during the process. Since nanobubbles with a mixture of gases last less than a few microseconds (combustion can proceed much faster), we select only those reactions that happen on a time scale of 1 µs or faster. The list of these reactions together with the reaction constants at T = 300 K and r = 50 nm is presented in Table 1. Note that the reverse reactions are not included in the list, because they happen on a time scale longer than 1 µs. The first six processes are the termolecular reactions, and the remaining 13 reactions are the bimolecular reactions. The concentration of all species are defined with respect to the initial gas concentration in the bubble: where T = 300 K, R = 8.314 J·mol −1 ·K −1 , r 0 = 50 nm, and it is assumed that the Laplace pressure dominates in the bubble P ≈ 2γ/r. We define the reaction constants as the probability per unit time, which are related to the standardly defined reaction constants as: Here k (2) and k (3) have dimensions m 3 ·mol −1 ·s −1 and m 6 ·mol −2 ·s −1 for bi-and termolecular reactions, respectively. Table 1. Bulk reactions included in the network and their reaction constants calculated at T = 300 K and r = 50 nm according to (3). For termolecular reactions 1-6, the constants are given for M = H 2 , O 2 , and H 2 O, respectively.
Reaction
K·ns −1 Ref. [33]. For reactions 1-4, the data are available for the pressure up to a few bars. We assume that in the bubble (where the pressure is somewhat higher), these reactions keep the third order. The reaction rates in the low and high pressure limits are known for reaction 5. The transition happens at [M] = 2500 − 5000 mol·m −3 , depending on M. This is higher than the gas concentration in nanobubbles, and the reaction is of the third order. On the contrary, for reaction 6, the transition happens at [M] ≈ 10 mol·m −3 , which is much smaller than the concentration in the bubble. Therefore, reaction 6 is effectively of the second order, resulting in equal efficiencies for all M.
Introducing the dimensionless concentrations y i defined with respect to [M 0 ] and using 1 ns as the unit of time, we can write the system of eight ordinary differential equations. This system has the following structure: i (y) + R i (y) includes the first six reactions in Table 1, while the term R (2) i (y) includes the rest of the reactions in the table. The linear term R (1) i (y) includes only the surface reactions. For example, for hydrogen atoms this last term can be presented in the form: which corresponds to the termination of one H atom at the surface with the probability per unit time K − 1 and creation at the surface of two H atoms from a hydrogen molecule with the probability K + 1 . We introduce nonzero creation terms K + 1,2 only for H and O, and the termination terms K − i can be nonzero only for radicals H, O, OH, and HO 2 (i = 1, 2, 3, 4). Termination of radicals on the surface is considered as permanent, meaning that the radicals sticking to the surface interact within the liquid phase. In this sense, the number of atoms of each kind in the gas phase is not conserved, but it can be considered as a quasi-constant on a time scale of 1 µs.
Results
The kinetic Equations (4) were solved numerically with the initial conditions that only hydrogen and oxygen molecules are present in the moment t = 0. For the stoichiometric mixture of gases in the initial state, one has y 5 (0) = 2/3 and y 6 (0) = 1/3. Suppose that only hydrogen atoms can be produced on the bubble surface and termination of the radicals on the surface can be neglected. A solution that corresponds to the probability of hydrogen dissociation on the surface ε + 1 = 0.003 is shown in Figure 1. It reaches a quasi-steady-state in 20 ns, and the concentrations vary only on a time scale of ∼1 µs. In contrast with the standard combustion, the final state contains not only water, but also H 2 O 2 . The presence of H 2 O 2 is a characteristic feature of the surface-assisted combustion. Hydrogen peroxide appears mainly due to reaction 19. If we put the corresponding reaction rate to be zero, the concentration of H 2 O 2 is reduced from 0.073 to 0.004. Due to the extra oxygen atom in hydrogen peroxide, not all molecular hydrogen is consumed. A stationary concentration of hydrogen atoms in the final state is due to the absence of the surface termination. Note that the dimensionless termination rate ε − i ∼ 0.001 gives a much smaller effect than the creation rate ε + i ∼ 0.001 because the termination is proportional to a small concentration of radicals. Of course, the concentration of hydrogen atoms cannot stay constant for a long time. It is reduced on a microsecond time scale due to reactions 10 It is easy to check that the initial relative number of hydrogen atoms equal to 4/3 coincides with that in the finale state and is similar for oxygen atoms.
The generation of hydrogen atoms on the surface is of principal importance. The reaction is not ignited if only oxygen atoms are generated on the surface. At a fixed ε + 1 , the concentration of H 2 O 2 slightly decreases relative to water when ε + 2 increases. The time τ 0.9 to reach a steady state strongly depends on ε + 1 , as shown in Figure 2a. We define τ 0.9 as the time when water concentration reaches 90% of its stationary value. It is well approximated by the function τ 0.9 ≈ 0.13 × (ε + 1 ) −0.834 ns. Figure 2b shows how water concentration in the final state depends on ε + 1 (red curve), and shows the relative concentration of H 2 O 2 with respect to H 2 O (blue curve) at steady state. For very small ε + 1 , the concentration of hydrogen peroxide becomes comparable to that of water. Figure 3a shows the dependence of τ 0.9 on the initial concentration of oxygen y 6 (0) = ([O 2 ]/[M 0 ]) t=0 at the condition that total concentration at t = 0 is fixed: y 5 (0) + y 6 (0) = 1. This time has a maximum close to the stoichiometric ratio. Dependence of the peroxide/water ratio in the final state on the initial concentration of oxygen is shown in Figure 3b. The relative contribution of H 2 O 2 increases with the increase of the oxygen fraction in the initial state. The peroxide/water ratio also increases with the decrease of the bubble size r: at r = 75 nm it is 0.047, and at r = 25 nm it is as large as 0.14. Uncertainties in the rates of termolecular reactions do not change the combustion process significantly. For reactions 1-3, we do not know where the transition between high and low pressure limits is. However, if we exclude reactions 2 and 3, the concentrations for all species practically coincide with that presented in Figure 1. Reaction 1 influences the concentrations of H and H 2 , but has only a weak effect on all the other components. Qualitatively, the time dependence and the magnitude for all components does not change if we switch off all the termolecular reactions, excepting reaction 5. In absence of reaction 5, the mixture of gases is not ignited.
Conclusions
We proposed an explanation of the combustion reaction in nanobubbles, which happens spontaneously at room temperature and cannot be explained in the standard combustion theory. The key step of the mechanism is the dissociation of hydrogen molecules on the charged centers existing on the gas-liquid interface. We kept the dissociation probability as a free parameter and solved the equations of chemical kinetics. It was demonstrated that the combustion is ignited if hydrogen atoms are produced on the bubble walls. The surface-assisted combustion produces in the final state not only water but also an appreciable amount of hydrogen peroxide. The latter is a specific signature that can be used to check the mechanism experimentally. The time scale for combustion in nanobubbles is about 10 ns. | 5,681.2 | 2017-02-04T00:00:00.000 | [
"Physics",
"Chemistry"
] |
Strong quasi-ordered residuated system
: The concept of residuated relational systems ordered under a quasi-order relation was introduced in 2018 by S. Bonzio and I. Chajda. In such algebraic systems, we have introduced and developed the concepts of implicative and comparative filters. In addition, we have shown that every comparative filter is an implicative filter at the same time and that converse it does not have to be. In this article, as a continuation of previous research, we introduce the concept of strong quasi-ordered residuated systems and we show that in such systems implicative and comparative filters coincide. In addition, we show that in such systems the concept of least upper bound for any two pair of elements can be determined.
Introduction
T he concept of residuated relational systems ordered under a quasi-order relation was introduced in 2018 by S. Bonzio and I. Chajda [1]. Previously, this concept was discussed in [2]. The author introduced and developed the concepts of filters in this algebraic structure as well as several types of filters such as implicative, associated and comparative filters [3][4][5][6].
In [6,Theorem 3.4], it is shown that every comparative filter of a quasi-ordered residuated system A is an implicative filter of A and the reverse it need not be valid [6,Example 3.3]. When analyzing the properties of comparative filters, a requirement appears that a quasi-ordered residuated system satisfies the condition (1) If the system A satisfies the previous condition, then each comparative filter F of A has the following property and, in that case, if the implicative filters satisfy the condition (2), then F is a comparative filter of A.
In order to achieve the conditions under which each implicative filter will be a comparative filter of a quasi-ordered residuated system, we have designed (Definition 6) the concept of strong quasi-ordered residuated systems. In such systems, the condition (1) is always fulfilled and, therefore, in that systems implicative and comparative filters coincide (Theorem 5). Finally, we show (Theorem 6) that in strong quasi-ordered residuated systems the least upper bound 'u v' for each pair u, v of elements can be determined. We finish this article by the statement (Theorem 7) that (A, ) is a distributive upper semi-lattice.
Concept of quasi-ordered residuated systems
In article [1], S. Bonzio and I. Chajda introduced and analyzed the concept of residual relational systems.
is an algebra of type 2, 2, 0 and R is a binary relation on A and satisfying the following properties: (1) (A, ·, 1) is a commutative monoid; (2) We will refer to the operation · as multiplication, to → as its residuum and to condition (3) as residuation.
The basic properties for residuated relational systems are subsumed in the following: Recall that a quasi-order relation on a set A is a binary relation which is reflexive and transitive (Some authors use the term pre-order relation). Example 2. For a commutative monoid A, let P(A) denote the powerset of A ordered by set inclusion and '·' the usual multiplication of subsets of A. Then P(A), ·, →, A, ⊆ is a quasi-ordered residuated system in which the residuum are given by (∀X, Y ∈ P(A))(Y → X := {z ∈ A : Yz ⊆ X}).
Example 3. Let R be a field of real numbers. Define a binary operations '·' and '→' on
Then, A is a commutative monoid with the identity 1 and A, ·, →, <, 1 is a quasi-ordered residuated system.
Remark 1.
Quasi-ordered residauted system, generally speaking, differs from the commutative residuated lattice A, ·, →, 0, 1, , , R where R is a lattice quasi-order. First, our observed system does not have to be limited from below. Second, the observed system does not have to be a lattice. However, the difference between a quasi-ordered relational system and a CRPM (Example 6) is only in order relations since a quasi-order relation does not have to be antisymmetric. More about this last-mentioned algebraic structure can be found in [8].
The following proposition shows the basic properties of quasi-ordered residuated systems.
Concept of filters
In this subsection we give some notions that will be used in this article.
For a nonempty subset F of a quasi-ordered residuated system A we say that it is a filter of A if it satisfies conditions: In the article [3], the conditions were also analyzed:
Remark 2.
In implicative algebras, the term 'implicative filter' is used instead of the term 'filter' we use (see, for example [9,10]) because in the structure we study the concept of filter is determined more complexly than requirements (F3). It is obvious that our filter concept is also a filter in the sense of [9][10][11]. The term 'special implicative filter' is also used in the aforementioned sources if the implicative filter in the sense of [9] satisfies some additional condition.
There is considerable diversity in the literature in the use of terms that cover additional conditions that are met by filters (see, for example [12][13][14]). The terms we opted for our previous papers [5,6] and for use in this paper are taken from [13,14].
Definition 4.
[5, Definition 3.1] For a non-empty subset F of a quasi-ordered residuated system A we say that the implicative filter of A if (F2) and the following condition Definition 5. [6, Definition 3.1] For a non-empty subset F of a quasi-ordered residuated system A we say that a comparative filter of A if (F2) and the following condition are valid.
Theorem 2. [6, Theorem 3.4]
Any comparative filter of a quasi-ordered residuated system A is an implicative filter of A.
Proposition 2. [6, Proposition 3.2] Let
A be a quasi-ordered residuated system that satisfies the condition: Then any comparative filter F in A satisfies the the condition: Theorem 3. [6, Theorem 3.3] Let F be an implicative filter of a quasi-ordered residuated system A satisfying (13). Then F is a comparative filter of A.
Notions and notations that are used but not previously defined in this paper can be found in [1,[3][4][5].
Concept of strong quasi-ordered residuated systems
In this section we introduce the concept of strong quasi-ordered residuated systems. Considering the fact that the quasi-order relation ' ', which appears in the determination of this algebraic system, does not have to be antisymmetric, the following definition gets a clearer meaning. Definition 6. For a quasi-ordered residuated system A it is said to be a strong quasi-ordered residuated system if the following holds: (14) is valid, then (14) is also valid. The reverse does not have to be true.
The following is an example of a non strong QRS: It can be easily checked that A is a quasi-ordered residuated system. Since . Thus, A is not a strong quasi-ordered residuated system. Now, we give an example of strong quasi-ordered residuated system. A = {1, a, b, c} and operations '·' and '→' defined on A as follows: 1 1 a b c a a a a a b b a b a c c a a Direct verification it can prove that A is a strong quasi-ordered residuated system.
Remark 4.
It is generally known that a quasi-order relation on a set A generates an equivalence relation ≡ := ∩ −1 on A. Due to properties (9) and (10), this equivalence is compatible with the operations in A. Thus, ≡ is a congruence on A. In light of this remark, the condition (14) can be written in the form In this paper, we shall investigate the structure of strong quasi-ordered residuated systems.
Theorem 4.
In a strong quasi-ordered residuated system A the following holds: Proof. Assume that A is a strong quasi-ordered system. For given u, v ∈ A, let be u v. First, as v (u → v) → v is valid according to (11), from here we immediately get the following by (14). Secondly, from u v it follows 1 u → v by (3). From here, applying (10), we obtain (u As a significant consequences of this theorem we can prove some important properties of strong quasi-ordered residuared systems. In what follows, we need the following lemma: Lemma 1. Let A be a strong quasi-ordered residuated system. Then the following holds: Proof. Let A be a quasi-ordered residuated system and let u, v ∈ A be arbitrary elements. As v u → v holds according to (11), by Theorem 4, we conclude that the required formula is valid.
Thus, designing a strong quasi-ordered residual system environment in such an environment it can be proven the following result: Theorem 5. Let A be a strong quasi-ordered residuated system. Then the comparative and implicative filters of A coincide.
Proof. By Theorem 2, each comparative filter is an implicative filter in any quasi-ordered residuated system. The second part of the proof of this theorem follows directly from the previous Lemma and Theorem 3.
The notion "least upper bound" is well-defined for partial order relations. Although it is not common to use this definition for quasi-order relations because, in the general case, for quasi-order relations the "least upper bound" is not unique, when it comes to a strong quasi-ordered relational system, this concept can be determined as shown in the following theorem: Theorem 6. Let A be a strong quasi-ordered residuated system. For any u, v ∈ A, the element is the least upper bound of u and v.
Proof. It is clear that the following hold: (11) and (14). This shows that u v is the upper bound for u and v. If z is any common upper bound of u and v, then by Theorem 4, On the other hand, according to (10) and (15). Therefore, u v is the least upper bound of u and v in the system A.
The following example shows that condition (14) is crucial for the defining of the least upper bound. Then A = A, ·, →, 1 is a quasi-ordered residuated systems where the relation ' ' is defined as follows : By direct verification it can be proved that A is a quasi-ordered residuated system but is is not a strong system. For example, for Since d = 1, we conclude that A is not a strong system. However, on the other hand, in this case we still have b d and c d. Obviously, 1 is an upper bound of b and c also, but we cannot define the least upper bound b c as in the previous theorem.
We end this section by the following theorem: Theorem 7. Let A be a strong quasi-ordered residuated system. Then (A, ) is a distributive upper semi-lattice in the following sense (∀x, y, z ∈ A)((x y) z ≡ (x z) (y z)).
Proof. Let x, y, z be arbitrary elements of a strong quasi-ordered residuated system. From x x z and y y z it follows x y (x z) (y z). At the other hand, we have z (x z) (y z). Thus (x y) z (x z) (y z).
Conversely, from x x y (x y) z and z (x y) z it follows x z (x y) z. Analogous to the previous one, we can get y z (x y) z. Hence (x z) (y z) (x y) z.
This proves the theorem.
Remark 5.
The term 'distributive upper semi-lattice' used here differs from the concept of 'distributive join semi-lattice' which appears for example in the book [15] (pp. 99).
Of course, the question arises quite naturally: Since (A, ) is an upper semi-lattice, how are the standard statements for semi-lattices applied in this case?
Final conclusion and further work
The concept of quasi-ordered residuated system was introduced and analyzed by Bonzio and Chajda in [1]. This system differs from the commutative residuated lattice A, ·, →, 0, 1, , , R where R is a lattice quasi-order (see Remark 1). In this article, a specific quasi-ordered system it was designed in which it was possible to determine the least upper bound for any two elements of the system. In such a newly defined environment, implicative and comparative filters have been shown to coincide.
In future work, we could study the internal structure of such a designed quasi-ordered residuated system as well as some of its substructures such as, for example, filters (so-called prime filter) that satisfy the additional condition (∀u, v ∈ A)(u v ∈ F =⇒ (u ∈ F ∨ v ∈ F)). | 2,898 | 2021-02-12T00:00:00.000 | [
"Computer Science"
] |
A Guide to Signal Processing Algorithms for Nanopore Sensors
Nanopore technology holds great promise for a wide range of applications such as biomedical sensing, chemical detection, desalination, and energy conversion. For sensing performed in electrolytes in particular, abundant information about the translocating analytes is hidden in the fluctuating monitoring ionic current contributed from interactions between the analytes and the nanopore. Such ionic currents are inevitably affected by noise; hence, signal processing is an inseparable component of sensing in order to identify the hidden features in the signals and to analyze them. This Guide starts from untangling the signal processing flow and categorizing the various algorithms developed to extracting the useful information. By sorting the algorithms under Machine Learning (ML)-based versus non-ML-based, their underlying architectures and properties are systematically evaluated. For each category, the development tactics and features of the algorithms with implementation examples are discussed by referring to their common signal processing flow graphically summarized in a chart and by highlighting their key issues tabulated for clear comparison. How to get started with building up an ML-based algorithm is subsequently presented. The specific properties of the ML-based algorithms are then discussed in terms of learning strategy, performance evaluation, experimental repeatability and reliability, data preparation, and data utilization strategy. This Guide is concluded by outlining strategies and considerations for prospect algorithms.
N anopore sensors have been developed for decades to target multiple applications, including DNA sequencing, 1 protein profiling, 2 small chemical molecule detection, 3,4 and nanoparticle characterization. 5,6 Nanopore sensor is inspired by the Coulter cell counter 7 and realizes a task by matching its dimension to that of analytes, molecules or nanoparticles. Thus, it possesses an extremely succinct structure, a nanoscale pore in an ultrathin membrane. Its sensing function is based on a simple working principle: the passage of an analyte temporarily blocks a size-proportional volume of the pore and induces a spike signal on the monitoring ionic current at a given bias voltage. Information about passing analytes is hidden in the corresponding current spikes, i.e., translocation spikes distributed on the ionic current traces. By processing the signal and analyzing the features of the spikes such as amplitude, width (duration), occurrence frequency, and waveform, the properties of the analytes can be inferred, including size, shape, charge, dipole moment, and concentration. Therefore, signal processing is the crucial link to interpreting the signal by assigning the associated features to relevant physical properties. In general, signal processing comprises denoising, spike recognition, feature extraction, and analysis. A powerful signal processing algorithm should be able to isolate signals from a noisy background, extract useful information, and utilize the multidimensional information synthetically to accurately derive the properties of the analytes.
Low-pass filters have been adopted as a simple approach to removing the background noise. However, this function risks filtering out the important high-frequency components naturally present in signals representing rapid changes of ionic current associated with translocation spikes that carry informative waveform details related to the target analytes. Thus, self-adaptive filters and advanced current level tracing algorithms have been developed. 8,9 Traditional algorithms are mainly based on a user-defined amplitude threshold as a criterion for detection of translocation spikes. Apparently, the choice of this threshold determines how successful a spike is singled out and how good the quality of the subsequent feature extraction is. However, the threshold is usually chosen based on the experience of individuals dealing with the data. It is, hence, a subjective process. Moreover, using the extracted features to infer the properties of the analytes relies mainly on physical models that build upon a comprehensive understanding of the physiochemical process involved in the translocation. Unfortunately, generalized models and algorithms for this purpose are yet to be developed.
Concurrently, Machine Learning (ML) has revolutionized the signal processing landscape. In this regard, ML algorithms for nanopore sensing have seen rapid advancements in noise mitigation, spike recognition, feature extraction, and analyte classification. The learning process usually demands a huge number of well-labeled data sets, which is challenging. Furthermore, the applicability of ML-based algorithms is restricted by the accessibility of training data sets. In addition, ML-based algorithms usually work as a black box so that a user has limited knowledge of their operation. 10 This shortcoming can impair the control and usage of the algorithms and further adversely affect the interpretation of the results. Combining ML-based algorithms with physics-based models to exert respective advantages is considered a promising approach to attaining high-fidelity signal processing.
Reviews on processing the signals from nanopore sensors are sparse in the literature despite their scientific relevance and technological potential. One of the few reviews on signal processing technologies for identification of nanopore biomolecule includes both software algorithms and hardware readout circuits/systems. 11 A more general topic on ML-based algorithms for signals from biosensors touches upon nanopore sensing. 12 In addition, mini-reviews on some specific issues of signal processing for nanopore sensors and related sensors can be found, such as ML for identification of single biomolecules, 13 virus detection, 14 and nanopore electrochemistry. 15 Concomitantly, signal processing algorithms for nanopore sensing have been rapidly developed by adopting various strategies and techniques. It is, therefore, ripe to request a systematic treatment of the different algorithms, including both non-ML-based and ML-based, with respect to their architectures and properties. This Guide offers a general description of the explored signal processing algorithms for nanopore sensors and, thereafter, proposes guidelines for the development of prospect algorithms.
The Guide starts by categorizing the reported algorithms as non-ML type and ML type. Each category is generalized under the umbrella of a common signal processing flow to guide the discussion of specific algorithms in terms of development tactics and features. The focus will then be placed on the MLbased algorithms by scrutinizing the respective strategies and properties. Specifically, the discussion spans learning strategy, performance evaluation, experimental repeatability and reliability, data quality, data preparation, and data utilization. The discussion also concerns challenges, possible solutions, and special considerations for nanopore signals. Finally, strategies and considerations are outlined for prospect algorithms to conclude this Guide.
■ SIGNAL PROCESSING FLOW
The nanopore device used for sensing is usually immersed in an electrolyte, as shown in the left panel of Figure 1. The membrane embedding a nanopore separates the electrolyte into two compartments. The only electrical connection between them is the nanopore. By applying a bias voltage across the membrane, a steady ionic current, named open-pore current, is generated, which constitutes the baseline of the signal. The electric field also drives charged analytes dispersed in the electrolyte to pass through the nanopore. During the translocation, the analytes temporarily block a certain volume of the pore, proportional to their size. Such blockages usually cause spike-like current variations, as seen in the right panel of Figure 1, that are of central interest for signal processing. The ionic current is anticipated to resume the open-pore level once the translocations complete. Other designs can also be adopted to generate signals for nanopore sensing. For example, functionalizing the nanopore surface with a probe molecule can generate a specific interaction with target analytes resulting in characteristic signals on the monitoring ionic current trace. 16 Such signals arising from specific interactions, 17 adsorption− desorption processes, 18 clogging, 19 nanopore morphology changes, 20 and open−close activities of channels 21 can also be dealt with in the same framework designed for processing the translocation-caused spike signals.
The typical signal processing flow for nanopore sensors is summarized in Figure 2. Raw data here refer to those directly acquired experimentally and background noise is persistently present, while clean data represent those after the denoising process with which the background noise is sufficiently mitigated. With raw data at hand, a complete signal processing scheme comprises four consecutive steps as follows.
Step 1 Denoise raw data to generate clean data, typically via low-pass filters in the frequency domain. This step can be omitted if the quality of the raw data, i.e., signal-tonoise ratio, is acceptable.
Step 2 Identify and extract translocation events represented as spikes on current traces, frequently based on a userdefined threshold of the amplitude as a criterion to separate a true translocation-generated spike from the noise fluctuation.
Step 3 Extract features of these spikes based on various methods such as physical models, peak analysis algorithms, and algorithms of feature analysis in the frequency domain.
Step 4 Infer the properties of the translocating analytes from the extracted features.
In general, the parameters/structures of the ML-based algorithms can be dynamically adjusted in the training process according to the input data in order to achieve an improved performance toward the goal. 22 For a typical ML algorithm, the input data is usually deliberately divided into a training data set and a test data set. An automatic adjustment of the parameters/structures only applies to the training data set. 22 However, the implicit differentiation of the training and test data sets is not a necessity. For example, an ML algorithm can adjust its parameters/structures upon processing each and every input. Furthermore, the input data can be labeled or unlabeled so that the associated algorithms are based on supervised or unsupervised learning, respectively.
An algorithm can be regarded as ML-based if its current output is associated with its historical input or distribution of input, i.e., it "learns" from the history/distribution and exploits the hidden relations/patterns carried in the input data. Such learning can be explicit, as in a supervised training process for algorithms with labeled data sets. Nevertheless, the learning can also be implicit, as in some unsupervised clustering algorithms with a learning-by-doing manner. Therefore, an ML-based algorithm always relates to tunable weights, adjustable architectures, self-adaptable parameters, memory, etc. In contrast, a non-ML algorithm usually outputs in real time, i.e., it records no history data and, hence, its current output/systematic state is not influenced by any such history/ input distribution. However, the boundary between non-ML and ML algorithms is not always sharp and clear. For example, algorithms for spike recognition and baseline tracing with dynamic threshold/window adjustments and self-adaptive filters are usually regarded as non-ML, although the related parameters are automatically adjusted according to the input in real time. In this Guide, algorithms with distinguishable training and testing processes are classified as ML-based ones. For algorithms with an implicit learning process, conventions in the field are followed without making such a strict, nuanced distinction between non-ML and ML algorithms. In addition, the discussion proceeds by observing the functions of algorithms categorized by the aforementioned four steps.
It is worth noting that the algorithms reviewed here are those targeting pulse-like signals from nanopore sensors. They are not meant for treating the DNA/RNA/protein sequencing data that may also come from a nanopore sequencer. Processing such sequencing data belongs to a different field in bioinformatics. However, pulse-like signals may also be generated in other sensor devices such as nanogaps 23,24 and ion channels, 25 which will be briefly covered here when appropriate. Furthermore, "model" is used in this Guide to exclusively refer to physical models not algorithms. It is important to note that "model" is also widely used in the area of signal processing to represent a realization/implementation of algorithms, especially for ML algorithms.
Step 1. Denoising. Traditional methods of signal processing usually rely on low-pass filters for denoising as the first step in Figure 3. It should be emphasized that low-pass filtering is a must for signal amplification and data acquisition in a hardware system to define the bandwidth, mitigate out-ofband noise, and achieve anti-aliasing before digitalization. In this guide, the discussed low-pass filter refers, instead, to the software realization as a category of algorithm for already acquired digital data during the signal processing. In a nanopore system, the current noise power spectrum density, S I , consists of several different components in distinct frequency ranges. 26,27 A white thermal noise exists at all frequencies in the spectrum with its power density being inversely proportional to the electrical resistance of the nanopore. The low-frequency noise at frequencies below 1 kHz is usually contributed by flicker noise originating from the charge fluctuation on the pore wall and/or number fluctuation of ions in the pore and 1/f-shape noise from electrodes. 26 In the high-frequency range beyond 1 kHz, the noise power is dominated by the dielectric noise and capacitance noise. The former comes from the dielectric loss of the nanopore membrane, while the latter is a result of current fluctuation generated by the voltage noise of the amplifier input port on the impedance of the nanopore. Considering the frequency distribution of noise power, S I Δf, the high-frequency range dominates. Therefore, low-pass filters can efficiently restrict the bandwidth of the signal and filter out background noise. 28 However, the limited bandwidth degrades the capability of capturing fast translocation events and mars important details for analyzing the translocation waveform.
Traditional low-pass filters set a hard frequency threshold for the system. During noise filtration, it may also filter out the high-frequency components of the signal that may contain abundant details about the analytes. Therefore, different approaches have been sought to bypass this dilemma. Backed by the estimation theory, a Kalman filter has been developed to denoise the nanopore sensing signal. 8 Key parameters of the Kalman filter are adjusted dynamically according to the historical inputs. The stochastic properties of the signal are acquired and represented by these dynamic parameters. Thus, the Kalman filter is capable of extracting a signal whose frequency spectrum overlaps with that of the background noise. In addition, a filtering technology based on wavelet transform has been involved for nanopore signal denoising. 29,30 First, a group of proper bases that trades between the resolution of time and frequency is selected. Second, the wavelet transform of the input signal is implemented on these bases. Signal and background noise become separable in the wavelet domain even if they overlap in the frequency domain. Finally, a few large-magnitude wavelet components are kept, while the rest of the small-magnitude components are regarded as noise and meant to be removed, because with the specific bases chosen, those large magnitude components are the outcome of the wavelet transform of the main features (information) of the signal. The boundary between large and small magnitude is carefully selected by implementing different threshold functions that work similarly as the cutoff frequency of a traditional low-pass filter. The separability can be further enhanced by adopting multiple levels of wavelet transform, and a simple achievement for discrete signals is a bank of low-and high-pass filters. 30 By confirming the consistency of signals from multiple readout channels of the same nanopore, a consensus filter is adopted to remove the uncorrelated events as noise from each channel. 31 In addition, a weighted network among the single nodes gradually builds up and converges to stable values of the weights. This network can deliver consentient events, i.e., the highly correlated signals from each node, and abandon the uncorrelated events, i.e., noise.
Step 2. Spike Recognition. Spike recognition usually begins with defining an amplitude threshold as a criterion to separate spikes from noise. Apparently, this threshold plays a decisive role in further processing. 32−34 If the amplitude of a spike surpasses this threshold with reference to the baseline, it is recognized as a translocation event. Otherwise, it is regarded as noise. The identified spike segments are singled out from the current trace, and the associated features are extracted in the next step. Setting a large threshold increases the risk of omitting translocation spikes, misleadingly rendering a low translocation frequency. On the other hand, having a small threshold can mistakenly lead to assignment of noise fluctuations as translocation spikes, thereby incorrectly increasing the translocation counts. 35 To reduce the subjectivity due to involvement of the user in the threshold selection, the background noise level can be used as a reference. 5−10 As an example, a certain multiple of the rootmean-square (RMS) value of the background noise can be taken as the threshold. 34,36 Nonetheless, two potential subjectivity risks persist. The determination of the multiple of the noise level is usually based on the user's empirical experience. An accurate measurement of the background noise level, e.g., RMS value or peak-to-peak value, is related to the baseline detection. It is common for an algorithm for dynamic baseline detection to be designed to track the baseline position as an effort to mitigate the influence of shift, drift, and slow swing of the baseline on spike recognition. A dynamic average with a proper window size is a simple and straightforward method to obtain the baseline. 32 How to optimize the window size is crucial for the final performance. A large window can function as a low cutoff frequency filter and shows a stable baseline, but it can be insensitive to rapidly changing signals, including sudden jumps from the baseline. With a small window, changes from the baseline can be followed better, but the penalty is that the attained baseline can be easily influenced by translocation spikes. A simple fixation to overcome this dilemma is to keep the baseline level not updated during the blockage state, i.e., within a spike. 33 An iterative detection method is further proposed 34 wherein the baseline is first traced using a simple dynamic average method. Then, the translocation spikes are identified with respect to the baseline. They are then removed from the signal. By repeating the described operations several times, more spikes are recognized and subsequently removed. The dynamic average baseline eventually approaches the real level.
An alternative to the window size selection is to closely follow the current changes without differentiating them in the open-pore state (baseline) from those in the blockage state (spike). A dedicated algorithm named Cumulative Sums (CUSUM) has been developed along this line. 9 By adopting an adaptive threshold, which is dynamically adjustable according to the slow fluctuations of the signal level, it can detect abrupt changes possibly associated with state switching, e.g., from open-pore to blockage, from a shallow blockage level to a deep level, etc. First, an initial value of the signal level is set by referencing to the average of a small section of the signal at its start. Second, the deviation between the predicted signal level and the real value is calculated and accumulated. If the predicted level is close to the real one, the noise fluctuations above and below this level cancel each other. If the current jumps to a different level, a net deviation accumulates. Third, once this deviation surpasses a user-defined threshold, an abrupt change is identified, and the predicted level is shifted to the new level. Otherwise, the predicted level is updated by averaging the present data points. This algorithm can not only recognize the translocation spikes, i.e., the blockage stage, but also separate multiple levels in one blockage event. 37 Furthermore, information about these spikes can be extracted naturally, including the amplitude, duration (dwell time in blockage state), interval between adjacent spikes (dwell time in open-pore state), and ionic current levels and dwell time at the corresponding levels for multilevel signals.
Step 3. Feature Extraction. Once the spikes are singled out from the baseline, feature extraction constitutes the third step in Figure 2. The main features of a spike-like signal commonly include amplitude and duration (width of the spike). An additional parameter to quantify the translocation is the apparent frequency of translocation events (FTE). In general, larger analytes translocating smaller pores induce more severe blockades in the form of deeper spikes; longer analytes with lower translocation speed, caused by weaker driving forces and/or stronger analyte−nanopore interactions, yield longer durations; higher analyte concentrations and/or larger bias voltages give rise to higher FTEs. These are intrinsic factors relevant to the properties of analytes and nanopores. Extrinsically, the bandwidth constraint of an electrical readout system may distort narrow spikes, rendering an attenuation of amplitude and a prolongation of duration. Signal distortion by limited bandwidth has received quantitative analysis. 28,38 In order to recover the true spike waveform from the distorted one, a physical model-based algorithm named ADEPT 39,40 and a Second-Order-Differential-Based Calibration (DBC) method with an integration method 41−43 have been developed. The ADEPT algorithm is based on an equivalent circuit model of the nanopore system. From the system transfer function of the circuit, the true signal is recovered from the distorted one by inversely applying the system function. Thus, the affected spikes corresponding to short-lived events are compensated for to restore the unaffected features. In the DBC method, a Fourier series is first applied to fit the translocation spikes for smooth waveforms. Second, the second-order derivative of the smoothed waveform is calculated. Third, the minima of the derivative are located at positions corresponding to the start and end time points of the translocation, thereby leading to an accurate determination of the duration of a spike. Finally, the attenuation of amplitude by the limited bandwidth is compensated for by considering the area beneath the spikes referred to as the baseline. The DBC algorithm has been integrated in software packages for signal processing of nanopore sensing data. 33,44 ADEPT is effective for short-duration spikes, while CUSUM is suitable for long-duration spikes with multiple blockage levels. A software platform, MOSAIC, has emerged by combining the two algorithms to benefit from their respective strengths. 32 An advanced version of CUSUM has recently been adopted in MOSAIC for a robust statistical analysis of translocation spikes. In addition, an algorithm named AutoStepfinder is devoted to stepwise signals. 45 First, the initial number of step levels representing different blockage states is assigned. Fitting is then implemented to achieve the minimum error. Second, the fitting outcome is evaluated and compared with the halt condition for the required accuracy. Third, if it does not reach the halt condition, the number of step levels is gradually increased for the new iterative round of data fitting. In an iterative manner, this process is repeated until finding the best number of step-levels. This algorithm is developed for signals arising from the growth dynamics of protein polymer microtubules with optical tweezers. 46 Translocation spikes from nanopore sensors, including the typical single-step signals and blockages with multiple levels, are all targets of this algorithm. Other multiple-step signals from electrical, optical, and mechanical measurements can also be processed using this algorithm. 10,45 For stepwise signals, the Rissanen principle of Minimum Description Length (MDL) is adopted to identify the steps, e.g., the close−open dynamics of ion channels. 47 Here, an anticipated location of step is confirmed by achieving an MDL, which trades off between fineness and fitting accuracy.
Besides the three main features of spikes, amplitude, duration, and FTE, more specific features need to be scrutinized for analyte classification and analysis. A large number of different features, i.e., multiple dimensions of feature space, advances the especially powerful ML-based classifiers for processing data in high-dimensional space. In contrast, simple non-ML-based classifiers, such as statistical distribution-based distances and hypothetical tests, provide limited functionalities. The ML-based classifiers will be discussed later, and the focus is placed on feature extraction here. Several details of the translocation spikes are selected as features, such as increasing and decreasing slopes of a spike, spike area, area of increase and decrease period and their ratio, symmetry of spikes, bluntness of spikes, and "inertia" with respect to the current axis and time axis with and without normalization by the amplitude. 48−50 The frequency spectrum and cepstrum of spikes based on the Fourier transform can also be used as features for the classification. 51 Usually, peaks may appear in the frequency spectrum representing the major frequency components of a signal. The features of these peaks, e.g., the position, amplitude, and phase angle of the peaks, are collected as features of the signal in the frequency domain. Furthermore, if no clear peak-wise pattern appears, the amplitude and phase angle of a series of frequency points obtained by resampling on the spectrum can be used as features for classification as well. 51 If the number of features is too large and some of them are highly correlated, the Principal Components Analysis (PCA) method can be employed to compress the redundant information and decrease the dimensionality of the feature space. This treatment can lead to refined features in an efficient manner for the classification algorithms.
Step 4. Analyte Identification. The final step of signal processing is to infer the analyte properties and identify/ classify the analytes based on the extracted features. As discussed above, simple physical models can be utilized to correlate the amplitude of spikes to the size and shape of analytes; 52,53 to relate the duration to the translocation speed and nanopore−analyte interaction that in turn are connected to the physiochemical properties such as mass, charge, dipole, and hydrophobicity; 54,55 and to associate the frequency of spikes to the concentration of analytes at a given bias voltage. 55,56 By synthetically considering the three features in a three-dimensional space and utilizing the tools of hypothetical tests, the separability of spike clusters in a feature space can be inferred, each cluster can be attributed to certain characteristics of the analytes, and new spikes can be identified to one of these clusters. For example, the Mahalanobis distance metrics are adopted to assess the similarity of certain spikes with labeled clusters in the feature space so that five different amino acids can be identified. 57 Moreover, a probabilistic fuzzy algorithm is adopted to quantify the concentration range of analytes through a comparison between the Gaussian distribution of the blockage amplitude and the calibration values. 58 The fuzzy property endows the algorithm with flexibility, which can tolerate the data variation from the experimental conditions to some extent. Details of translocation waveform are considered by invoking more sophisticated physical models 55,59 to distinguish proteins based on their fingerprint feature of blockage of current distribution. 60
SS Properties of proteins
Extracting five parameters of proteins: shape, volume, charge, rotational diffusion coefficient, and dipole moment 60 a TSD: time sequence data, i.e., the ionic current trace in nanopore sensors; SS: spike segment, i.e., the current trace segment of a translocation spike; SF: spike features.
ACS Sensors
pubs.acs.org/acssensors Review Instead of DC bias, an AC voltage can be applied as excitation and the corresponding AC current is recorded as signal. 61 It has been shown that the frequency response properties, including magnitude and phase, of translocating nanoparticles, SiO 2 , Au, and Ag, are easily differentiable by employing this AC method.
The non-ML-based algorithms for processing nanopore signals are summarized in Table 1, while the commonly used algorithms in each step are depicted in Figure 3.
The ML-based algorithms are mainly devoted to treating two key aspects in a typical signal processing flow, i.e., spike recognition in step 2 and analyte identification in step 4. Besides, few algorithms are developed for step 1, denoising, and step 3, feature extraction.
Step 1. Denoising. A Deep Neural Network (DNN) can be adopted to filter out the noise from the signals generated by carboxylated polystyrene nanoparticles translocating a 5-μmlong nanochannel. 62 In such an algorithm, the time sequence traces as signals are first sent to a convolutional autoencoding Neural Network (NN) that repeats the convolution of input and passes on the features to the next stage, i.e., an activation function of either rectified linear unit (ReLU) or LeakyReLU. This operation converts current traces into vectors in a highdimensional feature-enhanced space by keeping the features and dropping the time resolution. Next, the vectors undergo deconvolution to reconstruct the current trace in the original size. During the training process, the weights and biases for each node in the NN are tuned by means of gradient descent optimization to evaluate the deviation between the output and the denoised (control) current traces obtained by Fourier analysis and wavelet transform. In this way, the algorithm can automatically identify features and discard noise in the highdimensional feature space, thereby overcoming the limitation of traditional filtering with overlapping frequency components of signal and noise. This is a typical unsupervised algorithm needing no labeled data sets, i.e., the ideal "clean" data without noise, during the training process.
Step 2. Spike Recognition. Regarding Step 2, most efforts are based on the Hidden Markov Model (HMM) strategy. 63,64 The HMM is naturally suitable for the description of stochastic hops between the open-pore state and the blockage state, as a Markov chain. The key to train an HMM is to determine the probability of state transition from one to the other, i.e., state transition probability, and the probability of ascertaining the value of an observed variable with certain values of hidden stochastic variables, i.e., output probability. In order to train the HMM, labeled data sets are necessary. For nanopore translocation signals, the current of each sampling point need be assigned to a given state, e.g., open-pore, shallow blockage, deep blockage, etc. First, a Fuzzy c-Means (FCM) algorithm and a Density-Based Spatial Clustering of Applications with Noise (DBSCAN) have been adopted to cluster the sampled data. Next, the Viterbi approach, which is used to obtain the maximum a posteriori probability and estimate the most likely sequence of hidden states in an HMM, has been utilized to achieve an intelligent retrieval of multilevel current signatures. This approach has enabled detection of nanopore translocation events and extraction of useful information about single molecules under analysis. 65,66 Lately, some feature vectors from HMMs have also been used to provide the characteristics of translocation spikes. 64,67 Finally, the feature vectors are used for further analyte classification. The components of feature vectors include not only translocation spike related features, such as mean value and variation of spike amplitude, but also stochastic process related parameters, such as the transition probability between the open and blockage states and the statistical distribution of emission probability.
Concerning classification and by way of introduction to such a topic, HMMs have also been utilized for classification. For instance, HMM-based duration learning experiments on artificial two-level Gaussian blockade signals can be used to identify a proper modeling framework. 68 Then, the framework is applied to the real multilevel DNA blockage signal. Based on learned HMMs, base-pair hairpin DNA can be classified with an accuracy up to 99.5% on experimental signals.
Step 3. Feature Extraction. As to Step 3, most commonly used algorithms are non-ML-based. Few studies on ML algorithms can be found though. Based on the Residual Neural Network (ResNet), a bipath NN, named Bi-path Network (B-Net), has recently been established to extract spike features. 35 Since the task of counting the number of spikes is essentially different from that of measuring the amplitude and duration, the bipath design, composed of two ResNets, each one trained for one task, has been shown to be robust with compelling performance. During the training process, segments of time sequence traces are first sent to the NN. The predicted values of spike number, average amplitude, and duration of the appearing spikes are then compared with the respective ground truths. Next, the deviations of the predicted values and the ground truths are back-propagated through the NN using the Stochastic Gradient Descent (SGD) algorithm and the weights of each node are adjusted. Finally, the training performance in each epoch is evaluated on a validation data set so that the best trained NN is selected. The training data sets are artificially generated by a simulator on the foundation of a set of physical models, describing open-pore current, blockage spikes, background noise, and baseline variations. The trained B-Net can directly extract the three features of spikes, i.e., amplitude, duration, and number (or FTE), from raw translocation data of λ-DNA and protein streptavidin. The features show clear trends with the variation of certain conditions, which is in agreement with the corresponding physical mechanisms of analyte translocation. The B-Net avoids the inherent subjectivity found on spike recognition with traditional threshold-based algorithms that are dependent on a userdefined amplitude threshold.
A new concept of shapelet has been involved in the feature extraction of translocation spikes. 69 Shapelets are short timeseries segments with special patterns that contain discriminative features. For translocation spikes from nanopore sensors, the tiny fluctuations of the ionic current in the blockage state, i.e., the bottom of the blockage spike, do not always result from noise. They can be characterized by certain regular patterns representing the characteristics of the translocating analytes as well as their interactions with the nanopore. In the learning time-series shapelets (LTS) algorithm, these regular patterns are learned as shapelets automatically from the training data set to maximize the discriminative features among the spikes from different analytes. Then, the similarities of test spikes and these shapelets are measured by the Euclidean distance, as the features of these spikes. Consequently, a multidimensional feature space is established. On the platform of aerolysin nanopore, the LTS algorithm is proven to have the ability to discriminate the translocation spikes generated by 4-nucleotide DNA oligomers with single-nucleotide difference. 69 Step 4. Analyte Identification. Finally, when it comes to Step 4, main trends have been directed toward three approaches, (i) Support Vector Machines (SVMs), (ii) Decision Trees (DTs) and Random Forests (RFs), and (iii) NN-based classifiers. For shallow ML algorithms such as (i) and (ii), the inputs are the features of a signal, i.e., vectors in high-dimensional feature space. The extraction of features, i.e., construction of feature space, are adequately discussed in Step 3 of non-ML-based signal processing algorithms as well as in the previous section. The commonly used features include those from the time-domain of signal, e.g., amplitude, duration, and frequency of spikes, and those from frequency-domain, e.g., peaks in the spectrum. Regarding deep-learning (DL) algorithms such as (iii), the inputs are usually the time sequence of current traces or spike segments. Therefore, the DL algorithms, compared to their shallow ML counterparts, may avoid the tedious feature extraction process that usually needs rich experience and can be subjective. An expanded discussion of this issue is found in the section Strategies of ML-Based Algorithms. However, although rare, it is also found that extracted features have been used as inputs for DL algorithms. 70 An SVM is a linear classifier whose goal is to find a hyperplane in an n-dimensional space that segregates data points belonging to different classes. Consequently, data points falling on either side of the hyperplane can be attributed to different classes. Multiple possible hyperplanes can be chosen to separate two classes of data points. The main aim of an SVM is to find the plane with the maximum margin, i.e., the maximum distance between the data points of both classes. Maximizing the margin distance provides robustness such that future data points can be classified with high confidence. Support vectors are the data points closer to the hyperplane, which are taken as references and determine the position and orientation of the hyperplane, thereby maximizing the margin of the classifier. Since an SVM is a linear classifier, it works better when there is a clear margin of separation between classes, and it is more effective in higher dimensional spaces, i.e., when the number of dimensions is greater than the number of samples. This algorithm does not incorporate nonlinearities to the input points by itself. However, complementary kernels can be involved to realize the nonlinearity. The cost is that they come with the incorporation of more dimensions to the inputs and carry more processing loads. Accordingly, the SVM does not perform well when the data points at different target classes severely overlap. As a result, this algorithm needs a preprocessed data set in its input to build up the high-dimensional feature space. Such a preprocessing step is not linked to the automatic optimization process of the algorithm and has to be conducted using human-engineered tools, which makes this procedure less automatic and adds limitations derived from human subjectivity. 71 In regard to how to utilize SVMs to attain classification, a strategy has been introduced to classify and interpret nanopore and ion-channel signals. 72 The Discrete Wavelet Transform (DWT) is used for denoising nanopore signals and features. Spike duration, amplitude, and mean baseline current are extracted and subsequently used to detect the passage of analytes through the nanopore. First, a two-stage feature extraction scheme adopts the Walsh-Hadamard Transform (WHT) to provide feature vectors and PCA to compress the dimensionality of the feature space. Afterward, classification is carried out using SVMs with 96% accuracy to discriminate two highly similar analytes. Along the same lines, 73 each current blockade event can be characterized by the relative intensity, duration, surface area, and both the right and left slope of the pulses. The different parameters characterizing the events are defined as features and the type of DNA sample as the target. By applying SVMs to discriminate each sample, an accuracy between 50% and 72% is shown by using two features that distinctly classify the data points. Finally, an increased accuracy up to 82% can be achieved when five features are implemented. Likewise, the SVM has also been used to identify two different kinds of glycosaminoglycans with an accuracy higher than 90%. 74 Similarly to nanopore techniques, nanogap sensors generate characteristic tunneling current spikes when individual analytes are trapped in a gap between two electrodes. As is the case for nanopores, this technique has also been used to identify individual nucleotides, amino acids, and peptides at a singlemolecule level. Following this line of research using nanogaps, an SVM has been shown to classify a variety of anomers and epimers via the current fluctuations they produce when being captured in a tunnel junction functionalized by recognition probe molecules. 24 Likewise, a tunneling nanogap technique to identify individual RNA nucleotides has been demonstrated. 51 To call the individual RNA nucleotides from the nanogap signals, an SVM is adopted for data analysis. The individual RNA nucleotides are distinguished from their DNA counterparts with reasonably high accuracy. In addition, it is found through using an SVM for data analysis how probe molecules in a nanogap sensor distinguish among naturally occurring DNA nucleotides with great accuracy. 75 It is further shown that single amino acids could be identified by trapping the molecules in a nanogap being coated with a layer of recognition molecule probes and then by measuring the tunneling current across the junction. 23 Since a given molecule can bind in different manners in the junction, an SVM algorithm is useful in distinguishing among the sets of electronic "fingerprints" associated with each binding motif.
To pursue classification, ensemble learning is involved in signal processing of nanopore sensing. By assembling the results from multiple simple learners, the ensemble learner can achieve a better generalization than by the individual simple learners. 76 A simple learner is usually based on learning algorithms with low complexity, such as DT and NN. An ensemble learner combines multiple simple learners based on the same algorithm, i.e., homogeneous ensemble, or different algorithms, i.e., heterogeneous ensemble. To highlight the advantage of assembling, the individual learners should behave differently, yet with sufficient accuracy. Therefore, an important issue in this scheme is to find a smart way to divide the training data sets for the individual learners, especially for homogeneous ensembles, since the behavior of each learner is based on the training data. According to the strategies adopted to generate these base/component learners, ensemble learning algorithms can be divided into two major categories: (i) the individual learners are generated sequentially with strong correlations in between, and (ii) they are generated in parallel with weak correlation.
Boosting algorithms belong to the first category, such as Adaptive Boosting (AdaBoost), in which the training data for each individual learner is selected/sampled from the entire training data set. The performance of the current learner determines the manner in selecting the training data for the next learner, and as mentioned, the simple learners in boosting algorithms are generated one by one. AdaBoost, assembled by multiple DT classifiers, is used to classify the spikes generated by the mixture of two kinds of 4-nucleotide DNA oligomers with single-nucleotide difference. 67 Bagging, on the other hand, is a typical algorithm of the second category, in which the training data for each learner is selected simultaneously in the training data set. Thus, the learners can be trained individually at the same time. An RF algorithm constructs the bagging ensemble architecture by involving a random selection mechanism in the training data selection for each DT. Thus, the RF shows better robustness and generalization ability than achievable with the simple DT. Moreover, the performances of RF and SVM are contrasted. 77 On one hand, an SVM-based regressor is used to establish the correspondence between specific peptide features inside the pore and the generated signal. On the other hand, an alternative approach for supervised learning can be explored by implementing the RF regression for translocation waveform prediction. The resulting RF becomes more robust to outliers also exhibiting less overfitting.
To boost the generalization ability, Rotation Forest (RoF) has been proposed. It builds classifier ensembles using independently trained DTs. The RoF is proven to be more accurate than bagging, AdaBoost, and RF ensembles across a collection of benchmark data sets. 78 In an RoF algorithm, the feature set is randomly split into a number of subsets in order to create the training data for the base classifiers, i.e., DT, and the selected training data for each DT is further processed usually by PCA. Then, a rotation operation is applied in the feature space to form the new features for the base classifiers. The aim of the rotation operation is to boost individual accuracy and diversity simultaneously within the ensemble.
Along the same research line, RoF ensembles have been used to demonstrate label-free single-cell identification of clinically important pathogenic bacteria via signal pattern classification in a high-dimensional feature space. 79 A similar classifier is used in bacterial shape discrimination 48 and for label-free electrical diagnostics of influenza to distinguish among individual viruses by their distinct features from the same group. 14,50 Recently, RoF and RF-based classifiers have been developed to identify four kinds of coronal viruses according to the features of translocation spikes, even when they have highly similar size and shape. 80 A comparison between RFs and Convolutional Neural Networks (CNNs) has recently been conducted. 81 Using either a set of engineered signal features as input to an RF classifier or the raw ionic current signals directly into a CNN, both algorithms are found to achieve similar classification accuracy ranging from 80% to 90%, depending on the hyperparameters and data sets.
Another major category of classifiers is those based on DNNs with various architectures, such as CNN, fully connected DNN, Long Short-Term Memory (LSTM), ResNet, etc. The DNNs came to the scene to eliminate an important bottleneck in the previously traditional ML pipeline. Essentially, previous ML workflows put feature extractions from raw data in the hands of human experts in step 3 as discussed above. The consideration behind relying on human expertise is not guided by the optimization conducted in the classifiers whatsoever, in order to attain better discrimination.
Great risks are involved in the fact that human judgments could neglect important information and features present in the raw input data. The combinatorial nature of possible correlations among different features cannot be completely contemplated by human expertise. Therefore, essential correlations can be accidentally discarded with the consequential compromises in classification accuracy. Likewise, some feature correlations can be essential in the perspective of human reasoning. Yet, those could also be completely useless statistical features at the time of attaining better classification performance. The DNNs, instead, promote the extraction of features using optimization mechanisms guided by ultimate classification objectives in an end-to-end fashion. By backpropagating errors and applying optimization steps, such as SGD, these architectures modify internal parameters in the networks in order to accomplish better classification performance. Every stage in such multilayer pipelines abstracts more and more relevant information regarding the final objectives of the complete system. The automation of the feature extraction stages in the ML pipeline bypasses an explicit operation in step 3 in Figure 2. Hence, the DNNs can directly process the traces/segments of translocation spikes from step 1 or 2 and achieve outstanding results in a variety of important applications, such as computer vision, speech recognition, and Natural Language Processing (NLP) among many other newer and essential fields. 82 Following the research line of CNNs, a CNN is developed for a fully automated extraction of information from the timeseries signals obtained by nanopore sensors. 83 It is trained to classify translocation events with greater accuracy than previously possible, which increases the number of analyzable events by a factor of 5. 83 An illustration of the step-by-step guide in how a CNN can be used for base classification in DNA sequencing applications is available in the literature. 10 Moreover, a CNN has been adopted to classify different kinds of proteins according to the fluorescently labeled signals from optical measurement of the translocation through solid-state nanopores, which also show spike-like features as electrical current signals. 84 A comparison with the SVM as a more conventional ML method is provided for discussion of the strengths and weaknesses of the approaches. It is claimed that a "deep" NN has many facets of a black box, which has important implications in how they look at and interpret data. Moreover, each translocation event is described by various features in order to enhance classification efficiency of nucleotide identities. 70 By training on lower dimensional data and comparing different strategies, such as fully connected DNN, CNN, and LSTM, a high accuracy up to 94% on average is reached. In addition, a ResNet is trained and acquired the ability to classify the spikes generated by the translocation of two kinds of DNA oligomers, 5′−(dA)7(dC)-7−3′ and 5′−(dC)7(dA)7−3′ that only differ in the sequence direction. 85 Prior to classification, an ensemble of empirical mode decomposition, variational mode decomposition, inherent time scale decomposition, and Hilbert transform has been designed to extract multispectral features of nanopore electrical signals. By combining ResNet with SVM, adenoassociated viruses carrying different genetic cargos are discriminated according to their respective translocation spike signal through a SiN x nanopore. 86 The ResNet extracts abstract "features" of the signal traces, although these features are not describable and cannot be directly correlated to ACS Sensors pubs.acs.org/acssensors Review In ML, Logit is a vector of raw (non-normalized) predictions that a classification algorithm generates, which is usually then passed to a normalization function. Such a normalization function maps the real number line (-inf, inf) to probabilities [0, 1]. If the classifier solves a multiclass classification problem, logits typically become an input to a softmax function (normalization function). The softmax function then generates a vector of (normalized) probabilities with one value for each possible class.
physical meanings, and delivers them to a SVM for classification. Besides the translocation spike signals, a DL algorithm based on CNNs and LSTM architecture can also be used for recognition of the open and blocked states of ion channels by the ionic current levels. 25 It can process signals from multiple channels with multiple ionic current levels. The algorithm is completely unsupervised and, thus, adds objectivity to ion channel data analyses.
An NN-based technique called Positive Unlabeled Classification (PUC) has been introduced to learn the interference of background noise fluctuation with the spikes generated by the four different nucleotides in a nanogap sensor. It uses groups of training current signals recorded with the target molecules. By combining with a common NN classifier, it can identify the four nucleotides with a high degree of precision in a mixture sample. 87 Other ML optimization algorithms such as Expectation Maximization (EM) have also been used for classification. An EM is a widely used iterative algorithm to estimate the latent variables from the observations of the statistical estimation theory. The unobserved latent variables can be induced from the incomplete/damaged data set or variables that cannot be measured/observed directly. In an EM algorithm, the two steps, E and M, are repeated alternatively. In the E step, the values of the latent variables are estimated from the parameters of the stochastic schemes. In the following M step, the stochastic parameters are updated according to the observed variables from the data set and latent variables from the previous E step. By iterating these two steps, the stochastic parameters may converge to their real values. The EM algorithm is adopted in several clustering methods, such as the k-mean clustering and the Gaussian mixture model. The EM algorithm has been implemented to cluster the translocation spikes from wild-type E. coli cells and f liC deletion mutants. 88 Seven features related to the shape of the translocation spikes are selected, and the statistical distribution parameters of the spikes in a seven-dimensional feature space are estimated by applying the EM iteration. In addition, the same algorithm has been used to classify two viruses, influenza A and B, through the translocation signals from peptide functionalized SiN x nanopore sensors. 49 Other classification and clustering algorithms have also been implemented to identify various analytes via the translocation features obtained from nanopore sensors, such as the k-nearest neighbor (k), 89,90 the logistic regression, 69,89,90 and the naive Bayes. 89 An important facet of the ML related algorithms is that they devote significant effort to comparisons among as many methods as possible. For instance, by utilizing ML, it is possible to determine two different compositions of four synthetic biopolymers using as few as 500 events. 89 Seven ML algorithms are compared: (i) AdaBoost; (ii) k-NN; (iii) naive Bayes; (iv) NN; (v) RF; (vi) logistic regression; and (vii) SVM. A minimal representation of the nanopore data, using only signal amplitude and duration, can reveal, by eye and image recognition algorithms, clear differences among the signals generated by the four glycosaminoglycans. Extensive molecular dynamics simulations and ML techniques are used to featurize and cluster the ionic current and residence time of the 20 amino acids and identify the fingerprints of the signals. 90 Prediction is compared among three classifiers: k-NN with k = 3, Logistic regression, and RF with the number of estimators of 9. 90 The ML-based algorithms for processing the nanopore signals are summarized in Table 2, while the commonly used algorithms in each step are depicted in Figure 4.
■ STRATEGIES OF ML-BASED ALGORITHMS
As its name alludes, ML conforms to a set of algorithms that improve automatically through experience. Such algorithms are essentially machines that learn a task from data. One part of these algorithms is mainly regarded as classification machines that use preprocessed features as inputs. The conventional ML techniques, classical MLs, are limited by the information contained in such features, since they are obtained by other algorithms tailored by highly specialized human engineering. Such specialization is bound by human subjectivity, which does not always align with the best decisions at time of providing relevant features to the classifiers for the task at hand. The typical classical ML algorithms for signal processing of nanopore sensing are k-NN, DT, RF, RoF, AdaBoost, and SVM among others.
Following the categorization scheme laid down and the line of argument thus far, all these algorithms share the same strategy of receiving highly preprocessed human engineered features. This approach limits their capability of selfdiscovering relevant features in order to attain a higher performance for the task. Deep Learning, on the other hand, is based on representation learning, which is a set of strategies by which representations of data can be learned automatically to extract useful information when building, for instance, a classifier. 91 To extract the features, clear criteria in traditional algorithms for preprocessing data need consequently be defined and described by unambiguous logic judgments. These criteria are usually based on users' empirical experience, thereby rendering them subjective and case dependence. For example, a user needs to summarize related key features of the spikes by observation and experience in order to single out the spikes from a noisy background in step 2, e.g., the threshold for spike recognition. The prevalent algorithms used in these application scenarios follow this path, and all the links in the path should be expressed explicitly. The limitation for each step is obvious; feature extraction needs experts, but some key features may only bear limited information. The criteria are rigid and stiff, which can be incompatible with highly nonlinear cases turning to an even more complicated and sophisticated structure. Such limitations can be attributed to the weakness of the concept itself, since the traditional algorithms request an explicit representation of everything, including features, variables, and logical relationships. This process inexorably invites subjectivity. With DL algorithms, in contrast, the features of spikes are acquired by the algorithm during the training process and thus include as much of the original information as possible. This approach inherently bears the commonality for wide application scenarios. Its assessment process is flexible and probabilistic, thus implying a more complex and nonlinear logic and indicating a powerful method with robust performance. The whole process minimizes the participation and intervention of users, which warrants a maximum level of objectivity. 91 The automatic feature extraction in DL algorithms is specially beneficial for atypical signals induced by nanopore−analyte interaction, morphology change dynamics, adsorption−desorption, and clogging. Such atypical signals usually do not display the spike-like features of the typical translocation signals. Therefore, it is challenging and requires rich experience to define and extract features for those signals.
The DL strategy builds its own features by highlighting the most explanatory ones, while diminishing the ones with the least explanatory value for the task that the network is commissioned to solve. Such an efficacy is achieved because the feature extraction part of the network uses optimization mechanisms connected to the final optimization algorithms in the pipeline, which are regarded to address the final task. Such connections are provided by back-propagating errors throughout the architecture in a scheme that utilizes derivatives, the chain rule, and SGD. These mechanisms work together on moving the optimal point of the network progressively in order to find some local minimum in a loss function that the system seeks to minimize. Consequently, the feature extractor mechanism harmoniously follows more robust paths of optimization that permit the whole network to achieve the optimum performance. 91 Historically, the main differences in the data to be processed are forwarded to the corresponding DL architectures. Thus, Feed-Forward Artificial Neural Networks (FFANNs) process data in a way that information flows from input to output without a loop in the processing pipeline. In other words, the input to any module in the network is not influenced by the outputs of such modules directly or indirectly. Examples of FFANN include CNN implementations, such as LeNet-5 introduced in 1998 and known as CNNs today. 92 Later, AlexNet was introduced in 2012 93 with a considerably larger but structurally similar architecture (60,000 parameters for LeNet-5 vs 60 million for AlexNet). Then, VGG-16, developed in 2014, introduced a deeper (with 138 million parameters) yet simpler variant of the previous architectures. 94 Inception network (or GoogleNet) was also introduced in 2014, 95 with its 5 million parameters in version V1 and its 23 million in version V3. As networks started to become deeper and deeper, it was noticed that adding more layers would compromise the quality of gradients. The advantage of the network concept could eventually vanish or explode exponentially with the number of layers. Nowadays, this limitation can be mitigated by employing a new architecture called ResNet, which incorporates skip connections to residual layers. There are several ResNet variants, for instance, ResNet 50 with 25 million parameters. Another architecture called ResNeXt is an extension of ResNet by replacing the standard residual block with one having a different strategy. 96 Finally in the DenseNet architecture, the feature map of each layer is concatenated to the input of every successive layer within a dense block. This strategy encourages feature reuse thus allowing later layers within the network to directly leverage the features from earlier layers. Compared with ResNet, DenseNets are reported to possess better performance with less complexity. 97 For instance, DenseNet has architectures with the number of parameters ranging from 0.8 million to 40 million.
Concomitantly, Recurrent Neural Networks (RNNs) allow the existence of loops in the pipeline. Derived from FFANNs, RNNs use their internal state (memory) with temporal dynamic behaviors to process variable-length sequences of inputs. Basically, an RNN uses sequential data, i.e., time series, all regarded as temporal problems in language translation, sentiment classification, NLP in general, Automatic Speech Recognition (ASR), 98 image captioning, music generation, etc. Their memory from prior inputs influences the current network's internal state and output. An important property of RNNs is that they share weights along the sequence and apply Backpropagation Through Time (BPTT) throughout the sequence in order to learn. 98 The main principle of BPTT is the same as the traditional back-propagation, where errors are back-propagated from its output layer to its input layer. However, the BPTT differs from the traditional approach in that it sums up errors at each time step, whereas FFANNs do not need to do so, as they do not share parameters across each layer (CNNs do share weights too, but such sharing occurs through the feature space and not through time).
There are several variants in the RNN reign. For instance, Bidirectional Recurrent Neural Networks (BRNNs) pull information from future data in order to improve the accuracy and LSTM and Gated Recurrent Unit (GRU) are created as a solution to the vanishing gradient problem. 99 Recently, attention mechanisms have been introduced in new algorithms configuring the state-of-the-art today. Attention is a technique
ACS Sensors
pubs.acs.org/acssensors Review that mimics the cognitive attention process in the human brain. Initially, it was applied to solve typical problems normally tackled by RNNs but completely precluding recurrence. Today, attention almost completely spans the ML application landscape. Attention enhances the more relevant parts of the input data while fading out the rest in regard to the task that the network seeks to solve. Famous examples with big breakthroughs are Generative Pretrained Transformer (GPT) 2 and 3, 100,101 as well as Bidirectional Encoder Representations from Transformers (BERT). 102
■ HOW TO GET STARTED
There is a series of feasible steps one could follow to aim for a successful ML application. Yet, such steps will not necessarily be conducted once on the ML process. Conversely, this stepby-step procedure is cyclic, returning once again to the first step and optimizing strategies in each step to achieve better results, 103 as shown in Figure 5. First of all, the problem to be solved needs to be characterized. Characterizing a problem means to understand, define, and delineate it by identifying its challenging aspects. Characterizing a problem in ML is to define what the algorithm will have as input information and what it will need to return as output. The loss functions and performance evaluations are selected once the input−output is defined. This is the step where valuable knowledge from domain experts helps in collecting the relevant data in order to understand the target requirements.
The second step is to collect appropriate data sets for training. The volume, type, and quality of the data depend on both the complexity of the ML strategy and the problem defined in Step 1. Typical questions that can arise in this stage include: Is this a classification or regression problem? Is there enough labeled data? Could we approach the problem by generating artificial data to train the model? Can we transfer knowledge from an available data set to a target data? In the case of data scarcity, can we augment our data set? In which way? The NVIDIA Data Loading Library (DALI) is a platform for data loading and preprocessing to accelerate deep learning applications. Using DALI, one can augment their own data sets by offloading them on graphic processing units so as to avoid the bottlenecks of central processing unit in the processing pipeline. Collecting an appropriate data set is a fascinating and complex problem in itself. This problem frequently entails complete investigations in which prestigious research groups devote years of laborious work. 104 Data exploration and preparation is the third step. On one hand, interacting with the data set, substantially exploring it before its final utilization, is mandatory in order to get more insights that help in the ML strategy selection and optimization. Data exploration includes testing its separability, linearity, monotonicity, and balance, finding its statistical distribution and spatial and/or temporal dependency, etc. On the other hand, data preparation deals with arranging the data for training, validation, and testing procedures. This stage usually includes cleaning, normalizing, segmenting, balancing, etc.
The fourth step concerns implementation, i.e., the ML strategy is selected and then trained and validated to finally be tested. Choosing an appropriate strategy depends on the combination of a multitude of factors such as the kind of data set to be processed and the problem to be solved. A myriad of different architectures can be chosen for different problems.
For instance, when the problem is related to computer vision, a suitable architecture is the one with considerable visual inductive bias such as the variant of a CNN. Proper architectures for NLP are the ones with a recurrent structure, such as LSTM or GRU. Nevertheless, all kinds of rules in these aspects have shown to become obsolete with time. Today, the best architectures for NLP are shown to dispense with recurrence using self-attention with transformers. 105 Likewise, such architectures have been taken from the NLP world and successfully been applied to image classification. 106 In some cases, a combination, using a self-attentional architecture by preprocessing the inputs using a pretrained CNN as a backbone architecture, is applicable to more complex tasks, such as object detection in computer vision. 107 However, there is no universally superior ML algorithm according to the no free lunch theorem for ML. Typical DL frameworks are Tensorflow and Pytorch, among others. Choosing the right DL framework for one's needs is a topic in itself and is beyond the scope of this guide.
After each training epoch, validation is conducted. Validation metrics are chosen to select the best performing epoch in an iterative manner. The current epoch with outperformed results will be stored. Otherwise, it will be discarded. Until the performance meets the requirement, the best one will be used on the testing data set. The validation metrics could be different from or the same as the ones used for the final testing.
For implementation, there are diverse ways to develop and share codes. The most widely used platform is GitHub that is a provider of Internet hosting for software development and version control using Git. Git is the software for tracking changes in any set of files, but it is mostly used for tracking changes in software development files. For sharing data sets and code, general-purpose open-access repositories, such as Zenodo, are the preferred options.
■ PROPERTIES OF ML-BASED ALGORITHMS
Algorithm Performance Evaluation and Benchmark. Performance evaluation is crucial for the development of any algorithm. It directly affects algorithm selection and parameter tuning. Different schemes exist to evaluate the deviation between the ground truth and the prediction generated by an algorithm, known as the error. During training, weights are adjusted to minimize the errors produced on training data sets, i.e., training errors. To evaluate the generalization capacity of the algorithms, the errors produced on the validation data sets, i.e., generalization errors, are relevant for practical applications. Usually, the performance on validation data sets is used during training to select the best performing implemetations. Such implementations will be finally utilized in real test data sets. Validation also provides a reference to tune the structural parameters, such as the number of layers and nodes in an NN. Accordingly, by comparing the performance of different algorithms on a validation data set, the most suitable algorithms can be selected for further application in real-life scenarios (test data sets).
Regarding the continuous output from the regression tasks, such as the denoised current trace from step 1, extracted spike segments from step 2, continuously varied spike features from step 3, and inferred properties of analytes with continuous values from step 4, relative error (err r ) and mean-squared error (err ms ) are usually employed as indexes to evaluate the performance, defined as where x is the measured value, x 0 the ground truth, and m the total number of output points. The relative errors can be calculated for each data point/situation, and the average and standard deviation of these relative errors on the total output points m can be further derived to reflect the overall performance of the algorithm on a certain data set. 35 For discrete outputs from the classification tasks, such as identified classes of the analytes from step 4 and extracted spike features in qualitative, categorical, or attribute variables, error rate (ER), and accuracy (Acc) are commonly adopted to count the incorrectly and correctly classified data, respectively where χ(.) is the indicator function equal to 1 when the condition is valid and to 0 otherwise. In addition, a standard Fmeasure is widely employed to evaluate the classification performance. 69,83,87 By comparing with the ground truth, a Precision and recall are not useful when used in isolation. For instance, it is possible to have perfect recall by simply producing a less restrictive classification of samples without worrying about false positives. Similarly, it is also possible to obtain a very high precision by just being very restrictive about the classification of positive samples, virtually invalidating the chances of false positives. Therefrom, the F score is a kind of trade-off that combines precision and recall as a figure of merit to evaluate the overall performance, i.e., Furthermore, other performance measures can be commonly seen according to the specific situations, such as receiver operating characteristics and cost curves. 108,109 To compare the overall performance among various algorithms on a higher level, directly ranking the value of the aforementioned performance indices is not a comprehensive manner. Considering the stochastic factors in the training/testdata selection and training process, hypothesis tests based on the statistical theory are usually adopted. 110 In general, acquisition of the ground truth is always difficult in performance evaluation. For example, it is usually impossible to acknowledge the ground truth, e.g., a clean signal without noise, true values of amplitude, duration, and FTE, etc., from the measured experimental data. An indirect route is to analyze the rationality and consistency of the outputs with the assistance of related physical models. Apart from experimental results, it can be beneficial to use artificially generated data, if they are accessible, to evaluate an algorithm. Usually, the generated data may come from simulations or modeling from which the ground truth is "known". Thus, a well-established set of physical models and related simulation frameworks are crucial for evaluating algorithms.
Reproducibility as a Means for Result Reliability. Reproducibility is inextricably associated with the scientific method in itself. Any result, whether for experimental measurement or algorithm implementation, must be accompanied by clear descriptions delineating its replication procedure under explicit conditions.
In order to obtain reliable results from the signal processing algorithms, a general agreement of experimental data repeatability with considerable signal-to-noise ratio is necessary. Various algorithms have been specifically designed for unique patterns of signals. If the processed signals are closer to such typical cases, the output results are more reliable and interpretative. It is obvious that in the ML-based algorithms, the features/patterns/properties learned are based on the training data sets and regarded as the essential connotations of a certain category distinguished from other categories. Therefore, these acquired connotations should be repeatable and reliable such that they can represent the essential differences among these categories in the real world.
Efforts can be made from two aspects to reinforce the reliability. One is a strict control of experimental conditions to guarantee repeatability as much as it could attain, such as standardization of experimental procedures, careful handling of nanopore devices, robust screening of noise interference, etc. The other is an improvement of the generalization ability of algorithms. Multiple variations should be involved in the data sets for algorithm training toward complicated scenarios so as to boost the robustness. Moreover, a suitable architecture of algorithms with a proper scale should be carefully selected to avoid overfitting due to randomly appearing details.
As for the implementation of ML, any one should be able to achieve the same results by using the code and data. In this way, the same computations could be easily executed to replicate identical results. Nowadays, there are exceptional tools to achieve this endeavor. For instance, a complete project can be shared online using Git and GitHub. A release of the code can be issued at the moment of publication of the experiments. Such a release allows researchers to access the same version of the code in its state at the date of publication. Researchers can also combine such tools with general-purpose open-access repositories, such as Zenodo, that allow for deposition of data sets and research software fully available online under specific licensing conditions. Yet, today's advantages regarding ML reproducibility do not end there. Incorporating additional improvements to already cloned implementations is easily achievable to build up on stable releases. The DL frameworks allow the research community to build powerful ML implementations progressively, step by step, through well tested and appropriately optimized baselines. Typical examples of these frameworks include Pytorch and Tensorflow. Such tools enable the research community to coherently build DL applications that can be rapidly modified by reconfiguring hyper parameters and computation graphs in a modularized fashion. With these tools at hand, researchers can not only replicate but also build upon released implementations by modifying them for their own needs.
Data Preparation and Data Utilization Strategies for Algorithm Training. It is not necessary to show a human infant a giraffe more than twice in order to get her to identify it in different positions and light conditions. Such a scenario is a far-reaching one for ML. Instead, the success of any ML application is at best uncertain without massive amounts of data. For today's ML standards, data, in considerable amounts, is available for only a subset of powerful companies, and academia is usually left out. Generally, data is considered a scarce resource, let alone accurately labeled data that is far from abundant. 111 Data has to be annotated manually according to human judgment, which is an extremely costly and time-consuming process. Crowdsourcing, on the other hand, is an alternative approach, which exploits the crowd to annotate data and thus significantly reduces human labor and therefore cost. Yet, results from crowdsourcing are far from perfect and bear numerous low-quality annotations. 112 Appealing to the example of recognizing images, some tasks in this area are simple, such as categorizing dogs, and can be done by nonspecialized staff. Conversely, labeling medical images, such as the ones found in cancerous tissues, needs deep medical expertise, which is extremely hard to access. 113 Supervised learning is the leading cause for this problematic situation, whereas alternative solutions to this problem can be referred to nonsupervised paradigms. For instance, semisupervised learning is an extension of supervised learning that uses unlabeled data in conjunction with labeled data to improve learning. Classically, the aim is to obtain enlarged labeled data by assigning labels to unlabeled data using their own predictions. 114 Another example is unsupervised representation learning methods that make use of unlabeled data to learn a representation function f such that replacing data point x by feature vector f(x) in new classification tasks reduces the requirement for labeled data. Typical examples of such methods are self-supervised methods and generative algorithms. Finally, reinforcement learning is an optimized-data alternative to supervised learning, since the sample complexity does not depend on preexisting data, but rather on the actions that an agent takes in the dynamics of an environment. 115 Another general solution can be data augmentation. It involves a set of methods that apply systematic modifications to the original training data in a way that it creates new samples. It is regularly utilized in classification problems with the aim of reducing the overf itting caused by limitations imposed by the size of the training data. Augmentations can be basic or generative, depending on if they are handcrafted by humans or artificially learned by machines via utilizing generative algorithms. They can be applied to data-space or feature-space. They can be supervised or unsupervised depending on if they rely on labels or not.
Knowledge sharing aims at reusing knowledge instead of relying solely on the training data for the main task. This category comprises (i) transfer learning, which aims to improve learning and minimize the amount of labeled samples required in a target task by leveraging knowledge from a source task; and (ii) multitask learning, which involves no distinction between source, target task, and multiple related tasks. They are learned jointly using a shared representation, (iii) lifelong learning, which aims to avoid "catastrophic forgetting" (catastrophic forgetting basically means the loss or disruption of previously learned knowledge when a new task is learned) and (iv) meta-learning, which automates the experiments that are required to find the best performing algorithm and parameters of the algorithm resulting in better predictions in shorter time.
In the realm of nanopore translocation events, several applications, such as spike recognition, feature extraction, and analyte identification, can be solved using shallow ML or sophisticated DL schemes. In contrast to traditional algorithms that usually rely on expert knowledge and experience, ML has advanced with important achievements exemplified by its success in addressing most of the issues in this area during the past decades. Shallow ML has mainly offered new mechanisms with the capacity to automatically learn from data, solving problems of feature extraction, classification, identification, and regression, in the signal processing for nanopore sensors. Even when the parameters of the ML algorithms are automatically adjusted from data, the inputs to such algorithms have to be preprocessed considerably in order to make them digestible by the algorithms. However, this preprocessing step usually requires human expertise, which is subjective and sometimes incompatible with the ultimate goal of the learning algorithms.
To overcome this challenge, DL can recognize and automatically extract highly specialized features from the raw data by a training process that agrees with the latest classification or regression stages. This solution can considerably improve the performance of system tasks. By taking such a strategy, DL has been applied to solve analyte classification with automatically extracted features, 70,83 translocation waveform regression and identification, 25,77 and noise recognition and elimination 62,87 in nanopore sensing. Yet, DL has its own drawbacks that render it difficult to implement in some scenarios. To begin with, DL is inherently a data-hungry strategy lacking mechanisms for learning relevant abstractions from few explicitly exposed examples. This pitfall is far from how humans solve problems on a daily basis. Additionally, DL works best when there are thousands, millions, or even billions of training examples. 101 In problems with limited data sources, DL is not an ideal solution. In the specific area of nanopore sensing, real traces collected from nanopore translocation experiments could be abundant, but they are not labeled. Recruiting staff for labeling such data is not viable, given the extension of the data sets needed to train any conceivable DL architecture. Palliative strategies as the ones discussed above could solve the problem at least partially. For instance, data can be augmented in several ways, knowledge of the system can be transferred to new tasks, and alternative unsupervised tasks can serve as pretraining examples to improve the performance obtained from scarcely labeled data sets. Moreover, it is paramount to develop good strategies to augment the available data or to pretrain the architectures on, for instance, unsupervised tasks before training (fine-tuning) them on labeled data for the final downstream tasks. Generating artificial nanopore translocation signal traces appears to be a good option. Such a path has its own caveats though, since generating a data set with the same probability distribution as the experimental data is an impossible endeavor.
Nonetheless, an approximation can be achieved and the better it is, the better the network can be trained regarding experimental data sets. From the perspective of the architecture, DL offers a rich repertoire of alternatives with different characteristics. A reasonable strategy seems to be following the trends utilized in current state-of-the-art computer vision and language models. For example, it is a reasonable path that employs CNNs as preprocessing backbones pretrained on unsupervised tasks. Afterward, it finetunes such backbones using attentional architectures, such as transformers. Finally, it trains them on supervised downstream tasks with reduced labeled data sets. Therefore, it demands new ways of augmenting nanopore translocation data or alternatively generating unsupervised tasks in order to pretrain the architectures.
■ CONCLUSION AND OUTLOOK
Nanopore-based sensors have found myriad existing applications and confer potential in a wide range of scientific disciplines and technological areas. The realization of nanopore sensors has critically benefitted from today's mature biotechnology and semiconductor fabrication technology. Signal processing is an inseparable component of sensing in order to identify the hidden features in the signals and to analyze them. In general, the signal processing flow can be divided into four steps: denoising, spike recognition, feature extraction, and analyte identification. Following this processing flow, the developmental tactics and features of the algorithms at each step are discussed with implementation examples, by categorizing them into ML-based and non-ML-based classes.
With the application of ML, the performance of an algorithm is enhanced to a great extent, especially for classification tasks, thus facilitating the wide spectrum of real-life applications of nanopore sensors. Lately, an increasing number of novel algorithms are developed periodically. Thus, in this work, a comprehensive guide is provided with further discussion on the special properties of ML-based algorithms that are shaping up a new paradigm in the field. A successful nanopore technology builds on two hand-inhand pillars, i.e., the "hardware" comprising, apart from essential biochemistry, device fabrication, integration, upscaling, electronics, surface management, and the "software" named signal processing. Nanopore sensing signals are generally different from those of other sensing approaches and require special treatments. Three sets of major challenges need to be resolved in order to take full advantage of the great potentials of nanopore technology. (i) The complicated physics in the intertwined processes of ion transport and analyte translocation makes the mechanisms behind signal generation intriguing, since they depend on a large range of different factors. (ii) The nanoconfined space, surfacedominant processes, multiorigin noise, high environmental susceptibility, and weak long-term stability invite serious concerns about the quality of signals. Achieving a quantitative and precise description of the signals can render a challenging proposition. (iii) The great variability found in the configuration of sensor structures and experimental measurements demands the handling of the nanopore signals to seek interpretation of the widely varying data and to standardize procedures, tools, and protocols. In order to respond to these three challenges, two aspects are considered. On one hand, sophisticated physical models based on the established translocation mechanisms are required to assist the evolution of corresponding algorithms. On the other hand, strategies for performance enhancement regarding accuracy, objectivity, robustness, and adaptiveness need be outlined.
As can be seen from the general flow of signal processing for nanopore sensors, each step has its own purpose. No single algorithm/strategy can resolve all problems covering the entire flow. Moreover, this flow is not strict and can be redesigned to take into consideration variations in sensor structures, measurement configurations, target analytes, and application scenarios. Thus, the algorithms are highly application-specific, and some may skip certain steps while others may need to integrate several.
Further development of algorithms for nanopore sensors should consider three aspects. (1) Modularity in each step is a necessity in order to retain the flexibility of the signal processing flow. Users should be able to select suitable algorithms, according to the nature of the data, so as to accomplish the entire task from raw data to final extraction of analyte properties. Standardization of the inputs and outputs is required for each step. (2) Tailorability is another important feature that users should be provided with. Some system parameters for each algorithm, such as format of data, option of data pretreatment, interested feature of the signal, etc., may need reconfiguration in order to adapt to the specified application. (3) A synthetical platform as a package solution is welcome. It integrates several algorithms in all steps and assembles a pipeline of the signal processing by the users' preference. Furthermore, the performance of different algorithms can be compared systematically, which offers a reference for users' selection.
Advanced algorithms should be able to assess more stereoscopic data than generated electrically by analyte translocations. Optical and nanomechanical signals are complementary examples. Algorithms of the next level are also able to evaluate experiment-related information such as design and fabrication parameters and characterization conditions. Co-design of experiment and algorithm would be the ultimate.
■ VOCABULARY
Nanopore sensor, A sensor device with a nanoscale pore in a separation membrane. It usually works in electrolytes to count or analyze analytes, such as biomolecules, upon their passage through the pore that generates signals of electrical, optical, or mechanical nature; Pulse-like signal, A time-sequence signal that consists of rapidly increasing-decreasing changes, i.e., pulses, on a generally stable baseline. The appearance of these pulses can be either random, such as neuron spike signals and nanopore translocation signals, or regular, such as electrocardiographic signals; Machine learning, A family of algorithms whose current output is associated with its history input or the distribution of the history input. Hence, such algorithms are built on recurrently "learning" from the history/distribution of the input. Such learning can be either explicit, as in a supervised training process, or implicit, as in some unsupervised clustering algorithms; Signal features, Abstract parameters that characterize the information of a signal. Feature extraction is a process of information compression that uses several parameters to represent the information of interest for the signal; Labeled data, Data with correct answers. They are marked and subsequently used for training an algorithm in accordance to a supervised learning process. These correct answers are the information of interest that are anticipated from the algorithm upon inputting corresponding data. They can be the right class of the data and/or the right values of certain quantities of the data; Deep vs. shallow learning, Two different categories of machine learning algorithms. Deep learning is based on a huge number of tunable parameters that can be even larger than the amount of training data, such as those in a neural network, while shallow leaning uses a relatively smaller scale of tunable parameters, such as those in a support vector machine; Classification vs regression, Two distinct categories of machine learning tasks. A classification task requires that an algorithm identify the input data and divide them into several preset classes, while a regression task anticipates that an algorithm predict the values of continuous variables, usually the characteristic quantities of the data | 19,266.8 | 2021-10-04T00:00:00.000 | [
"Engineering",
"Materials Science",
"Chemistry"
] |
A comparison between the finite element method and a kinematic model derived from robot swarms for first and second gradient continua
In this paper, we consider a deformable continuous medium and its discrete representation realized by a lattice of points. The former is solved using the classical variational formulation with the finite element method. The latter, a 2D discrete “kinematic” model, instead is conceived to determine the displacements of the lattice points depending on interaction rules among them and thus provides the final configuration of the system. The kinematic model assigns the displacements of some points, so-called leaders, by solving Newton’s law; the other points, namely followers, are left to rearrange themselves according to the lattice structure and the flocking rules. These rules are derived from the effort to describe the behaviour of a robot swarm as a single whole organism. The advantage of the kinematic model lies in reducing computational cost and the easiness of managing complicated structures and fracture phenomena. In addition, generalizing the discrete model to non-local interactions, such as for second gradient materials, is easier than solving partial differential equations. This paper aims to compare and discuss the deformed configurations obtained by these two approaches. The comparison between FEM and the kinematic model shows a reasonable agreement even in the case of large deformations for the standard case of the first gradient continuum.
The origin of this idea lies in the Computer Graphics industry [2]; in video games, it is necessary to calculate the deformation and breakage of objects very quickly. The body is discretized, and PBD tries to obtain a plausible description of its deformation, considering only the relative positions of the points to describe the action of the internal forces. It is imperative in the visual effects industry to provide movies with simulated versions of complex physical phenomena that are too expensive or impossible to solve precisely. In such a commercial environment, these simulations must be as efficient and fast as possible while remaining realistic to the viewer's eye.
Initially, approaches to simulating dynamic systems in computer graphics were often based on force [3] and Newton's equations. Internal forces (i.e. elasticity) and external forces (i.e. gravity) are computed to obtain the displacements; they are derived from Newton's law using the discrete time integration method to update the velocities, from the accelerations, and finally, the positions of the discretized object.
This classical approach presents many problems which are typical for numerical integration systems. Moreover, collision constraints and penetrations are difficult to manage. Also, while external forces are relatively easy to apply, problems occur in calculating internal forces. Sometimes these forces are represented by a massless linear spring (spring model); this model constitutes an important class of models that can be characterized in different ways by changing the geometry of the springs and their type. Some difficulties may arise when one has to separate bending stiffness from torsional stiffness. It is possible to establish micro-macro relationships to link spring constants with the constitutive parameters of bulk materials (i.e. Young's modulus and Poisson's ratio). Some other models to describe internal forces may be unstable or require high computation time.
Another possibility involves the use of energy minimization methods and the actions principles [14]. These are guided by variational principles, which state that a constrained surface will take the form that minimizes the total strain energy. Unfortunately, differential equations are still needed to calculate the force by expressing the energy of a surface in terms of its local deformation and differentiating the energy with respect to position. The problem is even more difficult in the case of the second gradient, where the energy is written in a more complex way, and Lamé's two constants increase by five. This because in the case of strain gradient theory of a centro-symmetric and isotropic material, there are 5 additional constants (so they are 7 instead of 2 in the simplest case).
Another possibility emerged in the 1970s with smoothed particle hydrodynamics (SPH) [15]. But in this case one always starts with the Navier-Stokes equations to be discretized and thus the computational cost is high.
We emphasize how in computational sciences the main focus is on accuracy. In contrast, the main issues in the Computer Graphics industry are stability, robustness, and computational speed, keeping the results plausible. So far, a new method has been developed in which it is preferable to have direct control over the positions of objects, often represented by the vertices of a mesh, avoiding collision between them and managing their displacements instead of dialoguing with forces. The method is known as Position-Based Dynamic (PBD) and was introduced by Jakobsen in 2001 [16]. It works directly on positions, which makes manipulations and deformations easy to manage. Moreover, with the PBD approach, overshooting and energy problems, related to explicit integration can be avoided because the integration can be directly controlled. The advantage of this technique, even over a mass-spring model, is its stability, even when taking large time steps, which is not always possible in the mass-spring model. The new position can be expressed as an algebraic function of the old position and the time step. The algorithm is iterative and can be stopped at any time if a certain inaccuracy is accepted: If the time step is larger the algorithm can go faster and thus a time/accuracy trade-off must be chosen. Problems with internal forces are avoided by replacing them with a set of geometric constraints. In the simple case, the constraints are holonomic and bilateral. To simulate elastic behaviour, for example, the massspring model can be converted to a constraint system, which requires the distance between connected nodes to be equal to the initial distance. This system can be solved sequentially and iteratively, moving nodes to satisfy each constraint until sufficient accuracy is achieved. One of the main limitations may be the dependence of the results on the number of iterations of the solver. Still, one of the significant advantages of a position-based approach is its controllability. Constitutive parameters of the material are hidden in the rules of interaction between first neighbours, hence in the constraints.
In some previous papers [17][18][19][20], we have shown how a 2D kinetic model is able to handle different types of phenomena, such as elasticity, plasticity, and fracture, using the same framework. The ultimate goal is not to obtain an exact description of the deformation, but a plausible visible description. This work aims to obtain an algebraic equation that can calculate the position of a point using the placement of other points as input.
This kinematic approach has the advantage of working with simple algebraic equations and the complexity of the system increases only linearly, rather than quadratically, as the number of points increases. We want to emphasize now that, in this paper, we use an approach that is only partially similar to PBD because, so far, we only use position data.
In the context of underwater swarm robotics, the problem related to flocking rules to determine what the behaviour of a single robot should be so that the swarm reaches a certain assigned configuration is often faced. One option to accomplish this task is to consider what a given number of its neighbours are doing. Using this information, a pure kinematic model can be developed with which we try to reproduce some behaviours of two-dimensional deformable bodies according to the standard Cauchy model as well as to the second gradient theory. All the constitutive properties of the material are within the rules determining the kinematic displacements of the particles, each with respect to the others. This tool has an advantage in terms of computational cost and is very flexible to be adapted to samples of complex geometry and different behaviours. The results are encouraging and even fracture can be considered easily and in many different ways.
The other PBD methods we found in the literature require knowledge of the particle velocity, are related to Newton's equation, and use mass explicitly. We avoid, for now, the explicit use of these concepts by using a purely positional kinematic approach. Therefore, in this work, we refrain from considering basic aspects such as mass, velocity, and conservation of momentum but only evaluate kinematic characteristics and transformation rules. The most crucial problem, then, is how to reconnect the classical constitutive equations, which determine the characteristics of the model and the material, to the parameters of the kinematic model under study.
The proposed model is based on swarm intelligence theory [21]. The algorithm that exploits is parallelizable and runs with the CUDA programming language, so we can obtain numerical simulations in a few seconds, even if they are very complex, using a modest desktop PC. Remark that the model does not yet have any physical meaning. It concerns only a set of variables, which are points in a Cartesian plane, that change coordinates in successive steps by iterations of an algorithm. Therefore, physical concepts such as "motion", "time" or "velocity" are not considered at this stage of the model development. It will be the subject of future work to connect the model parameters with the constitutive equations of the material.
This tool could be helpful in complex microstructures not easily analysed by the Cauchy continuum [22]. Classical Cauchy continua cannot accurately predict highly inhomogeneous microstructures [23][24][25][26][27] and [28]. However, generalizations must be introduced by considering additional degrees of freedom to account for kinematics at the microstructure level or by including higher displacement gradients than the first one in the strain energy density [29][30][31][32][33][34][35][36]. The latter is particularly relevant when considering the technological interest in developing exotic mechanical metamaterials capable of performing targeted tasks; therefore, investigating new and efficient algorithms, such as our tool, is of great interest.
In the second gradient theory, the Lagrangian function also depends on the second derivatives of the displacement field and not only on the first ones. The origin of this model lies in the fact that higher-order derivatives have sometimes been included to obtain more general models; Lagrangian functions dependent on higher-order derivatives are often called non-local in the literature [37][38][39][40][41][42][43][44][45][46][47][48][49][50]. This terminology can be explained by considering that, while for a standard material, in a discrete framework with a lattice of points, a simple interaction between a given point and the immediately neighbouring points, in the continuum limit involves an energy function based on the first derivative of the placement field according to the Piola's ansatz. In the case of the nth gradient, the interaction among points involves all the nth frame neighbouring points in the lattice. Therefore, in the continuum limit, the energy depends on the nth derivative of the placement [51][52][53][54][55]. Indeed, in practice, one must measure finite differences of higher order in a discrete framework to estimate the values of the higher derivatives. Furthermore, by introducing new characteristic lengths, higher-order models also introduce new parameters that require appropriate methods for their determination [56][57][58][59][60][61][62].
Sometimes remote actions are excluded, which has led to the introduction of local Lagrangians; however, in some situations, non-local Lagrangians must be considered. For example, in composite materials where microfibers are present, these can introduce non-local interaction between points that are not close in space due to the connection between distant points provided by the fibres.
We are aware that many mathematical results are challenging to satisfy, but we believe that the proposed discrete algorithm can be helpful in a wide variety of mechanical applications.
The paper is organized as follows. The first section contains a description of the swarm robot model. The second section shows numerical results comparing a 2D continuum with a first and second gradient energy and the proposed kinematic model. Future developments on this topic and open questions are presented in the conclusions.
The tool
The use of robot swarms in underwater work is becoming increasingly important; flocking rules have been applied to determine the displacement of the individual robot in order to reach the assigned final configuration of the entire swarm [63]. The new position of the robot is determined by the position of its neighbours; we applied the same algorithm to describe the deformation of a lattice of material particles. This is a purely kinematic approach to the problem, without the use of the concepts of mass or force.
In practice, we proposed to consider a transformation operator, with constraints, between matrices representing the particle configuration, C t , for a discrete set of time steps t 1 t 2 ,...t n. This operator can be represented by a fourth-order tensor that transforms a two-dimensional matrix into another two-dimensional matrix. The continuous two-dimensional body is discretized into a finite number of particles lying, in the undeformed configuration, in the lattice node. A complete description of the algorithm can be found in [17,18,20] where we obtained a plausible description of success even in the case of material fracture. In the previous articles, we have defined four types of particles that make up the lattice but, if necessary, new types can be introduced, to describe other properties, due to the modular structure of the algorithm. They were the leaders whose movement is assigned and determines the movement of the other particles. The followers whose movement is determined by the flocking rule with other particles; we also had the frame, introduced to avoid edge effects and the fourth type concerns fracture These last two will not be used in this paper, where. a model without the frame avoiding edge effect with different flocking rules is considered.
Some hints from graph theory help us to note that time and velocity are concepts that do not belong to this model; therefore, we prefer to talk about pseudo-time as an update step of the algorithm.
We then consider a graph as a mathematical structure used to model pairwise relationships between objects. We denote a graph G = (V, L) as an ordered pair of a set P of points, a set L of links. We will consider only undirected graphs, so there is no direction associated with the links. Considering n points V i , with i ∈ {1, 2, ..., n}. We define the adjacency matrix as: the matrix A has dimension n × n. This is not the only possible definition of adjacency matrix; to satisfy some properties of our swarm we can change it, i.e. change the definition of neighbour.
A path on the graph is defined as a finite or infinite sequence of links joining a sequence of points; it will be useful to define the concept of neighbourhood order, related to gradient and non-local order.
To describe the time evolution of our swarm we consider a set of p-time (pseudo-time) steps, i.e. a set of ordered discrete values of the time variable t that we will denote with T = {t 1 t 2 , ..., t n }. We talk about pseudotime because tit is only an index parameter of the loop. A swarm, denoted as S,is composed by n material points P i (i from 1 to n). At pseudo-time t m the i element of the swarm has position P i (t m ) = (x i , y i )(t m ) where x, y, are the coordinates in a reference system. So far P i is the material point and P i (t m ) is its position represented in Cartesian coordinate. At time t 0 , we denote by C 0 as the reference configuration of S.Therefore, S is the swarm as collection of points P i while the configurations C tm are the set of coordinates of them at pseudo-time t m . We assume that C 0 is the representation of a bidimensional crystalline lattice to be specified. For each elementi in Swe define a fixed set of neighbour's points, calculated in the reference configuration C 0 . Their number, typically, is nc, the coordination number of the lattice. We call this set of point the first neighbours. The process is iterative so we can define the neighbours of each neighbours and call this set the second neighbours.
The first neighbours of a point P i so are all the points of S at time t 0 adjacent to the point P i in the meaning of the graphs (so it is related to the lattice coordination number). The second neighbours of a point P i are all the points of S whose minimum path without repetitions consists of only two links. Therefore, defining l(i, j) as the minimum path without repetitions between points i and j, the k-th neighbours of a point i is the set So, Ne k (i)is the set of point neighbours of i order k. We call it the shell of order k. We can also consider pairs of opposite neighbours of chosen point; this will be useful in defining the rule to compute an alignment term. The set of pairs k-order opposite neighbours of a point P i is the set Therefore, we have defined, assigned a point P i and its kneighbours a couple of point composed by the kneighbours and its opposite neighbours h. So, the j-th pair of the point P i will be indicated with ρ i j and its average point of the pair as < ρ i j > for each t m . Note that Ne k (i)is the i-th row of the adjacency matrix: this means that our adjacency matrix A is not nxn but nxm where m is the number of the total neighbours considered until the k-th order.
We select a set of leaders L; in analogy to biological swarms they are a subset, L ,of C 0 whose trajectory is determined. These elements drive the evolution and the deformation of the swarm and they have a "priori" imposed motion and are not influenced by other elements in the swarm.
All the remaining elements of C 0 form the set of the Followers, F. The movement of the follower is guided by the flocking rules. A possible choice of the rules is: Where Int i (t m ) is an interaction term between the point i and its neighbours; one of the possibilities is to choose the interaction term so Where k is the number of neighbours of the point i.
Where the quantities are explained as follow.
is the new position of the point P i . Int i (t m )is the Interaction term among the point P i and all its neighbours in the h-th order shell until the k-th order. N i is the number of the neighbours of the point P i . r , j (i,t m ) is the new position of the j-th neighbours of the point P i at the p-time step t m . α j is the coefficient that modulates the intensity of the "elastic" interaction (W i, j ) and the convergence velocity. It depends by the j-th neighbour's shell order. β l is the coefficient that modulates the intensity of the "alignment" interaction (Y ,i,l ). It depends by the j-th neighbour's shell order; the lis the label of the couple ρ i,l . <> has the meaning of simple average value of the distances, The term W i, j, can simulate an elastic behaviour of the material and the rule contains a dependence on the distances between the point and its neighbours trying to emulate a similarity with Hooke's law. We remark that the proposed algorithm is still different from a particle system with spring interactions and some constrained, because we do not give a constitutive equation for elastic potential. We implement a recursive set of rules in order to define the evolution. It should be noted how in Eqs. 5 and 6 although the interaction is between two particles it occurs for all neighbours, so this would be a 'multi-body' Hooke's law.
The Y i,l term was introduced to handle two problems. The first is to obtain a way to modulate the bending response of the swarm. Indeed, the Y i,l term pushes the point i to be located in the centroid of the considered pair of opposite neighbours. It is an alignment term but we will not always consider the Y i,l term in this paper using other methodologies. The second problem is to avoid the overlap of points during displacement.
In previous articles we have supposed the existence of a frame, but not here. This is because, unlike [17,18], to balance the interaction on the edges of the swarm we do not need all shells to be complete (i.e. each shell has the maximum number of neighbours expected for the considered order). When used, Frame points have their own rule of motion and are moved in a different pseudo-time than the other points in the swarm. In their absence therefore, here all non-Leader points are moved simultaneously and this makes the algorithm leaner, smoother and faster. However, we noticed that the edges of the swarm suffer from local overlaps and oscillations; they cause exotic behaviours that propagate within the whole lattice, especially increasing the term α j Thus, our idea about the incompleteness of neighbouring shells was wrong, and to correct it, we correct the term 1/N i with 1/(2Z k − N k (i)), in Eq. 6, where Z k is the number of points in the shell of order k when it is complete.
The boundary elements may have a larger displacement than the inner followers, due to the lack of points at the edge of the swarm not balancing the interaction. This because sometimes a border point can have a neighbour's number less than k. This reduces the weight of the edge points and coregulates the imbalance. This modified rule will be used thereafter.
Several methods have been used to avoid particle overlap and to overcome the alignment parameter Y i,l ; one of them is the use of intermediate propagation stages where the leaders are frozen and the lattice resettles.
In previous work [19] it has been observed how the deformation information in the swarm propagates through its elements as the pseudo-temporal steps move forward. The strain information will propagate through the swarm proceeding in shells that are proportional to the k-order of the neighbours considered in the interaction.
To avoid point's overlap, we introduce the parameter γ ; It consists in a certain number of settling steps while the leaders are stopped. In the previous model it was necessary to pay attention to the pseudo-velocity of the leaders, assign them an extremely low value (resulting in longer computation times) to avoid the overlap problem (especially in compression simulations) and give enough time to the followers to adapt according to the deformations. We introduced algorithm cycles γ between two pseudo-time steps; in these intermediate cycles the leaders are stopped while the followers continue to resettle. This has the significance of changing the information about the rate of propagation of deformation within the swarm. What changes in the visible motion is the stiffness of the deformation but the final configuration obtained is the same.
With the parameter γ we can modulate how many shells are involved in a pseudo-temporal step; it is then related to the rate of propagation of the movie deformation in the swarm. This concept is analogous to the concept of correlation length in lattice models (such as the Ising model) and the concept of persistence length in polymer models.
It needs to find an optimum between the pseudo-speed (which can be higher) and the increase in machine time due to the increase in computational steps.
Practically, we start with a swarm occupying the nodes of a crystal lattice, move the leaders to an assigned position, and calculate the positions of the followers by means of the above equations.
The code was initially developed in Mathematica by Wolfram research. It was later converted into the Python language to take advantage of the CUDA libraries to parallelize the calculation. The simulation involves particles (or agents) moving in two-dimensional space and interacting with each other according to certain rules or behaviour. The code imports several libraries, including matplotlib, NumPy, math, itertools, SciPy, timeit, numba and tqdm. Another section of the code includes several functions for creating the swarm. These functions include ADJMatrix, which creates an unordered adjacency matrix and is called by the lattice function, and NBORD, which reorders the elements of the adjacency matrix and is also called by lattice. The purpose of these functions is to create a network of connections between the agents in the swarm, allowing them to interact with each other. This is done by calculating the distance between each agent and its neighbours and creating a matrix representing these connections. The remaining code includes functions to simulate the behaviour of the agents in the swarm, updating their positions based on interactions with each other and the environment. The code can also include functions to visualize the swarm as it evolves over time using CUDA. It takes several input parameters, such as step size, start and stop times, number of particles, their initial positions and velocities, and various interaction parameters. The function then prepares the data to invoke a CUDA kernel, which is responsible for the calculation of the swarm's evolution. The kernel uses the input parameters to calculate the new positions and velocities of the particles, based on their interactions with neighbouring particles and the swarm leader. The function then returns the updated positions and velocities of the particles, as well as the time step at which the function finished its calculation. The kernel is structured in such a way that it performs the particle interactions in parallel on several GPU threads.
Results
In this section, we present some comparisons between our tool and the solution obtained by COMSOL software using finite elements analysis on a square lattice.
Let us consider a swarm S constituted by a finite number of elements, and indicate by C 0 the reference configuration of S a time t = t 0 . In C 0 the elements of S occupy the nodes of the lattice. We consider a set of time steps, i.e. a set of ordered discrete values for the time variable t that we will denote with T m = {t 1 , t 2 , t m }. Consider an orthonormal reference system with axes that are parallel to the lattice, whose unit length is equal to the side of the lattice and whose origin coincides with the left bottom vertex of the swarm. To identify the elements of the swarm, we use the coordinate (x i j , y i j )(t m ) of the element occupying the node (i, j) at the time t m . For each element (i, j), we associate a set of neighbours kth order as the -th coordination number of the lattice cell. By this, we define, in case of k = 1 and k = 2, the definition of first and second neighbours, including the neighbours of the first neighbours. Actually, we use a definition of k-th neighbours that is Lagrangian, i.e. the definition has memory of the identity of the neighbours of a given element of S. The reason is that we are interested in describing matter in solid state, where the behaviour of constitutive particles depends on local molecular interactions which are preserved during the evolution of the system. In the next papers, we shall relax this hypothesis.
So, we select a leader (or more than one) that is an element L placed at a node {i, j} of the lattice with an assigned displacement M, i.e.
That means a selection of points of S whose initial coordinates are a subset of C 0 .
In particular, the deformations we have compared are related to a rectangular specimen of size 1.2 m × 0.3 m that undergoes simple tensile stress with and without a slot (dimensions of the slot are 0.9 m × 0.075 m), posed in the centre of the sample. Points on the left side are clamped while the points on the right side of the sample are moved up to the final position which is assigned. Each time unit is composed by two steps. The movement of the leaders and the movement of the followers. No fracture is considered in this case. The simple rule, governing follower's motion is expressed by Eq. 4. This rule can be defined as "barycentre of its neighbours" in case of complete shell that means when all neighbours are considered. The neighbours are determined by the coordination number of the lattice; therefore, the leader's motion implies a displacement of the first layer that propagates in successive "time steps" to the other particles, according to the γ frames used. This means that the displacements, at each time step, involve a larger shell of points until it regards all the lattice points. The quotation marks are to remind that in our model the time is only fictitious because it is related to the iterative process. The model solved with COMSOL treats stationary case; therefore, the time is not important at all. This led us to two problems: how to link our model's parameters (α i , β i , γ ) with the constitutive parameters (e.g. ν, E) and how to simulate higher-order gradients continua.
In this section, we will show how we managed an empirical link between the Poisson coefficient ν with the only parameter α i in order to reproduce with tiny errors a first-order gradient continuum; meanwhile, for second-order gradient continua also β i plays an important role.
After that, we will show a quantitative comparison between our simulations and FEM solutions, an important step forward with respect to the previous papers [64] and [18].
We want to underline that in this paper we are not considering the fracture. Moreover, the dynamic evolution of the swarm is only a mean to obtain the final equilibrium configuration; in the future we shall discuss as to link this evolution with a real motion. Remember as the value of γ does not influence the final configuration as shown in [18], but only the intermediate steps.
First order gradient
In the previous papers [64] and [18], we stated that the interaction term W i,j is sufficient to describe qualitatively a first-order gradient deformation. In this subsection, we want to validate this statement showing that our model can describe first-order gradient deformation also quantitatively, comparing our results with those obtained by FEM analysis.
In [64], we noted that using all the α i with the same value α (Eq. 1) the Poisson effect does not exist In order to obtain Poisson's effect we can assign different values of α for each neighbour in all the shells of the swarm; by this way, we can vary the Poisson effect. In particular, in the case of a square lattice, we assign a value α for neighbours positioned in vertical and horizontal directions with respect to the centre point of the shell, and a different value, α d , for neighbours positioned in the diagonal direction.
Then we analysed the Poisson effect varying α d and keeping α fixed. We observed that for α d = 0 there is no Poisson effect; indeed, the displacement of each particle depends only on the interaction with vertical and horizontal neighbours. Therefore, the vertical and horizontal displacement are uncoupled. Increasing α d , keeping α fixed, the Poisson effect become more and more significant. Furthermore, we noted that varying both α d and α keeping their ratio fixed does not alter the Poisson effect (for values of the parameters in standard range as specified in [64]).
Thus, we hypothesize that only the ratio (α d / α) affects the intensity of the Poisson effect. Particularly, we expect that there is an increasing monotone function ν = ν (α d /α) because we are increasing the horizontal and vertical links. To test this hypothesis, we perform several simulations varying the ratio (α d /α) and evaluating "geometrically" the Poisson coefficient ν for each simulation. This means that we compute as Poisson coefficient the ratio of the transverse strain to the longitudinal strain of the specimen; the transverse strain is considered in the middle of the beam. We obtained the relation ν(α d /α) interpolating the collected data, see Fig. 1 (Consider the value of β = 0. Inverting this relation, we can evaluate the ratio (α d /α) that reproduces the desired Poisson coefficient. Remember as this relationship is purely empirical and there is no theoretical link between our parameter and Poisson coefficient. Forcing the coupling between the ratio (α d / α) to higher values, we are introducing a sort of anisotropy in our model showing Poisson's ratio larger than 0.5.
We underline that this analysis was performed only for swarms without the slot. This is because we try to simulate the same material with different shapes.
Second-order gradient
In previous papers [18,64], we supposed that second-order gradient phenomena could be obtained extending the W i,j interaction to second neighbours. We have observed that the Y i,l interaction is just enough to give good results, and we believe that it is a better candidate to describe second-order gradient phenomena with respect to the use of the second neighbours. Therefore, in the following we will consider only the W i,j and Y i,l interactions extended to the first neighbours in order to simulate the second gradient continua. No second neighbours will be considered in this paper. Since βis linked to three alignment points it is able to introduce non-local interaction albeit in a simple way. For these reasons, we explore the effect of β in reproducing second gradient phenomena.
As in the first gradient case, we performed several simulations varying the parameters (α, α d , β) considering the same βfor all the couples of first neighbours. For each simulation we valued geometrically the Poisson coefficient. In Fig. 1 is shown how β influences ν: fixed the ratio (α d /α), the Poisson coefficient decreases if β increases. A possible explanation could be that increasing β the system becomes stiffer, so Poisson effect is decreasing.
In this case, the relation ν = ν (α d /α, β) is not trivial: different triplets (α, α d , β) lead to the same Poisson coefficient but with different global deformations of the swarm. So, being unable to select the right triplets for now, we proceeded in heuristic way to find the triplets that better fit COMSOL simulations.
Also, in this case the magnitude of the Poisson effect does not depend directly on α, but the ratio (α d / α) affects the intensity of the Poisson effect, as shown in Fig. 1 for different β.
Comparison
To obtain a quantitative measure of the agreement between the results we consider the coordinates of a subset of points, N g , in the deformed configuration and we measure the distances between the corresponding points using the two methods, FEM and our tool. A colour plot with the differences shown as a very good agreement can be reached. Three cases are considered with three different nominal Poisson coefficients (0.2, 0.3 and 0.4). The swarm is composed of about 2500 elements; the pictures show a sub sampling of about 300 elements. The value of β is 0 in the case of the first gradient, while 1.3 in the second gradient model; we found that this value has the better match with the FEM solutions. The value of α/α d is given in Table 2. The motion is characterized by the features presented in Table 1.
We then modulated the parameters of our model so that the results coincided as closely as possible with those obtained by the FEM. We have considered two kinds of sample (with and without slot), three Poisson coefficients and first and second gradient cases. Finding the match was not easy, especially in the case of the second gradient, where the parameters involved are more than two owing to the presence of β. Having obtained very similar results, we can make the following considerations.
The differences between our tool solution and FEM are very small. In Table 2, there are max and mean errors, computed between the coordinates of the lattice, for all the cases studied. Now we will see where the differences are located within the sample. In the case of the first gradient, without slot, these are located in the lateral area (coloured yellow, see Fig. 2). In the centre of the sample, the differences are almost zero, while near the leaders, where the deformations are greater, the area is more critical. However, the location of the differences is not quite the same; increasing Poisson coefficient we have different areas where the mismatch is greater, also if they always are distributed close to the corners as can be seen from Fig. 2. Far from the leaders, where are concentrated deformation owing to edge effect and at the centre of the specimen, we note a uniform deformation, as it is expected in this kind of tests. In contrast, in the case of the second gradient, the main differences are always located in the same position, i.e. about one-quarter and three-quarters of the sample, and in the centre with respect to the vertical axis. See Fig. 3. This area corresponds to the location of maximum second gradient deformation. Also, in the case of the sample with the slot there are differences. It would seem that for the second gradient the difference between our method and the FEM solution is more pronounced close to the slot towards its centre; the slot is a critical area.
Generally, the agreement between models is greater in the first gradient case. But there is also a qualitative fact. Take for example the case of the sample with the slot in the second gradient condition. For high values of Focusing our attention on a particular area of our sample, in the worst case with the highest Poisson's modulus, we can see the quantitative and qualitative differences described above, see Fig. 6. Moreover, look now at this interesting phenomenon, in Table 2. In the FEM analysis, we assigned the Poisson coefficient and performed the simulation; if we measure "a posteriori" the Poisson coefficient from its geometrical meaning, as a variation of the sample width, we get a rather different value from the nominal one used in the calculation. This is because the deformation is not purely uniaxial. Again, the agreement with our model (without a nominally assigned Poisson coefficient) is very good as can be seen in Table 2. In the case of the second gradient, the differences between the nominal and measured values are even greater due to the higher stiffness of the sample.
In Table 2, ν N is the nominal Poisson coefficient that was used in the constitutive continuum equation for FEM analysis, ν S is measured by FEM simulation and ν M is measured by our tool. From Table 2, we note that the trend of these values closes to 0.4 is different going towards its limit corresponding to the incompressible behaviour. Note as β =1.3 in the second gradient and 0 in the first gradient to approximate FEM solutions.
Limit of the model: an example
In this section, we intend to give some insights about the limitations of our model, particularly on the second gradient effects. For this purpose, we consider the same specimen as in the previous cases subjected to a shear test, with Poisson coefficient 0.4. We can see that the deformed external shape of the specimen, solved by means of the COMSOL system, is in very good agreement with our model in both the first and second gradient cases (Fig. 7). However, the internal strain distribution is quite different, especially in the case of the second gradient. In particular, our model shows excessive rotation of the vertical sections, compared with the solution found with the COMSOL system. Probably, the linearity of the rule (Eq. 4) fails to capture some aspects of the strain. In this paper, we stop here but these results deserve more attention in a future paper.
Conclusion
This paper deals with the possibility of describing a deformable body employing the rules governing the behaviour of a swarm of robots.
We have presented a simple algorithm easily adaptable to different problems to describe complex physical effects plausibly. This discrete model is purely kinematic, so only the positions of the particles describing the body appear. Deformation is applied via a displacement assigned to the leading particles, while the follower Fig. 7 Final configuration of the sample undergone a shear test. First gradient case (up) and second gradient case (down). Units on the axes are metres particles move according to "position rules" until a final equilibrium configuration is reached. These displacements are determined by the relative position of the particle's neighbours, as in a swarm of birds. Therefore, the deformed configuration is not computed from Newton's law, but only from the relative positions between the particles in the system, saving machine time for computation since this approach requires only the solution of an algebraic equation. The model is also easily extensible in the case of second gradient continua.
In this work, we compared the results obtained by solving a simple tensile problem by FEM and using our kinematic tool. Different values of Poisson's modulus were used and applied to a compact sample and another with a crack in the centre to compare the solutions at critical points. The results obtained are in good agreement, allowing us to hope that an improvement in the model, together with an understanding of the physics related to its parameters, may lead us to satisfactory quantitative predictions. Having demonstrated that we are able to obtain the same results, it is now necessary to find the physical meaning of the parameters of our model to make predictions of behaviour starting from defined initial conditions. Therefore, the results presented in this paper could be a promising theoretical tool for discrete approximation of continuous deformable bodies with generalized strain energy, encompassing non-Cauchy continua. In our opinion, what is attractive in the proposed model is its remarkable simplicity and flexibility in the choice of the type of interaction considered and the contact actions imposed. It allows us to describe non-local interaction and the use of second neighbours' particles can be interpreted as a second gradient theory, even though, in this case, the comparison results with the FEM approach are not as good as in the case of the first gradient model.
The proposed model can be enriched and generalized in many ways; more complex interactions can be considered, for example, cases where the interaction can be weighted by local density, distance, directionality, or other characteristics. A micromorphic approach is the natural generalization of the tool, where a material body is described as a continuous collection of deformable particles of finite size endowed with internal structure.
In our paper, we showed how flocking rules, used to determine the position in a robot swarm, can describe the deformation of a bidimensional continuum.
Our tool has many similarities to PBD, but unlike the PBD methods used in computer graphics, we do not require knowledge of velocity and we do not introduce any force to account for external actions. We remain in the framework of a fully kinematic system.
In this article only, static final configurations are shown. Here, we are not interested in evolution, but the model is able to show evolution in pseudo-time and specimen failure (see [18]).
We want to emphasize again that our model does not have a physical meaning yet, but we are working on a physical analogy. This is because we aim to create a complex swarm-based algorithm that can describe material deformations and not obtain another algorithm based on solving physical equations. Clearly, to assay the goodness of the model, we have to compare it with already available proven models and keep in mind that it is still developing.
The results presented are interesting but are still at a preliminary stage; however, they show good agreement with the deformations calculated by traditional methods. It is clear that in order to have a predictive theory of behaviour, we should know the displacement of the leaders instead of imposing it a priori, as done in this paper. The most important future development is to understand how the constitutive parameters of the materials are related to the parameter choices we make in our tool so that we can connect with the usual methods of continuum mechanics. However, the tool has shown enough flexibility to suggest that many other behaviours can be described once the association with the constituent parameters is established. Generalization to 3D should present no difficulty, but there is a need for some optimization in the code to keep computation time in the order of seconds using a standard desktop PC.
The model, based on the behaviour of robotic systems introduced here, will need further investigation and generalization in both theoretical and numerical aspects. Still, it promises to be of great interest.
Funding Open access funding provided by Ente per le Nuove Tecnologie, l'Energia e l'Ambiente within the CRUI-CARE Agreement.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. | 10,749.2 | 2023-04-10T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
Repositioning Technique Based on 3D Model Using a Building Shape Registration Algorithm
The demand for 3D spatial data and effective dataset construction has been increasing. However, the construction of 3D spatial datasets is more time-consuming and costlier than that of 2D spatial data. In addition, maintaining 3D city models up to date long after their initial construction is difficult. In this study, we developed a method of updating 3D building information within a relatively short time frame using highly accurate 2D building information. This method can be used to correct 3D building information automatically. Unmanned aerial vehicle (UAV) imagery of the study area was obtained, and a 3D model was developed using commercial software. Subsequently, the 3D model information was mapped onto the 2D information. After transforming the paired objects into point data at a set interval, registration parameters were calculated by applying the iterative closest-point technique. The calculated parameters were used to reposition the 3D model, enabling the creation of a model that overlaps with more than 98% of the existing spatial information data. Thus, it was confirmed that 3D building models can be produced without ground control points and can be readily updated at low cost.
Introduction
Owing to rapid urbanization worldwide, the rate of development of smart cities, which can overcome various problems associated with urbanization through the incorporation of new information and communication technologies, has been increasing. Such cities have been attracting global attention. Therefore, the demand for 3D city models for the integration of various types of city information and as a basis for visualization has also been increasing. Unlike a conventional 3D map, a 3D city model elucidates the spatial characteristics of objects in the city, thereby facilitating the integration and visualization of various types of city information. A 3D city model represents essential spatial data infrastructure that can be used for urban management such as urban planning, traffic control, disaster management, and change detection. (1,2) A 3D building model, a component of the 3D city model, has conventionally been modeled and textured on the basis of aerial images. However, this process is costly and time-consuming; therefore, the model is typically not updated after the initial construction. To address this limitation, various studies have been conducted to construct 3D models at low cost using unmanned aerial vehicle (UAV) images. To evaluate the quality of UAV images and determine their potential as alternatives to conventional aerial images, the accuracies of the images or 3D models were analyzed using image acquisition parameters such as the altitude, overlap degree of images, and number of ground control points (GCPs). (3)(4)(5)(6) In addition, the results of 3D model construction based on the use of multi-directional images whose resolution exceeds that of aerial images were examined. (7,8) A method for modeling multiple buildings in a large area using UAV imagery with a relatively narrow shooting range was established. (9) Furthermore, following the increasing demand for high-quality 3D building models, a technique to determine factors that lead to the loss of details such as occlusion and distortion has been developed. (7) Despite extensive research, UAV imagery has limited usefulness for the construction of 3D building models. UAVs use small sensors whose accuracy is lower than that of aircraft, resulting in the acquisition of images with low positional and attitudinal accuracy. This necessitates GCP measurements, increasing the time and cost required to build 3D models. To overcome these limitations, studies have been conducted to improve the accuracy of UAV imagery without using GCPs. A micro-electromechanical system (MEMS)-type sensor with an integrated global navigation satellite system (GNSS) and inertial sensors, that is, a real-time kinematic (RTK) GNSS receiver, was installed on a UAV, and images were directly georeferenced using location and altitude data obtained from the installed sensor or receiver. (10,11) In addition, a method that can be used to improve the positional accuracy by post-processing the GPS data was established. (12) However, these methods require the installation of additional expensive or heavy equipment on the UAV, necessitating post-processing and consequently increasing the processing duration.
To overcome the above-mentioned limitations, in this study, we developed a method of correcting positions based on existing high-quality 2D building data. This method requires neither GCP measurements nor the installation of additional sensors. UAV images of the study area were acquired, and a 3D model was created without GCPs. The 3D model was flattened, and the same object was searched by comparing the positions using the 2D building as a reference. A dotted line was drawn along the outline of the paired datasets, and the building shape registration (BSR) coefficient was calculated by applying the iterative closest point (ICP) technique to point cloud data registration. Subsequently, the calculated BSR coefficient was used to reposition the 3D model, and the accuracies of the model and the 2D building were compared.
Definition of reference data for position correction
The building data of the road name address map provided by the Ministry of Interior and Safety, South Korea, were used as the target data, which served as reference data for position correction. Digital maps have been implemented since 2014 to indicate road names, building numbers, and detailed addresses assigned under the Road Name Address Act. The digital map contains data regarding the entire country, which is managed by the government, and is drawn using 25-cm-level digital aerial orthoimages. It consists of 11 types of data including those regarding buildings, roads, and administrative districts. The constructed data are provided as a vector file in .shp format in the open data system (https://www.juso.go.kr) used for downloading digital maps; the system can be accessed through the internet and easily integrated with other data. Figure 1 shows the results obtained upon searching 'building' in the digital map of the Yeouido area of Seoul, on which a transparent UAV image is superimposed to indicate the experimental area.
Acquisition of UAV images
Positional errors may occur while operating UAVs owing to fuselage calibration in the case of multiple flights. We obtained images during a single flight. The target area was Yeouido, Seoul. The UAV image area comprised an apartment complex consisting of five buildings, schools, and school annex buildings. Roads and a playground were located between the apartment complex and the schools. UAV images were obtained on February 28, 2019, to create a model for the Yeouido Elementary School and Middle School buildings as well as the apartments in the area. Images were obtained at an altitude of ~150 m. The area (~288000 m 2 ) was divided into nine strip sections, and 147 images were acquired (Fig. 2). The K-mapper X1 developed by SisTech, South Korea, and a mounted sensor (Sony A6000) were used for image acquisition (Fig. 3). The specifications of the camera and UAV are listed in Table 1, and the conditions used for acquiring the actual images are listed in Table 2.
Based on the specifications of the UAV, the available flight time was ~40 min. However, the images were obtained in only ~13 min because they were acquired during the winter, which led to the rapid exhaustion of the equipment battery. In addition, in the case of oblique imagery, creating a model with textures on the side surface is favorable. (13) To minimize the number of external variables, images were obtained in the vertical direction only, thereby reducing the acquisition time. To examine the image acquisition results, orthomosaic images were generated from 147 images using the commercial software ContextCapture (Bentley Systems). In Fig. 2, an orthomosaic is used as the background, and the image acquisition positions are expressed as points. In addition, information that can estimate the accuracy of the image and location during image processing using ContextCapture is summarized in Table 2. An average of 1970 tie points were identified in each image, and a total of 46381 tie points were identified; the position uncertainties in the x and y directions were 0.0126 and 0.011 m, respectively.
2D orthophotos based on UAV images
Because the 3D model was created using UAV imagery without GCPs, the positional accuracy of the images corresponds to the accuracy of the 3D model. Therefore, the initial quality of UAV images is an important factor in model accuracy. To examine the accuracy, 2D orthophotos were created from the acquired images without performing position correction and then superimposed on the digital map (reference data) for comparison (Fig. 4). Figure 5 shows enlarged views of one of the schools (Building 1; upper left) and an apartment (Building 7; lower right). Most buildings partially overlapped the reference polygon, thereby necessitating repositioning.
3D model creation using UAV images
Seven buildings in the test area were modeled using software developed by 3DLabs Inc., South Korea (Fig. 6). A commercial program that is commonly used to process UAV images was used to create a model for the entire photographed area. Meanwhile, the 3DLabs software was used to extract planes parallel to the x-y, y-z, and x-z planes constituting the point cloud; separate models were created for each building object based on the extracted planes. (14) In this study, an efficient technique that can be used to update the 3D building model was identified, and the building models were repositioned using an existing high-quality 2D building dataset. Therefore, a separate model was used for each building.
BSR algorithm
Even when a detailed 3D model is constructed, its applicability decreases when it does not accurately overlap with previously established spatial information including 2D buildings, roads, and land parcels. In other words, it is important to improve the absolute positional accuracy of the 3D model. However, in terms of applicability, it is sufficient to improve the accuracy to the level where model data overlap the existing spatial data precisely. Accordingly, we proposed a BSR algorithm that can be used to reposition 3D buildings modeled without GCP with high accuracy (Fig. 7). The BSR algorithm comprises a data preprocessing process for point-set extraction and the calculation of BSR coefficients based on the ICP algorithm. The horizontal and rotational variations of the calculated BSR coefficient were used to reposition the 3D model. Because the vertical position of the 3D building can be displaced using the elevation of the corresponding position upon the determination of the horizontal position, only the horizontal positions of the 3D buildings were considered in this study.
Data preprocessing
To perform a registration operation based on the ICP algorithm, preprocessing was performed, during which two types of data were mapped onto the same type of point-set data. For the 3D model and 2D building object, that is, the target of position correction and the reference, respectively, point sets were created along the profiles and outlines of the polygons. In general, 3D building data in the spatial information field were modeled using a polygonal method comprising a mesh with a set of points, lines, and surfaces. These data contained normal information, that is, the vertical information regarding each surface that constituted the mesh. Moreover, profile information, that is, flattened model data, were generated using normal information and by extracting all surfaces except for the surfaces of the buildings that were perpendicular to the ground surface. The algorithm used for this preprocessing step is shown in Fig. 8. Point sets were generated by dividing the outline of the created profile into points at 0.5 m intervals. Because the original outlines of the polygon can be used in a 2D building model, additional processing is unnecessary and point sets can be generated by categorizing the original data into points with the same intervals as those used for the profile. The procedure for converting the 3D model data to point sets is shown in Fig. 9.
Calculation of BSR coefficient
The ICP algorithm is the most widely used algorithm for automatic registration of a point cloud acquired using a 3D scanner. (15) In the basic ICP algorithm, point pair matching, outlier rejection, error minimization, and transformation are iteratively performed after point filtering and neighborhood selection until the distance between corresponding points falls within a threshold. The performances of various algorithms have been improved by initializing the image alignment, (16,17) reducing the point weight, and enhancing the computational complexity; (18,19) however, the basic procedure remains unchanged.
This algorithm exhibits superior performance when registering two point clouds with good a priori alignment and rigid transformation. (20) When the a priori alignment is poor, an incorrect registration coefficient is obtained owing to the local minimum error. (21) The position error between the 3D model and 2D building reached 30 m, and the rotation error was within 20°, indicating a priori alignment suitable for the application of the ICP algorithm. In addition, although the absolute positional error of the initial 3D model was large, the distance between the vertices constituting the model was relatively accurate. Therefore, whereas the 3D model and 2D building data were acquired using different methods and equipment, they were based on the same scale. Owing to the use of the same scale, a rigid-body transformation that only considers rotation and displacement is applicable. Therefore, we used the ICP algorithm to calculate the BSR coefficient for the repositioning of the 3D building model relative to the 2D reference object. Table 3 presents the main elements required for ICP implementation for matching two unorganized point sets. We used all sampling points and point-to-point matching because we used point sets (not faces), and it is important to match their shape. As one of the aims of this study is to correct the position of the 3D model, we used the distance weight. Furthermore, because both datasets comprise points extracted from polygons, outliers were not considered. The algorithm for calculating the BSR coefficient, which is used to correct the position of the 3D model (source data) based on the elements defined in this manner, is shown in Fig. 10. Table 4 details the specific parameters of the ICP-based algorithm.
Data pairing
The superimposition of the 3D model created without GCPs onto the national standard 2D data is shown in Fig. 11. Before the calculation of the BSR coefficient using the ICP algorithm, a pairing process was essential. Each red line in Fig. 11 extends the center point between each paired object by 1:1. When each object is rectangular, the center point is the intersection of the inner diagonal lines. When the object is not rectangular, the center point is selected by considering the shape to be rectangular by using the outermost lines in four directions or virtual lines connecting the outermost points, as shown in Fig. 12 for Building 7. Figures 4, 5, and 11 show that when comparing 3D building objects with reference 2D polygon data, most building objects are oriented toward Buildings 2 and 3 in the image. Repositioning was performed by applying the BSR algorithm to each of the seven paired building models.
Calculation and application of BSR coefficient
The paired 3D model and 2D building information were set as the source and target data, respectively, and the BSR coefficient of each building was calculated using the algorithm discussed in Sect. 3.3 (Table 5). To simplify the comparison of the BSR coefficients of the seven buildings, the coefficients presented as a matrix were mapped onto horizontal and rotational variations ( Table 6). All seven buildings exhibited marked overlapping, with the rotational variation between the registration source data and target data being within 1° clockwise. However, considerable horizontal variation was observed with maximum, minimum, and average differences of 6.98, 2.53, and 4.76 m, respectively, between the target and reference data. In particular, when the direction was determined on the basis of the elements of the BSR coefficients in Table 5, the values ranged from −7.80 to 2.70 m in the x direction and −1.21 to −6.26 m in the y direction, indicating large variations between the buildings. Because 3D models created using images acquired from the same mission on the same equipment yield different positional errors, each optimized matching coefficient must be calculated. Repositioning was performed by applying the seven calculated BSR coefficients to the corresponding 3D buildings. Figure 13 shows the superimposition of the 3D building model onto the 2D building object, that is, the reference data, before and after repositioning.
Evaluation of the result of position correction
To evaluate the result of the position correction of the 3D model, the improvement in relative positional accuracy was examined with reference to the highly accurate 2D reference polygons.
In general, because the profile of a 3D model and the geometry of the 2D polygon differ, it is difficult to specify an identical point in the two and make comparisons. In addition, because the difference in the distance between the two points reflects only the difference in a plane, it is difficult to comprehensively determine the magnitude of the correction in terms of the rotation. Therefore, the result of position correction was evaluated with reference to the 2D polygons, and not based on the comparison between the two equivalent points, using the registration improvement rate of the 3D model profile. The registration improvement rate was calculated using the difference between the overlap ratios of the 3D model and 2D polygons before and after repositioning: where P b is the profile polygon of the 3D building before repositioning, P a is the profile polygon of the 3D building after repositioning, T is the 2D polygon used as the repositioning target, and R imp is the rate of 3D building registration improvement. The registration improvements for the seven building models are shown in Table 7. The maximum, minimum, and average overlap ratios of the 3D model superimposed on the 2D reference polygon before repositioning were 93.3, 49.7, and 72.7%, respectively. However, after repositioning, the maximum, minimum, and average overlap ratios were 99.7, 97.9, and 98.86%, respectively, with an average improvement of 26.14%. These results indicated that the average overlap rate was improved by 26.14% through position correction. This also confirmed that the 3D model was effectively positioned through the building shape matching algorithm proposed in this study.
Conclusions
The procedure of geometry-based position correction proposed in this study is as follows. First, the outlines of the profiles created on the basis of the 3D model (target for position correction) and 2D object (reference data) should be mapped onto point sets with equal intervals. Subsequently, the registration BSR coefficient for each building for which the distance between two point sets is minimized should be calculated using the ICP algorithm and then applied to the respective 3D model to complete the repositioning.
To test the algorithms proposed, images were acquired through the sole use of sensor information from UAVs without GCPs, and seven 3D buildings were modeled. Then, the BSR coefficients for each building were calculated using ICP algorithms, and the position was corrected by applying the coefficient to each 3D building. The effects of repositioning were identified by comparing the overlap ratio of the 2D reference object with the x-y plane of the 3D model before and after repositioning. The results show that the overlap ratio of the seven tested models improved by ~26.2%, from ~72.7% before repositioning to ~98.9% after repositioning. In particular, the minimum pre-repositioning overlap ratio was 49.7% but it was 97.9% or higher for all buildings after repositioning.
The findings from this study are as follows. 1) The proposed positional correction algorithm allows the quick and low-cost fabrication of 3D models without GCPs, as well as repositioning.
2) Using periodically acquired UAV images, updates to the 3D model can be automated, thereby ensuring up-to-date urban models. 3) Even buildings modeled on the same imaging strip image have different positional errors and, therefore, individual BSR coefficients for each building model are required. Table 7 Overlap ratio of the 3D model onto the 2D target data before and after repositioning.
Building No.
Overlap ratio before repositioning (%) Overlap ratio after repositioning (%) Improvement rate (%) In this study, 2D data were selected as reference data for repositioning because they are updated on a monthly basis. However, in some cases, 2D reference data corresponding to the 3D model may be missing or the geometry may vary significantly. In such cases, the positions of all buildings can be corrected by applying the registration coefficient of a building adjacent to the applicable building or using the average registration coefficient of the applicable area. | 4,729.2 | 2022-01-05T00:00:00.000 | [
"Engineering",
"Computer Science",
"Environmental Science"
] |
BAIT: A New Medical Decision Support Technology Based on Discrete Choice Theory
We present a novel way to codify medical expertise and to make it available to support medical decision making. Our approach is based on econometric techniques (known as conjoint analysis or discrete choice theory) developed to analyze and forecast consumer or patient behavior; we reconceptualize these techniques and put them to use to generate an explainable, tractable decision support system for medical experts. The approach works as follows: using choice experiments containing systematically composed hypothetical choice scenarios, we collect a set of expert decisions. Then we use those decisions to estimate the weights that experts implicitly assign to various decision factors. The resulting choice model is able to generate a probabilistic assessment for real-life decision situations, in combination with an explanation of which factors led to the assessment. The approach has several advantages, but also potential limitations, compared to rule-based methods and machine learning techniques. We illustrate the choice model approach to support medical decision making by applying it in the context of the difficult choice to proceed to surgery v. comfort care for a critically ill neonate.
We present a novel way to codify medical expertise and to make it available to support medical decision making. Our approach is based on econometric techniques (known as conjoint analysis or discrete choice theory) developed to analyze and forecast consumer or patient behavior; we reconceptualize these techniques and put them to use to generate an explainable, tractable decision support system for medical experts. The approach works as follows: using choice experiments containing systematically composed hypothetical choice scenarios, we collect a set of expert decisions. Then we use those decisions to estimate the weights that experts implicitly assign to various decision factors. The resulting choice model is able to generate a probabilistic assessment for real-life decision situations, in combination with an explanation of which factors led to the assessment. The approach has several advantages, but also potential limitations, compared to rule-based methods and machine learning techniques. We illustrate the choice model approach to support medical decision making by applying it in the context of the difficult choice to proceed to surgery v. comfort care for a critically ill neonate.
Keywords decision aids, decision models, decision support systems, decision support techniques, end-of-life decision, necrotizing enterocolitis Date received: February 12, 2021; accepted: February 13, 2021 Medical decision making is characterized by a high degree of complexity, uncertainty, and time pressure. Many decisions also entail ethical dilemmas. As a consequence, a variety of medical decision support systems have been developed. [1][2][3][4][5] These can be classified into knowledge-based and non-knowledge-based systems. 6 The former require that experts perform the very difficult task of explicating their tacit knowledge into deterministic rules; furthermore, such rule-based systems struggle with capturing the subtleties that are present in real-life contexts. Non-knowledge-based systems require vast amounts of historical data, on which machine learning models are trained to extract implicit relations; these models are opaque, hampering interpretability and accountability.
We present a third way to capture and codify medical expertise (which we colloquially define here as ''knowing what to do in a certain situation, and being able to explain why'') and to make it available to support medical decision making. Our approach, called Behavioral Artificial Intelligence Technology (BAIT), uses choice analysis techniques traditionally employed to identify preferences of large groups of consumers, citizens, or patients and to make predictions regarding their future choice behavior. [7][8][9] We reconceptualize these econometric techniques and put them into practice for codifying the expertise of small groups of experts and supporting their decision making. The objective of BAIT is to make accessible to an expert or group of experts the combined expertise of their peers in the context of a particular decision problem. To illustrate the workings of BAIT, we focus on one of the most difficult (moral) choices in medicine: to proceed to surgery v. comfort care for a critically ill neonate. 10,11 Methods How Does BAIT Work?
First, together with 2 to 4 experts, the expert decision is specified (e.g., perform surgery or not in the context of a particular medical situation and patient profile), and factors are identified that presumably play a role in making that expert decision (e.g., gestational age). Then, for each factor, relevant ranges are determined indicating minimum and maximum values of the factor-values (e.g., 24-30 weeks for gestational age). Constraints are specified to preclude combinations of factor-values that are impossible or highly unlikely to occur in real life. Note that some factors may require no additional investigator manipulation (gestational age, sex) while some would require a predetermined way of being defined (e.g., progress since birth pre-necrotizing enterocolitis [NEC]).
Second, the structure of the choice model is determined; for example, it is decided if nonlinear weights are to be accommodated (e.g., a decreasing or increasing marginal importance) and/or interaction effects (i.e., an additional positive or negative weight assigned to a particular combination of factor-values). Depending on the situation, different choice model types can be specified such as binary or multinomial, nested, or (panel) mixed logit models 12,13 or models based on alternative behavioral theories such as regret-minimization or taboo tradeoff-aversion. 14,15 Third, a choice experiment is designed and implemented, in which the group of experts is invited to make a series of hypothetical choices based on scenarios mimicking the real decision situation. Different types of choice experiments can be used, 16 depending on the specificities of the decision. In our case, a so-called single conjoint is used, which asks respondents a yes/no question in the context of a specific patient-context profile. Another option could be a choice from a multinomial set of candidates (e.g., in the context of triage). Each scenario is specified in terms of a different combination of values taken from the prespecified decision-factors and ranges, taking into account relevant constraints. Crucially, using so-called efficient design techniques, scenarios are constructed such that each choice generates a maximum amount of information. 16 Fourth, the observed choices are used to estimate the importance weights of all factors, including their signs (positive or negative) and any nonlinear curvatures (e.g., concavity or convexity), using maximum likelihood techniques. 12,13 This process involves comparing the model predictions to the actual choices made by the experts. By iteratively adjusting the weights embedded in the model, increasingly accurate choice probability predictions are generated, until no further improvements can be made. The final model's empirical performance is tested by means of various model fit metrics.
In a fifth step, results are presented back to the experts. Factor weights are visualized, showing how each factor contributes to the experts' decisions in the experiment. In addition, the choice model equipped with the estimated weights is used to assess particular artificial choice situations, including cases that were not included in the choice experiment. Generated assessments take the form a probability statement-for example, ''The probability that an expert that is randomly sampled from the expert group would recommend (to the patient's parents) to perform surgery on a patient with this profile equals 18%.'' In conjunction with the probabilistic assessment, color coding is used to highlight which factors had a positive or negative contribution to the assessment.
Case: Whether or Not to Operate on a Premature Neonate with NEC NEC is a devastating intestinal disease, mainly occurring in (very) preterm neonates. 17,18 Due to improved survival of the most preterm infants, NEC incidence is rising. 19 For some 30% to 40% of preterm infants with a diagnosis of Councyl, Delft, Netherlands (AtB, NH, CC); Department of Surgery, Division of Pediatric Surgery, University of Groningen, University Medical Center Groningen, Groningen, Netherlands (JH); University of Groningen, University Medical Center Groningen, Beatrix Kinder Ziekenhuis, Division of Neonatology, Groningen, Netherlands (EK); Faculty Technology Policy and Management, Department of Engineering Systems and Services, Delft University of Technology, Delft, Netherlands (CC). The author(s) declared the following potential conflicts of interest with respect to the research, authorship, and/or publication of this article: Annebel ten Broeke, Nicolaas Heyning, and Caspar Chorus are associated with Councyl (the latter two as cofounders), a Delft University of Technology spin-off that develops and commercializes the behavioral artificial intelligence technology (BAIT) that is presented in this article. Jan Hulscher and Elisabeth Kooi declare that no funding was received and that there are no competing interests. The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: Financial support for this study was provided in part by a grant from the European Research Council (ERC-Consolidator Grant BEHAVE, grant 724431). The funding agreement ensured the authors' independence in designing the study, interpreting the data, writing, and publishing the report. NEC, emergency abdominal surgery is necessary. In these cases, children will succumb when surgery is withheld. However, perioperative mortality rates can reach 50%, and long-term morbidity, such as neurodevelopmental deficits and gastrointestinal complications, occurs in over 75%. 20 Each case therefore presents the treating medical team as well as the parents with the dilemma of whether proceeding to surgery will still be in the child's best interest. 18 We focus on the moment where the clinician gives a final recommendation to parents. At this point, parents have developed a preference for surgery or comfort care (or they may be in doubt).
This study was waived for ethical approval by the university medical centre groningen (UMCG) ethical board (METc 2020/245). Note that in the context of this article, the aim of this small-scale case study is to illustrate the workings of BAIT as a technical innovation, as opposed to presenting new insights into the decision making process of local medical professionals regarding their response to NEC cases.
Results
Two pediatric surgeons and 2 neonatologists selected 14 factors with their ranges (see Table 1), which were subsequently combined into 35 choice scenarios (see Figure 1 for an example).
These were assessed by 15 experts (11 neonatologists and 4 surgeons). The estimated model obtained a good level of fit, as indicated by a McFadden's r 2 of 0.32. Choice probabilities predicted by the estimated model closely resemble the observed empirical relative frequencies: the mean absolute deviation equals 4.5 percentage points, implying that, for the average choice scenario, the predicted probability of a recommendation to operate was only 4.5 percentage points higher or lower than the observed relative frequency of choices made by the group of experts. Most factors turn out to have a linear effect on decisions, while some exhibit a nonlinear shape; weights are shown in Table 2. As shown in Figure 2, 5 factors together make up three-quarters of the total importance of all factors combined. Parental preference, ranked fourth, makes up 13% of total importance. Figure 3 shows an example of an assessment generated by the model that was equipped with the estimated importance weights.
Discussion
BAIT presents a clear alternative to conventional approaches. In contrast to rule-based or knowledge-based methods, BAIT uses choices to identify expertise rather than asking experts to explicate their expertise directly. This indirect process is aligned with the notion that humans find it very difficult to explain why they made a certain decision, 21 especially when this involves moral judgment. 22 Also, BAIT results in a flexible model that leads to choice probability predictions rather than attempting to capture the subtle tradeoffs of medical decision making into deterministic rules.
In contrast to machine learning models trained on historic data, BAIT offers a simple and explainable decision model: weights have an unambiguous meaning, and with help of color coding, it becomes clear immediately which combination of factors led to a particular decision. Furthermore, the choice experiment, being based on hypothetical scenarios, avoids data protection-related issues that may surface in the context of historic data.
In terms of a potential limitation of BAIT compared to rule-based expert systems such as clinical practice guidelines, we note that our approach relies on the ability of those experts who participate in the choice experiment, to assess, interpret, and balance a variety of factors and their potential risks for patient health and well-being. In light of the fact that even experienced medical experts may have difficulties assessing such risks, [23][24][25] this may be considered a tall order. In a worst-case scenario, any misjudgment (or skew within the range of acceptable judgments) captured by BAIT in the choice experiment phase might carry over in the subsequent real-life decision making of experts, as their choices could be unduly influenced by the output of the decision support system. Indeed, more classical rule-based approaches such as the ones embedded in conventional guidelines and protocols may be considered less vulnerable to flaws and cognitive biases from the side of individual experts. We see several ways in which this potential problem of bias carryover can be reduced. First, the selection of experts participating in the choice experiment should be done very carefully; a tradeoff needs to be made here, between the need to select (only) experts with very high levels of expertise, while also ensuring that the group is large enough to avoid a situation where one expert's misjudgment has an outsized effect on the model. Second, when the size of the pool of experts allows for this, it may be recommended to have experts perform the choice experiment in pairs (e.g., in our case, coupling a neonatologist to a surgeon), to allow for a discussion and balancing of opinions. Third, the very nature of BAIT suggests that in most contexts, it should best be used as a decision support system, as opposed to offering guidance. 26,27 Concretely, BAIT is able to predict, in a given real-life choice situation, what decision would be made by which share of the expert pool. This, we expect, is very helpful to experts, but it does not equate to the use of a protocol or a set of rules and guidelines. In fact, such protocols and guidelines in Figure 2 Relative importance of decision criteria.
Figure 3
Example of an assessment generated by the model. Green color coding: the value for this factor contributed positively to the assessment. Red color coding: the value for this factor contributed negatively to the assessment. No/ transparent color coding: the value for this factor did not contribute positively or negatively to the assessment.
our view can and should coexist with BAIT, the former being more prescriptive (focusing on what the individual expert ''should'' do) and the latter being more descriptive (focusing on what the pool of experts ''would'' do). Further research into the potential use of BAIT in real life is certainly recommended, to shed more light on this subtle distinction between decision guidance and decision support and how it affects expert decision making and learning in day-to-day medical practice. Future research should focus on dynamic applications where the system is being updated with each new ''reallife'' choice made.
Another interesting avenue for further research is to study the transferability of expertise from one group of experts to another (e.g., in a different hospital). It should be noted here that our current application focused on a local and rather homogeneous group of experts; while this ensures that the resulting model is representative for the local situation, it does create potential risks of tunnel vision and bias carryover (see above). The application of BAIT on a larger scale (e.g., involving a choice experiment that is implemented among peers nationwide) could reduce such risks and hence deserves to be looked at in future studies. | 3,591 | 2021-03-30T00:00:00.000 | [
"Medicine",
"Economics",
"Computer Science"
] |
A Rare Case of Myocarditis After the First Dose of Moderna Vaccine in a Patient With Two Previous COVID-19 Infections
Myocarditis is the inflammation of the cardiac muscle caused by a variety of factors ranging from infections to autoimmune diseases. Most cases of vaccine-induced myocarditis occur after the second dose of vaccination; however, a few cases have been reported following the first dose of vaccination with or without previous coronavirus disease 2019 (COVID-19) infection. A case of myocarditis occurring about three weeks after the first dose of the Moderna vaccine has been reported in a patient with one previous COVID-19 infection. However, there have not been any documented cases of myocarditis after the first dose of the Moderna vaccine in a patient with two prior COVID-19 infections. Our index patient had already experienced two COVID-19 infections in the past and was diagnosed with myocarditis eight hours after receiving the first dose of the Moderna vaccine. The susceptibility to developing this likely stems from the possible production of antibodies to the viral antigen from previous COVID-19 infections. Furthermore, the fact that our patient developed symptoms eight hours after receiving the vaccine suggests a possible additive effect of antibodies produced from the two previous COVID-19 infections. This case report suggests that individuals repeatedly infected with COVID-19 may be at increased risk of myocarditis following the administration of the Moderna vaccine.
Introduction
Myocarditis is defined as the inflammation of the myocardium, occurring as a result of some pathologic immune changes in the heart [1][2][3]. These changes include alterations in the number and subtypes of lymphocytes, macrophages, and antibodies [3]. The long-term effect of these processes compromises the structure and function of the cardiomyocytes, thereby leading to impairment in either the contractile or conducting system of the heart [3]. The prevalence of myocarditis has been estimated to range from 10.2 to 105.6 per 100,000 people worldwide, and it is predominantly found among young adults of the male sex [2,4]. Myocarditis can be caused by both infectious and non-infectious agents [1,3,5], presenting with either focal or diffuse cardiac muscle involvement [3]. It has also been reported to occur as a rare complication of various coronavirus disease 2019 (COVID-19) vaccines [1,2,5,6], and among those who received the Moderna vaccine, it has been observed mostly after the second dose [6]. Also, to date, there have been a few reports of myocarditis after the first dose of the Moderna vaccine, with or without previous COVID-19 infections.
Case Presentation
An 18-year-old male with two previous episodes of COVID-19 infections, eight months and two months prior, presented with a complaint of sudden onset of chest pain that had started about eight hours after receiving the first dose of the Moderna vaccine. The patient reported it to be located in the center of his chest and rated it as a 7/10 in its worse form. He described it as being sharp, pressure-like, and squeezing, radiating to the back and worse with inhalation and coughing. The chest pain was associated with fever, cough, shortness of breath, fatigue, diarrhea, and vomiting. The patient reported a history of nose bleeding that had started about the same time and resolved without any intervention. Also, his past medical and surgical history was non-contributory, and he was not aware of any family history of cardiac diseases. On examination, the patient appeared to be an obese male in acute painful distress with a temperature of 98.2°F , blood pressure of 128/88 mmHg, pulse rate of 88 beats per minute, respiratory rate of 18 cycles per minute, and oxygen saturation of 98% on room air. The precordium was quiet, and first and second heart sounds were heard with no added sounds. A list of differential diagnoses considered in the workup of this patient included ischemic heart disease, pleurisy, pericarditis, pulmonary embolism, pneumonia, and COVID-19.
Cardiac enzymes were noted to be elevated, with a troponin of 1.39 ng/mL (0.00-0.01), total creatine kinase of 900 U/L (39-308), creatine kinase-MB of 73.3 ng/mL (0.0-3.6) and a creatine kinase-MB index of 8.1 (0.0-2.8). A lipid panel was performed, with no abnormal results noted. The EKG showed normal sinus rhythm with ST-segment elevation in some lateral leads ( Figure 1). White blood cell count, C-reactive protein, and D-dimer were elevated at 11.0 K/ul (5.0-10.0), 160 mg/L (0-10), and 0.70 ug/mL (0.00-0.40) respectively. The echocardiogram revealed an ejection fraction (EF) of 60-65% and a right ventricular systolic pressure (RVSP) of 30-35 mmHg. Chest X-ray and CT angiography (CTA) of the chest were negative for any acute changes. He tested negative on the COVID-19 molecular testing while on admission. Cardiac MRI revealed patchy mural delayed myocardial enhancement with relative sparing of the endocardium within the septum, and inferior and lateral walls at the cardiac base ( Figure 2). The patient was managed with steroids and Tylenol as needed for pain management. He was also placed on telemetry for rhythm monitoring. Cardiac enzymes were monitored over the course of the hospitalization and were seen to have plateaued. Therefore, with the resolution of symptoms and the decreasing need for analgesics, the patient was discharged home. He was followed up a month later as an outpatient and was found to be stable. The patient will be seen in the clinic in six months and subsequently every year, provided he continues to remain at baseline.
Discussion
The development of COVID-19 vaccines by various pharmaceutical companies has resulted in a reduction in morbidity and mortality from COVID-19 infection [6]. While these vaccines have been associated with some complications, their benefits have been proven to outweigh the risks [6]. Though rare, myocarditis has been reported following COVID-19 vaccination [2]. The United States Centers for Disease Control and Prevention has reported a prevalence of about 12.6 cases per million doses of second-dose mRNA vaccine in individuals aged 12-39 years, with a male predominance [2]. Although the exact mechanism behind COVID-19 vaccineinduced myocarditis is not well understood, a number of theories have been proposed [2]. Some of these postulations include molecular mimicry between the mRNA vaccine spike protein and self-antigen, immunological response to the mRNA vaccines, triggers from already dysregulated immunologic pathways, and poorly regulated expression of cytokines [2]. The basis behind the male sex predominance may be related to the difference in immune response with sex hormones and myocarditis and the fact that cardiac disease in women is usually under-diagnosed and under-reported [2]. We presented a case report on myocarditis as a rare complication of COVID-19 vaccines.
Our patient was an 18-year-old male with two previous episodes of COVID-19 infection who presented with chest pain, fever, cough, and shortness of breath after receiving the first dose of the Moderna vaccine. He also had other symptoms such as fatigue, nausea, diarrhea, and nose bleeds, which may likely have been the other side effects of the vaccine. However, on presentation, he had an elevated cardiac enzyme with the EKG showing some ST-segment elevations in some of the lateral leads. Cardiac MRI was suggestive of myocarditis. Similar cases have been reported with a majority of them occurring after the second dose of vaccination [6]. However, only a few cases have been observed following the first dose of the Moderna vaccine with or without previous COVID-19 infections [7,8].
A systematic analysis was conducted recently on COVID-19 vaccines and myocarditis. A pooled analysis of the available data showed that COVID-19 vaccine-induced myocarditis was more common in young males, with most of the cases occurring after the second dose and most of them resolving after six days [6]. Findings from the study revealed that 60%, 33%, and 7% of the cases followed the Pfizer-BioNTech vaccine, Moderna vaccine, and Johnson and Johnson vaccine respectively [6]. Also, it was found that while about 67% of the cases occurred after the second dose of the Pfizer-BioNTech vaccine, all the cases of myocarditis associated with the Moderna vaccine occurred after the second dose [6]. However, there was no case of vaccine-induced myocarditis in a patient with a previous COVID-19 infection in that study [6].
Recently, a study on the association between myocarditis and COVID-19 vaccination reported that one out of the four cases have experienced myocarditis 25 days after the first dose of the Moderna vaccine with one prior COVID-19 infection [7]. However, our patient experienced myocarditis eight hours after receiving the vaccine, after two prior COVID-19 infections. Thus, this is the first reported case of vaccine-induced myocarditis following the first dose of the Moderna vaccine in a patient with two previous COVID-19 infections. The susceptibility to developing myocarditis after the first dose of vaccination is likely due to the antibodies developed against the viral antigen from previous infections. The fact that our patient had two different episodes of COVID-19 infection may account for the earlier onset of the myocarditis after the vaccination compared to others. This likely indicates a possible additive effect of antibodies produced from the previous COVID-19 infections. Hence, there is a need for an observational study to better elucidate this assumption.
Conclusions
COVID-19 vaccines can lead to myocarditis in rare cases. Although most cases occur after the second dose of vaccination, a high index of suspicion is needed when patients with previous COVID-19 infections present with symptoms suggestive of myocarditis, even after the first dose of vaccine. Also, the onset of myocarditis within 24 hours post first dose of vaccination should be anticipated in individuals with two or more previous COVID-19 infections.
Additional Information Disclosures
Human subjects: Consent was obtained or waived by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work. | 2,336.6 | 2022-05-01T00:00:00.000 | [
"Medicine",
"Biology"
] |
The economic impact of a local, collaborative, stepped, and personalized care management for older people with chronic diseases: results from the randomized comparative effectiveness LoChro-trial
Background Within the ageing population of Western societies, an increasing number of older people have multiple chronic conditions. Because multiple health problems require the involvement of several health professionals, multimorbid older people often face a fragmented health care system. To address these challenges, in a two-group parallel randomized controlled trial, a newly developed care management approach (LoChro-Care) was compared with usual care. Methods LoChro-Care consists of individualized care provided by chronic care managers with 7 to 16 contacts over 12 months. Patients aged 65 + with chronic conditions were recruited from inpatient and outpatient departments. Healthcare utilization costs are calculated by using an adapted version of the generic, self-reporting FIMA©-questionnaire with the application of standardized unit costs. Questionnaires were given at 3 time points (T0 baseline, T1 after 12 months, T2 after 18 months). The primary outcome was overall 3-month costs of healthcare utilization at T1 and T2. The data were analyzed using generalized linear models with log-link and gamma distribution and adjustment for age, sex, level of care as well as the 3-month costs of care at T0. Results Three hundred thirty patients were analyzed. The results showed no significant difference in the costs of healthcare utilization between participants who received LoChro-Care and those who received usual care, regardless of whether the costs were evaluated 12 (adjusted mean difference € 130.99, 95%CI €-1477.73 to €1739.71, p = 0.873) or 18 (adjusted mean difference €192.99, 95%CI €-1894.66 to €2280.65, p = 0.856) months after the start of the intervention. Conclusion This study revealed no differences in costs between older people receiving LoChro-Care or usual care. Before implementing the intervention, further studies with larger sample sizes are needed to provide robust evidence on the cost effects of LoChro-Care. Trial registration German Clinical Trials Register (DRKS): DRKS00013904, https://drks.de/search/de/trial/DRKS00013904; date of first registration 02/02/2018. Supplementary Information The online version contains supplementary material available at 10.1186/s12913-023-10401-1.
Introduction
Against the background that older people often have chronic, mostly multiple illnesses and these are accompanied by physical, mental, and functional limitations, a new local, collaborative, stepped, and personalized form of care, the LoChro-Care intervention, was developed and evaluated [1][2][3][4][5][6].LoChro-Care was designed to improve patients' self-management in coordinating their individual care network [1,7].For this purpose, trained chronic care managers (CCM) provided assistance to establish contact to formal and informal support (e.g., general practitioner, family, regional geriatric outpatient services).In detail, LoChro-Care comprised (a) a comprehensive assessment of the patients' health constitution and context, (b) the creation of a tailored healthcare plan that aligns with the patient's prioritized healthcare issues and preferences, (c) the implementation, monitoring, and modification of the plan, and (d) a closing session [1,7].In the case of mild depression, diabetes, or the absence of a primary caregiver, extra interventional components were applied (problem solving therapy, skill training, trained volunteers).At least the first three contacts took place in the home environment, whereas the subsequent sessions could also be conducted by telephone.The intervention lasted 12 month, with 7-16 contacts with the CCM.As a result, patients' health-related outcomes were expected to be improved or at least worsening progression delayed.Therefore, LoChro-Care was evaluated in terms of patients' physical, psychological, and social health status (as indicated by functional health and depression), as well as their perceived heath care situation, health-related quality of life, life-satisfaction [7], and medication appropriateness.
The objective of the present study is to outline the effectiveness of LoChro-Care regarding the secondary endpoint of health resource utilization.Specifically, we hypothesized that LoChro-Care would lead to a more appropriate utilization of health and nursing care services in terms of decreased emergency hospitalizations, reduced non-elective hospital days and nursing home admissions, more adequate use of informal and formal community services, as well as enhanced disease selfmanagement abilities that contribute to save health care costs [1].
Methods
A two-group, parallel randomized controlled trial was conducted.Patients aged 65 + with one or multiple chronic conditions or geriatric symptoms (e.g., diabetes, hypertension, ischemic heart disease, atrial fibrillation) were recruited by research associates at inpatient and outpatient departments of the Medical Centre, University of Freiburg, Germany, between January 2018 and March 2020 [7].Eligible patients were asked to participate in a short screening ("Identification of Seniors at Risk" questionnaire [8] to assess their risk of unplanned readmission and need for nursing care.Inclusion criteria required at least 2 positive responses out of 6 risk domains.Patients with terminal medical conditions and insufficient German language skills were excluded.Healthcare utilization costs are calculated by using an adapted version of the generic, self-reporting FIMA©questionnaire [9][10][11][12] in combination with the application of standardized unit costs [13,14].Questionnaires were given at 3 time points (T 0 baseline, T 1 after 12 months, T 2 after 18 months).Overall, utilization of 10 cost indicators (General practitioner, Specialist, Day hospital, Hospitalization days [normal ward and intensive care days], Inpatient rehabilitation, Ambulatory nursing, Inpatient nursing, Remendies, Auxiliary means) were measured and total healthcare utilization costs were calculated for a 3-month period prior to T 0 , T 1 and T 2 .All costs are expressed in 2021 values and represent the perspective of the healthcare system.
Utilization of the different cost indicators at T 1 and T 2 was analyzed using negative binomial regression models with adjustment for age, sex and level of care (at baseline) as well as the utilization of the respective indicator at T 0 [15].In Germany, there are different levels of care, which also depends on the amount of financial support a patient receives from the statutory long term care insurance.To determine the level of care, an assessment is carried out, which evaluates the individual's ability to perform everyday activities and the level of support required.As a higher level of care translates into more financial support from the compulsory long-term care insurance, the level of care may change frequently over time when the degree of care dependency increases.
In addition, joint analysis of T 1 and T 2 utilizations are applied using confounder adjusted negative binomial regression models with the patients ID as a random intercept to account for multiple records on the patient level.The results are shown as adjusted incidence rate ratios.
The overall 3-month costs of care at T 1 and T 2 are analyzed using generalized linear models with log-link and gamma distribution [16,17].Again, adjustment for age, sex, the level of care at 3-month costs prior to T 0 took place.Joint analysis of 3-month costs at T 1 and T 2 were conducted using a population -averaged panel data model (with log-link and gamma distribution) to account for multiple records on the patient level.In a last step, the impact of baseline characteristics on overall 3-month costs of care were analyzed across all three periods (T 0 , T 1 and T 2 ).Furthermore, a population-averaged panel data model (with log-link and gamma distribution) was used to account for multiple records on the patient level.Included confounders were group, age sex and level of care.All analyses were performed using Stata 17 (Stata Corp., Texas, USA).
Results
Three hundred thirty patients were eligible for the final investigation, which were well balanced between the groups (163 patients from the intervention group and 167 patients from the control group).Out of the 167 control group patients, 40.12% were males while in the intervention group, 46.01%were males.The mean age of the participants was 77.36 ± 6.60 and 76.19 ± 6.12 in the control group and intervention group, respectively.As shown in Table 1, most of the patients (78.44% and 76.07% in the control group and intervention group, respectively) were not eligible for long-term care benefits from the statutory long-term care insurance (level of care 0).With respect to the other levels of care, percentages are balanced between the groups.
When comparing the various cost indicators, no significant difference was found between the two groups.The total costs of health care utilization were at comparable levels in T 1 (intervention group: M = 6656.79€,SD = 10709.03€;control group: M = 6178.09€,SD = 10595.24€)and T 2 (intervention group: M = 6809.13€,SD = 9907.18€;control group: M = 6221.26€,SD = 9616.46€).The same is true for the 10 cost indicators that were collected for 3-month periods prior to T 0 , T 1 and T 2 (see Table 2).
When analyzing different cost indicators at T 1 and T 2, negative binomial regression models were performed.Figure 1 shows the corresponding incidence rate ratios when analyzing over both measurement points (T 1 and T 2 ).All cost indicators were statistically insignificant (p-values > 0.05).Similar results occur when analyzing T 1 and T 2 separately (see supplemental material, Figure S1 and S2).In summary, no statistically significant difference between the intervention group and the control group could be found in any of the endpoints.
The analysis of potential confounders showed that group (Intervention vs. control group, p = 0.600), age (p = 0.499) and sex (p = 0.506) did not impact 3-month costs of care.The level of care, however, had a major impact on the 3-month costs of care.As shown in Fig. 2, a patient at care level 0 (N = 255) was associated €5509.64costs of care (95%CI €4906.66 to €6112.61) while a patient at care level 2 (N = 37) was associated €11147.50costs of care (95%CI €9098.35 to €13196.64).
Discussion
The economic evaluation of our new local, collaborative, stepped, and personalized LoChro care management program showed no significant difference in health care costs between participants who received the LoChro-Care and those who received usual care, regardless of whether health care costs were measured 12 or 18 months after the start of the intervention.Thus, we did not find evidence to support our hypothesis that LoChro-Care would be associated with savings in health care costs.This result suggests that the overall extent of health care utilization progressed similarly between the two groups, regardless of the type of intervention they received.
In addition, our hypothesis -that LoChro-Care leads to more appropriate use of health and care serviceswas not supported.For the ten different cost indicators measured 12 and 18 months after the start of the intervention, we found no group differences.This means that participants who received LoChro-Care were not associated with the expected reduction in emergency hospital admissions, reduction in non-elective hospital days and nursing home admissions, more appropriate use of informal and formal community services, and improved ability to self-manage their condition, compared to participants who received usual care.
Our analysis however showed that 3-month health care costs of LoChro-Care are highly correlated with the patients' formal level of care.In Germany, a structured assessment of care needs, e.g. for activities of daily living, mobility or personal hygiene, is used to determine the level of care and thus the amount of financial support a patient receives from the statutory long term care insurance.This financial support was not included in the healthcare utilization costs assessed in this study because it is provided by the German long term care insurance rather than the health care insurance, on which our analysis focused.In addition, because reducing the need for nursing care is a lengthy process, we hypothesized that LoChro-Care would offer potential for short-term health care cost savings rather than impacting long-term costs.
Direct comparison of the results with other studies offers a number of challenges.First, inclusion in the study occurred during a hospital contact.Secondly, inclusion in the study was based on a pre-assessment regarding the participants' risk of unplanned readmission and need for nursing care.Nevertheless, the result of the LoChro study is in line with previous studies analyzing the costs of care among older patients in the outpatient sector [14,[18][19][20][21][22][23].From the point of view of the intervention, Kari's study appears to be the most suitable for direct comparison with the present results.Unfortunately, the site of inclusion differs substantially between the studies: in Kari's study, patients were invited to participate by letter and irrespective of a hospital contact, whereas in the present study, patients were approached during a hospital contact combined with a pre-assessment of the severity of underlying conditions.Without going into detail about the intervention, Kari's people-centered care model is quite comparable to the LoChro intervention.The same applies to the results regarding the impact of the intervention on the cost of care.Neither in the first year of the study (p = 0.31) nor in the second year (p = 0.76), nor over both years together (p = 0.42) a difference between intervention and control group could be shown in the study by Kari et al. [18].A look at the individual components of the costs analysed by Kari and colleagues also showed no trend towards a more appropriate use of health and care services (less emergency admissions, hospital stays) between intervention and control group [18].In contrast, intervention programs for multimorbid older people, which were found to be cost effective, were characterized by an earlier start in the development of chronic multimorbidity (e.g. with preventive home visits) [24], or comprised not only support for self-management but also active therapeutic measures such as home safety modifications [25,26], or mobility training [27].LoChro care adaptions in this direction might be reasonable, followed by a re-evaluation of the adapted program.
Limitations
Taking into account that LoChro-Care was a novel intervention being implemented for the first time, some limitations should be mentioned.First, the self-reporting nature of the questionnaires may have resulted in recall biases, especially in face of our target sample of older people.
A substantial limitation regarding the external validity could be the regional specificity of the study.The conduct of the study was limited to the area of Freiburg and the surrounding area.The implementation of the intervention and the study results may have been influenced by specific characteristics of this area, such as relatively high socioeconomic performance.Moreover, we excluded patients with terminal illnesses and insufficient knowledge of German.
Although the sample size could be considered considerable in the context of geriatric research, it was relatively small in terms of a cost-effectiveness analysis.Given the enormous standard deviation in total costs (see Table 2 for details), a multiple of the current sample size would have been necessary to show even a moderately large difference in cost values.Moreover, we have limited ourselves to a simple cost-cost comparison from the perspective of the health care system.The background to this is the ineffectiveness of the LoChro trial regarding the endpoints of physical, psychological, and social health status as well as health-related quality of life and life satisfaction [7], as well as the lack of difference in service utilization between the groups.For the same reasons, the costs of the intervention were not calculated.However, even though this study has shown negative findings, the "Absence of evidence is not evidence of absence" [28].
Conclusion
This study revealed no differences in costs between older people receiving our new local, collaborative, stepped, and personalized LoChro-Care management program or usual care.Keeping in mind the relatively small sample size per economic standards, there is currently no economic incentive for a wider implementation of the intervention.Further studies with larger sample sizes are needed to provide robust evidence of cost savings or cost neutrality of LoChro-Care.
Table 1
Baseline characteristics
Table 2
Healthcare utilization | 3,515.4 | 2023-12-15T00:00:00.000 | [
"Economics",
"Medicine"
] |
An Empirical Analysis on India’s Food Grain Cultivation, Production and Yield in Pre & Post Globalisation
Globalization is, directly and indirectly, contributing to its effect in all sectors of an economy. The agricultural sector is not exempted from the effect of change due to globalization as a component of the primary sector and a prime sector for human survival needs. The status of self-sufficient in the production of food grain will lead a nation to make a walk of pride among the other member globally. India is an agro-economy. In other words, agriculture is the backbone of the Indian economy. So the production of food grain and its cultivation and yield should be normally high to meet out the demand of a growing population. Also, with the implementation of a policy of globalization, there might be some change in cultivation, production, and yield of food grains in India. In this paper, an attempt was made to examine/ identify the change in area under cultivation, yield, and production by using the secondary sources of data from 1970 to 2017. The selected breakeven point of time was 1991-1992. The annual growth rate pictured the change in a particular point of time; the linear and quadratic model gave the growth over the period selected for the study, and dummy used regression model presented the difference in structural change. AGR results dominated by the negative growth rate; the linear growth model for production depicts that 3.6 percentage of tons of production will be move when a year moves upward. The area under cultivation is deteriorating in AGR, and other models used gave a weakness in explanatory level concerning time for the area under cultivation of food grain. Regarding the obtained results for yield reflect that a positive change exists after globalization, even though a reduction in area under cultivation.
Introduction
Globalization has allowed agricultural production to grow much faster than in the past. A few decades ago, fast-growth was somewhat over 3 percent per year. Now it is 4 to 6 percent. 1 In 2017-18, total food grain production was estimated at 275 million tons (MT). India is the largest producer (25% of global production), a consumer (27% of world consumption) and importer (14%) of pulses in the world 2 Agriculture, as the largest private enterprise in India and provides the underpinning for India's food and livelihood security and supports the economic growth and social transformation. During 2008-09 the agricultural sector contributed to approximately 15.7 percent of India's GDP (at 2004-05 prices). It was 14.6 percent in 2009-10 and 10.59% percent of total exports besides employing around 58 percent of the work force. The target of GDP growth in the country for the Eleventh Plan is 8.5 percent per annum, with the agriculture sector expanded to grow at an annual average rate of 3-3.5 percent.
A higher allocation of public sector resources was projected for Agriculture and Allied Activities, from the Tenth Plan realization level of Rs.60,702 crore to Rs. 1,36,381 crore during the 11th Five Year Plan at 2006-07 prices by the Centre, States, and UTs, which was 124 % step-up; share of Centre is 50,924 crores. Although the global recession witnessed during the Eleventh Plan period affected the overall availability of resources, allocation to Agriculture and Allied sector in the Central Plan. The outlay has been significantly raised during the 11th Five-Year Plan, which can be seen from the following Gross Capital Formation (GCF) in agriculture and allied activities, which was around 8 percent of GDP from agriculture and allied activities during the nineties, has since increased to 20 percent in 2009-10. 3 On the other hand, the contribution of the agricultural sector to GDP has continued to decline over the years. Successive Five Year plans have stressed self-sufficiency and self-reliance in food grains production, and concerted efforts in this direction have resulted in a substantial increase in agricultural production and productivity. This is clear from the fact that from a level of about 52 million tons in 1951-52, food grains production rose to above 241.5 million tons (4th advance estimates) in 2010-11 (Gol, 2011b. However, since the early 1990s, liberalization and globalization have become core elements of the development strategy of the government, which had indirect policy implications and impacts on Indian agriculture. As a part of economic reforms, agricultural markets were freed, external trade in agricultural commodities was liberalized, and industry was de-protected to create more competition, thereby reducing input prices and making terms of trade favorable to agriculture. In this context, the present paper discusses the globalization in India concerning food grains.
Review of Literatures
Indian agriculture can be divided into six phases viz. green revolution period (1960-61 to 1968-69), early green revolution period (1968-69 to 1975-76), a period of wider technology dissemination (1975-76 to 1988-89), a period of diversification (1988-89 to 1995-96); post-reform period (1995-96 to 2004-3 www.planningcommission.gov.in. 05), and period of recovery (2004-05 to 2010-11) 4 The period of diversification, reform, recovery periods is termed as the post-globalization period. Globalization is the new buzz word that has come to dominate the world since the nineties of the last century. Globalization can be simply defined as "The Expansion of Economic activities across political boundaries of native states." Globalization refers to increases in the movement of finance, inputs, outputs, information, and science across vast geographic areas. Globalization aims at the integration of the Domestic Economy with the Global Economy and the optimum utilization of growth potential. Calum Brown et al. (2014) stated that a reductions in overall productivity, but increases in production per unit area, under globalization, and increases in overall productivity under regionalization, reducing the productivity gap between globalized and regionalized systems. 5 A researcher Renjiani (2012) attempted to identify the factors affecting food grain production, was carried out a regression analysis. A positive shift in production was observed more in the case of rice and wheat as compared to other cereals and pulses both in India and Punjab. Overall growth in the area, production, and productivity of rice were observed more for Punjab as compared to India, while in the case of wheat, both India and Punjab followed the same trend. Decomposition analysis of growth in production revealed that productivity was the major contributory factor in changes in food grain production. Regarding variability, rice and wheat observed more stable as compared to other crops. The impact of Minimum Support Price (MSP) on production was found significant for Punjab. In contrast, that of the net irrigated area was found significant for India. 6 Kaushik Basu (2018) suggested and stated that agriculture as a share of value-added in GDP has over the last 50 years become quite small, but it is still a vital sector that employs around half 4. Khatkar, B.S., Chaudhary, N. and Dangi, P. "Production and Consumption of Grains: India." Encyclopedia of Food Grains, vol. 1, 2016, pp. 367-373. 5. Brown, C. et al. "Experiments in Globalisation, Food Security and Land Use Decision Making." PLoS ONE, vol. 9, no. 12, 2014, e114213. 6. Renjini, V.R. Growth and Stability of Foodgrain Production in India with Special Reference to Punjab. Punjab Agricultural University, 2012 of the nation's labor force. Even a small decline in its production can cause food inflation, large welfare losses among the poor, and even political instability. Therefore, agriculture as a sector will continue to need nurture. 7 Further, specifically, food grain production covered under this statement for selfsufficient of India by a single word of food inflation. Meanwhile, Khatkar (2016) et al. said that the total food grain demand would increase from 201 million tonnes in 2000 to about 291 and 377 million tonnes by 2025 and 2050, respectively. 8 So the production of food grain in India is essential to meet the growing population demand to fulfill the second prime goal of Sustainable development. Hence the researcher intended to examine the status of total food grains production, cultivation, and productivity in India before and after globalization, which will help the Indian Planners and policy makers to decide about the implications of globalization.
Objectives a. To trace the movements of food grains production, an area under cultivation, and productivity of food grains in India during preglobalization and post-globalization period b. To examine the structural difference in the production, an area under cultivation and yield of food grains in India during the pre-globalization and post-globalization period
Methodology of Data Analysis
The necessary data has been collected from Agricultural Statistics at a Glance 2018 published by the government of India from page number 71 and 72 for this paper, which carries the data from 1950 to 2018. The study period is divided into two periods as Pre-globalization (1970Pre-globalization ( -71 to 1990 and postglobalization period (1991-92 to 2015-16). Annual Growth rate, linear Growth rate, and Quadratic growth model were used to find the movements of food grain production, an area under cultivation, and yield. To estimate the Annual Growth Rate (AGR), Where AGR = Annual Growth Rate, Y t = current year, Y t-1 = Previous year, t = Time Period. The linear growth model: Y t = α+β 1 Years + U t and Quadratic equations / model Y t = α+ β 1 Years +β 2 Years 2 + U t were used to trace movements of a trend in Production, Area under Cultivation and Yield concerning time/years as an independent variable. The second objective is to study the structural difference in the production, an area under cultivation, and yield of food grains in India used the Dummy variable model. The objective of this model is to provide a means for analyzing the behavior of the system to experiment and improve its performances concerning globalization. Y = α +β 0 t+β 1 D t +U t where Y = Production, Area under cultivation and yield of food grains in India t = Time trend variable taking values 1,2,3,... D = 1, for time period 1990-91 to 2015-16 (postglobalization) D = 0, for otherwise in 1970-71 to 1990-91 (preglobalization) α, β 0 , β 1 are unknown parameters where α is an intercept, β 0 , β 1 is differential coefficients. The difference between the differential intercept coefficients of β 0 , β 1 will give the coefficient result of the benchmark or pre globalization period. The above table 1 shows the annual growth of foodgrain production for pre-globalization (1971 to 1991) and the post-globalization period (1992 to 2017). The agricultural production in India has been moved in a zig-zag from 1970-71 to 2015-17. The highest annual growth of food grains production in India for the pre-globalization period is 21.24 percent in the year 1975-76, and the highest annual growth rate for the post-globalization is 21.98 percent in the year 2003-04. The annual growth rate is negative in the eighties before and after 1983-84 in the pre-globalization period with a high difference in previous and succeeding years. This was the period of sixth five-year plan and end of Mrs. Indra Gandhi as a Prime minister and attempted to execute her 20 point program for the betterment of the economy. The magazine, India Today (Dec31-1983), stated that there was an increase as expected in Kharif production and especially in rice output, which should be 51.5 million tonnes with a significant upsurge of 20 percent. Rabi performance, which is relatively better protected against the weather by irrigation, was unlikely as spectacular with foodgrains growing a mere 2 percent, in simple words, there was a bountiful harvest in 1983-84. Again it was in 1988-89 happened and election year too. The Rajiv Gandhi government policy emphasized the industrial sector.
Findings and Discussion (a) Production of Foodgrains 1970 to 2017
Meanwhile, a good monsoon led a glut in food grain production and followed by V.P Singh government leadership in the period. The annual growth rate was double-digit in the years 1996-97, 2003-04, and 2010-11 as 10.54, 21.98, and 12.23, respectively. Food grain production in 2015 was a negative annual growth rate, which made fear about the fulfilling of domestic basic survival needs of the citizen. The policy makers and country ruling government attempted to use the globalization apt way and moved to positive in 2016 and reached 9.08 percent in 2017, which is a good sign for the availability of food grain to meet the domestic consumption.
The overall change in the production of food grain crops for the pre-globalisation and postglobalization period was manipulated by using a linear and quadratic equation. The linear equation gives the changes over the period in constant scale, while the quadratic equation gives the speed of change in the growth of food grain production. The following table-2 gives the information of pre, post, and total periods of food grains production taken for the study as 1970-91, 1991-2017, and 1970-2017 respectively. The obtained result of the total explanatory level of a model or the explanatory level of time-variable R-Square is above 85percent in all three periods' regression or for all equations. Also, the F value states that as significant in all three sets of values. The individual co-efficient of time variable value is 3.6 for pre, post and a total period of linear models, which means that a unit change or addition of year make a change in the increase of 3.6 percentage of production over the pre, post and total period of food grains taken for the study. In a linear model, β 1 reflects the growth rate, which is the slope of the linear equation. So, the growth rate 3.6 is the same for pre-globalization food grain production and post-globalization period in case of linear growth model and overall periods of food grain production too. According to the linear growth model, there is no impact of globalization on the growth of food grain production in India. The speed of production of food grain is positive over the period before and after globalization from the sign of b2 value and for overall periods. The summation of β 1 +2β 2 t gives the growth rate for the quadratic model or speed of production over the periods. The sign of parameters gives a positive growth rate in the production of food grain over the period. The growth rate of the quadratic model for the pre globalization period is 74.739, which is manipulated by the summation of β 1 +2β 2 t. The post globalization period growth rate is 95.55 and for the overall period is 170.657. The result of the quadratic model gives the picture of difference or change in the post-globalization period growth rate as 20.811 percentages in positive increasing speed. The growth rate of the overall period is 170.657. The difference will be checking out by the following dummy used model by breaking the same point of time. The value of R -Square gives the explanatory level as 95.6 percent by the selected explanatory variable as time and dummy. The F test is used to determine whether a significant relationship exists between the dependent variable and the set of all the independent variables. The F test is referred to as the test for overall significance. The F value is significant and reconfirms the goodness of model, while the Durbin Watson-d Statistic, also near two, (1.9 approximately) gives the absence of autocorrelation. The dummy used model,α stands for intercept and (α) is a measure of the extent of the variable (Y) that is not affected by the changes in a variable (X). The beta coefficients (the standardized b values) useful for comparing the relative weights of the independent variables.β 0 stands for co efficient of overall years /time without a break, β 1 stands for differential co efficient or gives the value of the postglobalization period.
If the estimated regression equation is to be used only for predictive purposes, Multicollinearity is usually not a serious problem. Every attempt should be made to avoid including independent variables that are highly correlated. VIF (variance inflation factor) for each term in the model measures the combined effect of the dependencies among the regressors on the variance of the term. Practical experience indicates that if any of the VIFs exceeds 5 or 10, it is an indication that the associated regression coefficients are poorly estimated because of multicollinearity. Here, in this model, the obtained result of VIF 3.874 and tolerance 0.258 for collinearity statistics, which is below the value 5 and given assurance of explanatory variables are not correlated and absence of multicollinearity.
The value of α is 91.304, which reflects the value of the linear model before and overall periods of table -2. So, the changes in explanatory variables are not affected, and certainly a production of 91.304 tons of food grain production produced in India. According to the last year, 2016-17 data said that 275.11 tons of production of food grains in India. If we compare with this result, more than one-third of the production of India is not affected by time variable and omitted variables. It means that, at any cost, certainly, there will be more than 91.304 tons of production of food grain in India, which is an important result for the policymaker to know the minimum expected production of food grain in India for revealing the food security measures.
In the above result of table-3, the time factor is shown as a positive sign and valuedas3.635 tons of food grains production in India. A unit or a year change will impact the agricultural production of food grain is upward sloping supply, positively as 3.635 percentage often production. The result reminds the result of the linear growth rate of overall periods, before and after globalization. So as per the variable time /result of β 0 , the production of food grain is normal constant growth irrespective of globalization.
The differential intercept coefficients β 1 gives the value of how much the value of the intercept that receives the value of 1 differs from the intercept coefficient of the benchmark category-the value of the bench mark category obtained from the difference of β 0 , β 1 coefficients. Here, from the table, the value of 3.027 obtained by subtracting the value of the coefficient of β 0 , β 1 which is positive and states that the unit of year movements made a 3.027 percentage of tons of production of food grain in pre globalization period. Meanwhile, the differential intercept coefficients, β 1 sign is negative; it means that there is an inverse relationship between food grain production in India and the period of post-globalization. A year or a unit change will impact a negative change of food grain production or deteriorating of .608 percentage of food grain production from 1991 to 2017. It may be possible if there is an importing of food grain due to globalization.
(b) Area Under Cultivation of Foodgrains in India
Table -4 shows the Growth of area under cultivation pre-globalization (1970-71 to 1990-91) and post-globalization period (1991-92 to 2015-16). The area under cultivation in India has been fluctuations from 1970-71 to 2016-17. The pre-globalization highest annual growth rate of 6.67 percent is seen in the year 1988-89. The postglobalization highest annual growth rate of 8.41 percent is seen in the year 2003-04. The model used for studying food grain production was used for studying the area under cultivation of food grain, which gave poor goodness of fit. Table 5 and 6 depicts the value of R-Square and F statistic, which are revealing the overall weakness of the model. So, studying the area under cultivation is not explained by the time variable and may say that 87 percent is representing the omitted stochastic error term. The sign of β 1 gives a negative relationship concerning time variable and area under cultivation in the post-globalization period. This result justifies the reasons for the negative of food grain production in the post globalization period in table three.
(c) Yield Per Hectare Foodgrains in India During
The following table 7 shows the yield of food grains production in the pre-globalization and postglobalization period. The yield of food grains in India has been zig-zag status from 1970-71 to 2016-2017. The annual growth of yield of food grains in India is moving up and down during 1970-71 to 2016-17. The highest annual growth rate is 16.78 percent seen in the year 1980-81and the highest annual growth rate of 12.51 percent during the year 2003-04. The table -8 and 9 reveals the yield of food grain in the study period of linear, quadratic, and dummy used models. The value of R-square and F -statistic depicts that the goodness of the model. In general, the explanatory level of time variable is good and more than 88 percentages and confirming with the overall significance of F statistic values presented in both the tables. The obtained result of the linear model growth rate is 26.96, 25.997, and 28.875 for the pre-globalization, post-globalization, and total study periods, respectively. The yield is upward trending in both the periods but a percentage of growth decreasing in the post-globalization period compared with pre globalization period. Source: Manipulated from secondary data of table 5 The obtained result of VIF depicts that no multicollinearity and the value of Durbin Watson Statistics present in table 9 reveals that the absence of Autocorrelation. The sign of differential intercept coefficient β 1 is positive, meanwhile not significant at 5 percent and 10 percent level. The pre globalization period is significant and represents that a movement of one year to another year negative change as -54.308, which is obtained from the subtraction of the differential coefficient value of β 0 , β 1 . The intercept or constant value is 751.236 yield of food grains will exist in India. The value of the intercept is reflected in a linear growth model value of the constant term in pre globalization and over all study periods presented in table-8. The obtained result means that there exist constant yields of 751.236 tons irrespective of the time variable.
Conclusion
The present study reveals that globalization had a positive impact on the yield of food grains in India and hurt area under cultivation of food grains in India, which depicts that there exists a technology transition in India due to globalization. Hence, globalization is a boon to food grain yield, even though there exists a deteriorating of the area under cultivation of food grain and which is reflected in the production of food grain. Last but not least, India is constantly producing a minimum of 100 tons of food grain production and 750 tons of yield irrespective of time and other stochastic variables influencing the food grain production and yield. | 5,272.4 | 2020-06-01T00:00:00.000 | [
"Agricultural and Food Sciences",
"Economics"
] |
The PARIGA Server for Real Time Filtering and Analysis of Reciprocal BLAST Results
BLAST-based similarity searches are commonly used in several applications involving both nucleotide and protein sequences. These applications span from simple tasks such as mapping sequences over a database to more complex procedures as clustering or annotation processes. When the amount of analysed data increases, manual inspection of BLAST results become a tedious procedure. Tools for parsing or filtering BLAST results for different purposes are then required. We describe here PARIGA (http://resources.bioinformatica.crs4.it/pariga/), a server that enables users to perform all-against-all BLAST searches on two sets of sequences selected by the user. Moreover, since it stores the two BLAST output in a python-serialized-objects database, results can be filtered according to several parameters in real-time fashion, without re-running the process and avoiding additional programming efforts. Results can be interrogated by the user using logical operations, for example to retrieve cases where two queries match same targets, or when sequences from the two datasets are reciprocal best hits, or when a query matches a target in multiple regions. The Pariga web server is designed to be a helpful tool for managing the results of sequence similarity searches. The design and implementation of the server renders all operations very fast and easy to use.
BLAST Parameters
The home page presents a basic interface that asks for the dataset type (Protein/Nucleic Acid) and automatically selects the corresponding BLAST program (blastp/blastn).
By clicking on "Show Options" several menus will appear where the user can modify the default values of the most common BLAST parameters. Clicking on "Hide Options" will restore the initial screen.
Input Data
According to the options selected in the previous section, the system will ask the user to upload a protein dataset or a DNA/RNA dataset.
The two datasets can be uploaded as files or directly pasted in the input form. The two options are available by clicking the "change input type" button. If the second dataset is not provided, the software will perform a self-blast on the first dataset.
Test Datasets
Some example datasets are available in the "test datasets" section. Test datasets (nucleotide or amino acid sequences) can be selected via the scroll menu, and loaded into the input form by clicking on the "Load" button.
Run
The job is submitted by clicking the "run" button at the bottom of the page, and a new form with the jobID will appear. The jobID can be used to retrieve the results later.
Results
When a job is completed, a page with two tables will appear with the results of the Blast searches of dataset 1 vs 2 and dataset 2 vs 1.
The header of each table contains two menus (Query number and Query title) to jump to a particular result ("a"). Alternatively the "prev-next" buttons allow the user to scroll over the results ("b"). The "open filter" button ("c") opens a form where the user can select the minimum, maximum or both values for the desired parameters. By clicking the "filter" button, only matches satisfying the filters will be displayed.
The sequence alignment of the individual hits can be visualized by clicking on the hit name. 4 Selecting one the four icons available at the top of each table, a pop-up window with additional information will be displayed: Graphical report of the distribution of the Blast hits on the query sequence, where hits are colored according to the alignment score.
Export of the current table.
List of the sequences producing significant alignments, with score and E-value.
Blast statistics for the current search.
Logical operations
Pariga is able to perform logical operations with the results of the two Blast searches. The tools can be accessed by clicking the "search tools" link at the top of the result page.
This section enables the user to apply three different search criteria: -Common: Select two or more sequences in the first data set to find out whether they match the same sequence(s) in the second dataset -Cross: Select one sequence in the first data set to find if there is a reciprocal match with sequence(s) in the second dataset -Multiple: Select one sequence in the first data set to find out whether they match more than one region in the same sequence(s) in the second data set An overall summary of the results obtained with the different queries is available in the Summary Results section: the Common summary table contains a list of sequences with common hits; the Cross summary table contains the results of a cross search for each sequence; the Multiple summary table contains a list of sequences with multiple hits on the same sequence.
By using the options available in the Selection Tools section it is possible to quickly select/unselect a set of sequences or filter for sequences that match more than one region on the same entry (show multiple). 6 The reference dataset for the summary results and the individual searches can be selected using the Reference Dataset buttons. To perform a new search and discard the results of the previous one, the reset button has to be clicked. A contextual help ( ) is associated to each button. The two-columns table contains the names of the sequences in the two datasets that can be selected for the different searches.
Common
The Common search checks whether two (or more) sequences in the reference dataset (identified by the checkbox in the table header, "reference dataset") share common results among the matched sequence in the other dataset. By indicating the number corresponding to the sequences or selecting them via the checkboxes and clicking the "common" button, a table with results will appear. This table will indicate the name of the common matched sequence in the header, and the result parameters for each of the selected sequences. If more than one sequence in the dataset satisfies the logical query, the "next-prev" buttons can be used to scroll through the results. The "open filter" button allows the user to filter the results as previously described. The alignment will appear in a pop-up window by clicking on the sequence name.
Cross
The Cross search allows checking whether the selected sequence in the reference dataset is reciprocally matched by a sequence in the other dataset. Sequence selection is carried out as described in the previous section. Clicking the "cross" button will display two tables showing the reciprocal blast results. The "prev-next" pairs of buttons will allow scrolling over the results. The alignment will appear in a pop-up window by clicking on the sequence name.
Multiple
The Multiple search will check if the selected sequence in the reference dataset matches sequence(s) in the other dataset in more than one region. Once selected the reference dataset and the desired sequence, clicking the "multiple" button will show a table with the results, if any. The "prev-next" pairs of button will allow scrolling over the results. The alignment will appear as popup window by clicking on the sequence name.
Case study 1 -Identification of miRNA targets
Recent studies have shown that, in some forms of tumors, several genes elude the miRNA-based repression by a mechanism based on the alternative splicing of the polyadenylation signal [1]. In practice, two (or more) transcript splicing isoforms differ for the length of their 3'UTR only, one with the "canonical" length, the other(s) with a shortened variant downstream a secondary polyadenilation signal. While the former contains multiple sites for miRNA pairing the latter includes just one (or fewer) miRNA pairing site(s). The second isoform can then elude the miRNA driven repression.
We decided to investigate if targets of the miRNA family let-7 (predicted by TarBase [2]) contain multiple potential miRNA pairing sites across different polyadenylation sites in their 3' sequences.
We show here how we took advantage of PARIGA to investigate the problem.
Dataset 1: mature sequences of miRNA let-7 family, 17 sequences (downloaded from miRBase [3])
Dataset 2: 3'UTR sequences of transcripts of top 20 predicted target genes (according to TarBase), downloaded through Ensembl web sites [4] and processed using an ad-hoc python script to generate all possible variants terminating the original sequence at every polyadenylation signal (AAUAAA as indicated by [5]). This resulted in176 variants.
Results
As shown in the header of the first table, only 9 miRNAs had a match. The first is the hsa-let-7a-5p. Due to the 8 By performing the reciprocal analysis, only 37 sequences out of 176 had a match in the dataset 1. For example, in the first result, the ENST00000315927_var1, had a perfect match (100%) with 7 miRNAs, three of them with an aligned region of 12 nucleotides and the others with 11 aligned nucleotides. All of them are good candidates for mRNA repression since the perfectly aligned region maps the mirna's 5' [3] as can be noted in the hstart-hstop columns.
The results can be scrolled by clicking the "prev-next" button.
As described before, alignments can be retrieved by clicking on the sequence names.
Results can be further explored using the "search tools". 9 For example, we can investigate whether hsa-let-7a-5p and hsa-let-7b-5p share a common target. First, we select the reference database, then the two miRNAs by clicking on the checkboxes near their names. By clicking the "common" button a table appears showing that the sequence ENST00000315927_var1 is matched by the two selected miRNAs at the same potential binding site (positions 25-36, columns hstart, hstop in the table).
Using the ""prev-next" buttons we can browse the other results: We could also investigate, for a given mirna (es hsa-let-7c), which sequence is reciprocally blasted. First of all we have to select the reference database by selecting 1 in the Reference Dataset section, and the miRNAs by clicking on the checkboxes near their names. Then, by clicking the "cross" button, two tables will appear showing the results for the two blast searches, and highlighting only the two involved sequence. As an example in the pair considered here, the blast search of the hsalet-7c against the 3' dataset shows that the ENST00000315927_var5 has 8 matching regions of twelve nucleotides each (qstart,qstop columns in the first table) and viceversa. The ranking in the relative blast searches is indicated in the column "position" while the numbers in the parenthesis in the headers of the tables indicate the position of that sequence in the original dataset.
11
The results of a similar search for the mirna hsa-let-7d-5p is shown in the following figure, where the scroll menu to navigate among results is highlighted.
Finally, a further application could be the identification of UTR sequences which are matched by a given miRNA (the hsa-let-7d-5p in the figure) more than once. As usual, we have to select the reference database, then the desired miRNA, followed by clicking on "multiple" button. In this example, nine UTR sequences are matched by the selected miRNA more than once: in the following figure, ENST00000315927_var3 is matched by the hsa-let-7d-5p miRNA at the positions 25-36 and 2886-2897 (columns hstart,hstop). The scroll menu can be used to select other results. 13
Case study 2 -Identification of Pfam domains
Let us assume that we have several protein sequences for which no functional experimental evidences is known, or that are derived from in silico studies. We can investigate whether any of these peptides shows sequence similarity with annotated DNA topoisomerase I protein sequences.
On the contrary, the two splicing variant ENSP00000307928 and ENSP00000454207 share common results. It is noteworthy that both proteins belong to the gene ENSG00000174450, which has four splicing variants in the dataset.
Further, these two peptides have multiple matches against different sequences in the PF02919 seed dataset. As usual they can be showed by the scroll menu. Limits of the regions are listed in the qstart-qstop columns.
If we would like to know whether, for a given sequence, there are reciprocal hit matches we can select the sequence and then click the "cross" button.
In the case shown in the figure, the two reciprocal hit sequences are the ENSP00000307928 protein and the A7TJW1 putative topoisomerase from Vanderwaltozyma polyspora.
It can be noted that while the protein matches the domain sequence just once (Table 1), the domain sequence matches the protein in different regions (Table 2) with different similarity and length of the alignment. 17 Also in this case, when no sequences are cross matched a message will appear.
As we noted before the ENSP00000307928 protein is matched by the A7TJW1 sequence domain more than once. If we would like to further investigate this aspect we can use the "multiple" option. First, we click on the sequence name and then on the "multiple" button: a table with the regions matched by the query sequence will be shown in the result table.
As mentioned before, these two sequences have multiple matching regions. The column position indicated where these results rank in the blast searches, while the columns qstart,qstart show where the alignments occur. Also in this case, if no multiple sequences occur, a message will be displayed. | 3,021.2 | 2013-05-07T00:00:00.000 | [
"Biology",
"Computer Science"
] |
Design of microhidro turbine for electricity plants based on techno park in cimanggu village
The utilization of water as a natural resource of renewable energy for power plant is one of the alternative solutions to replace the need for fossil fuels. In Cimanggu village, there is a river that has a waterfall, which can be used to generate electricity. But the waterfall is currently used as a village tour. From observations and the calculation speed of the flow by using buoys, and river cross-sectional area obtained the total river debit Q more than 0.07 m3 / s. From the measurements is also obtained high fall the waterfall H = 22 m. Based on the existing height and debit of the water fall the major dimensions of a water turbine Pelton Micro Hydro type as the driving power generator was planned. The results of calculations for effective head = 18 m, with a water debit that is used to drive the runners Q = 0.07 m3/s, gained power generated by 7,0 kW. By the data is planned the main dimensions of micro turbine, Pelton outer runner diameter D = 250 mm, the diameter of the circle pin DL = 346 mm, number of blades z = 12 with 2 nozzles, and a shaft diameter ds = 30 mm. In planning this turbine, water is only directed to radiate to one side in order to replace the function of the waterfall. Because water is directed to one side, the shaft will experience an axial force (which is parallel to the shaft).
Introduction
Electrical energy is a mutual need of human life in all aspects, due to the reason various methods are used to explore the use of various alternative energy, one of them is natural water resources that can be used as a source of energy for electricity generation. In Cimanggu village, there is a waterfall that can be used to generate electricity which has been misused as a vacation spot.
Most of waterfall area could be used as scale power plants (micro-hydro), but there is no support from the local government, because it is more profitable to develop the waterfall area to be a tourism destination than building a power plant.
The design of turbine-based on techno park is created to to replace the function of the waterfall. The construction of the micro water turbine is able to produce electrical energy which can be used to increase tourist destination.
Theoretical Basis 2.1 Understanding Turbine
Turbine is a driving machine which is the fluid is used directly to turn the turbine wheel. Water turbines mean the water is a working fluid. Classification of Water Turbines. Water turbines can be classified based on several things, including: 1. Based on work principles
Pelton Turbine
Pelton turbines are included in the group of impulse turbines. The general characteristic is the entry as a flow of water into the runner at atmospheric pressure.
Micro-pelton turbines has capacity smaller than common pelton turbines. Micro shows the size of generating capacity which is between 5 kW to 50 kW. Main components of Pelton Turbine: 1. Turbine House.
The turbine house besides being a place where the turbine is installed also functions to capture and bend the splash of water flow out of the bowl so that both the runner and the beam are not
Nozzle
The nozzle consists of a nose-like sheath that is mounted on a pipe, and the needle nozzle is usually moved in a needle cone bend and a wear-free sheath. Absolute speed can be calculated by equation (
Shaft
The shaft is one of the most important parts of each machine. If the correction factor is fc, the plan power Pd (kW) as a benchmark is: Pd = fc . P If the twisting moment is called the plan moment, T (kg.mm), then the torsional moment of the plan is determined by the equation (Sularso To reduce friction loss, rolling bearings have been chosen in this plan that can withstand radial and axial forces. The choice of bearing is determined based on the axial force that occurs.
Electric Generator
Electric generator functions to convert mechanical energy to turn shaft into electrical energy. PLTMH uses a 3 phase alternating current generator.
Rapid Pipeline Calculation.
Head effective can calculation He = H -Hf. H = difference in height of water source with turbine Hf = Head loss in the pipeline is fast Where : = Total head loss in the rapid pipeline (m) g = gravitational acceleration (m/s 2 ) V = the speed of water in a rapid pipeline (m/s) Q = Discharge of water in the rapid pipeline (m 3 /s) d = Inner daimeter in the rapid pipeline (m) In pipe selection can be determined the estimated price of pipe roughness in the interior using the Moody diagram according to the pipe age plan.
From there, the price of k / d can be obtained so that using the Moody diagram the value f can be obtained.
L= Horizontal distance from the source to the turbine house (m) Hf = pressure losses on pipes and auxiliary materials
Design of Pelton Turbine
Pelton turbines are one type of water turbine which is suitable for watersheds that have a high head. Pelton turbines are one of the most efficient types of water turbines. The shape of the turbine blade consists of two symmetrical parts. Blades are formed so that the jet of water will hit the middle of the blade and the jet will turn both directions so that it can reverse the jet of water well and free the blade from the side forces so that kinetic energy is converted To get good efficiency, in the Pelton turbine there must be a relationship between the traveling speed (u1), and the exit speed (c1). To analyze the flow through the motion of a curved propeller it is necessary to draw a velocity triangle.
In planning Pelton turbines there are several things that must be considered, including the specific speed equation
Calculation of Head Loss in Rapid Pipes.
In rapid pipeline calculations, the discharge used to rotate the turbine is determined at 0.07 m 3 /s. The length of the pipe is obtained by measuring the field as follows: Straight pipe : 48 m Rapid pipeline material was chosen from PVC, with a nominal diameter of 160 mm, the friction factor was searched using Moody's diagram: Get value f = 0,018. In the pipeline, the supporting materials are used as follows: : Selection of Bearings To reduce the power lost due to friction, the bearings used are rolling bearings. From fig. 3.2, it can be seen that the water velocity out of the turbine is as follows:
Figure 5.Velocity triangle
The greater the angle taken then the axial force that occurs will be greater and the speed of the water emitting out of the turbine will be even greater, but the power of the turbine will be smaller.
Data Processing Results
From the calculation results it is known that the Pelton turbine is suitable for designing Micro Hydro electric generator drive turbines because it works at low compressive height. This can be seen in the calculation data as follows: a. Turbine -The actual power produced by the turbine, P = 7,0 kW -Actual jet speed, The selected bearings are rolling bearings that can withstand axial forces. -Same as shaft diameter, ds=30 mm
Discussion
In this plan, a water discharge is used to rotate the turbine as big as 0,07 m 3 /s, with effective head 18,0 m, the power generated by the turbine is may be obtained 7,0 kW, where total efficiency is 60% based on the range of application of Pelton micro turbines.
Conclusions and Recommendations 5.1 Conclusion
Based on the result of research and calculations it can be concluded : 1. A Pelton Micro Hydro type water turbine can be designed to be used as a driver for a power plant in the village of Cimanggu. Produced from Turbine design • The actual power produced by the turbine, P = 7,0 kW | 1,887.2 | 2020-01-21T00:00:00.000 | [
"Engineering",
"Environmental Science"
] |
Functional surfaces, films, and coatings with lignin – a critical review
Lignin is the most abundant polyaromatic biopolymer. Due to its rich and versatile chemistry, many applications have been proposed, which include the formulation of functional coatings and films. In addition to replacing fossil-based polymers, the lignin biopolymer can be part of new material solutions. Functionalities may be added, such as UV-blocking, oxygen scavenging, antimicrobial, and barrier properties, which draw on lignin's intrinsic and unique features. As a result, various applications have been proposed, including polymer coatings, adsorbents, paper-sizing additives, wood veneers, food packaging, biomaterials, fertilizers, corrosion inhibitors, and antifouling membranes. Today, technical lignin is produced in large volumes in the pulp and paper industry, whereas even more diverse products are prospected to be available from future biorefineries. Developing new applications for lignin is hence paramount – both from a technological and economic point of view. This review article is therefore summarizing and discussing the current research-state of functional surfaces, films, and coatings with lignin, where emphasis is put on the formulation and application of such solutions.
Introduction
Lignin is the second most abundant biopolymer on earth, aer cellulose. Natural lignin is synthesized from the three monolignol precursors, namely p-hydroxyphenyl (H unit), guaiacyl (G unit), and syringyl (S unit) phenylpropanoid. 1 Lignin from sowood consists primarily of G units, whereas hardwood lignin contains both G and S units. 2 Moreover, lignin from annual plants, such as grass or straw, can contain all three monolignol units.
Technical lignin is the product of biomass separation processes and hence differs from natural or pristine lignin, as it is found in lignocellulose biomass. 3 The composition and properties of technical lignin are largely determined by their botanical origin, extraction process, purication, and potential chemical modication. 4 Presently, there are some 50-70 million tons technical lignin available from pulping or biorenery operations. Most is burned to produce energy in biorenery processes and only approx. 2% is sold commercially. 5 Technical lignin isolated from pulping processes includes Kra and soda lignin from alkali pulping, lignosulfonates from sulte pulping, and organosolv lignin from solvent pulping. 6 The two main types of technical lignin are lignosulfonates (approx. 1 million Dr Jost Ruwoldt is a research scientist at RISE PFI, Norway. He graduated with a PhD in Chemical Engineering from the Norwegian University of Science and Technology (NTNU) in 2018, and an MSc in Chemical and Bioprocess Engineering from Hamburg University of Technology (TUHH) in 2015. His current work includes lignin technology, thermoforming of wood pulp, and biomass conversion and utilization. In addition to his work at RISE PFI, he is a visiting researcher and lecturer at TU Berlin, Germany.
Dr Fredrik Heen Blindheim is a Postdoctoral researcher at RISE PFI. He received a PhD in Organic Chemistry at the Norwegian University of Science and Technology, specializing in medicinal chemistry and the development of small-molecule bacterial kinase inhibitors. In his current position, he works with chemical modication, quantication, and characterization of technical lignins for green applications in industry. His main interests are in organic synthesis and spectroscopic analysis.
tons per year) and kra lignin (<100 000 tons per year). In addition, the advent of hydrolysis and steam-explosion lignin have created new types of technical lignin. 7,8 The use of ionicliquids or supercritical solvents have furthermore yielded the products ionosolv lignin and aquasolv lignin, respectively, with new and interesting features. 9,10 Lignin is polyaromatic and due to this structure, it is less hydrophilic than polysaccharidic biopolymers, e.g., cellulose, hemicellulose, starch, alginate or chitosan. 11 It is hence a promising candidate in various applications, including: (i) reduction of wettability of hydrophilic materials, (ii) addition of functionalities, such as protection from UV light, antioxidant and antimicrobial properties, and (iii) tailoring of materials and formulations, e.g., for controlled substance release, adsorption, or antifouling mechanisms. [12][13][14][15][16] However, chemical modication is required for most applications of lignin. Such modications frequently make use of lignin's hydroxyl groups, for example, by graing reactions during phosphorylation, sulfomethylation, esterication, or amination. 17 The aromatic moieties in lignin can furthermore be targeted for, e.g., replacing phenol in formaldehyde resins. 18 At last, the carboxyl groups in lignin may also serve as reactive sites for polyesters. 19 Interest has also been strong for the use of technical lignin in polymeric materials, e.g., for thermoplastics or thermosets. 20 Processability of lignin in thermoplastics can be done without modication, as lignin is an inherently thermoplastic material. 21,22 Lignin's glass transition temperature can range from about 60-190°C and may depend on many factors, including the botanical origin and pulping type, moisture content, and chemical modication. 23,24 Lignin can also be chemically modied to improve the application of lignin as specialty chemicals or in polymeric materials. [25][26][27] Additionally, the utilization of lignin as macromonomer, i.e., thermoset precursor, can be done as part of polyurethanes, polyesters, epoxide resins, and phenolic resins. 11 End-uses include the production of rigid or elastic foams, rigid and self-healing materials, adhesives, biocomposites, and coatings. 19,[28][29][30][31][32][33] One long-held belief is that lignin provides water-proong in the wood cell wall to support water-transport. 34 Despite yielding a contact angle below 90°, which would be required to pose as a hydrophobic material, various researchers have shown that lignin can reduce the wettability and water-uptake of wood and pulp products. 18,[35][36][37] Hence, both technical and chemically modied lignin have been proposed as additives for packaging materials. 38 Reduction of wettability of ber-based packing is a particularly interesting application, considering environmental and societal drivers regarding reduction of single-use plastics and environmental pollution. Lignin could thus form the basis of coatings or impregnation blends, provided that the lignin-coating complies with food contact requirements. One example for lignin-blends is the combination with starch during surface-sizing of paper, which can improve extensibility and reduce wetting of the starch-matrix. 35,39 Layer-by-layer assembly with multivalent cations or polycationic polymers has also been done, which can improve the strength and hydrophobicity of cellulose. 40,41 Other applications of lignin, its derivatives and mixtures include the use for controlled-release fertilizers, antifouling membranes, re retardancy, dye sorption, wastewater treatment, and corrosion inhibitors. [14][15][16][42][43][44] One publication even reported an unintentional but yet advantageous coating of coir bers, where the lignin delayed oxidation and thermal degradation of the bers in a polypropylene composite. 45 Major drivers for using lignin are economical aspects by attributing value to a by-product from pulping or biorenery operations, and sustainability by replacing fossil-based materials with biopolymers. Many applications can thus benet from the inclusion of lignin in functional surfaces, lms, and coatings. The mechanism of action and application mode can hereby differ greatly. This review therefore represents an effort to structure and summarize recent progress, where emphasis is put on both the process and nal use for lignin in surfaces and coatings.
Structure and composition of natural lignin
Lignin is part of the lignin-carbohydrate complexes (LCC) that are found in cell walls of plants and woody materials, as illustrated in Fig. 1. The cellulose bers are tightly bound to a complex network of hemicellulose and lignin, and the three biopolymers provide strength and stability to the cell walls. In addition to providing structural integrity, lignin helps building hydrophobic surfaces which are important in transport channels for water and nutrients. 46 The complex lignin network consists of the three 4-hydroxyphenyl propylene units, or monolignols, formed from the parent compounds p-coumaryl-(p-hydroxyphenyl, H-unit), coniferyl-(guaiacyl, G-unit) and sinapyl alcohol (syringyl, Sunit), see Fig. 2. 47 The monolignols differ only in the presence or absence of one or two aromatic methoxy groups ortho to the hydroxyl group. These are synthesized in vivo from the aromatic amino acid phenylalanine, formed in the shikimic acid pathway in plants. 48 radical cross-coupling reactions which results in the complex, and varied, lignin network. The ratios of the three monolignols in lignin from different sources can vary quite signicantly, hardwood lignins contain G-(25-50%) and S-units (50-70%), sowood lignins contain mostly G-units (80-90%), while grass lignin contains mixtures of S-(25-50%), G-(25-50%) and Hunits (10-25%). 49 The monolignol composition (H : G : S ratio) can also vary between tissue types in the same organism, which has been illustrated in the cork oak, Quercus suber. Lignin from the xylem (1 : 45 : 55) and phloem (1 : 58 : 41) differ less in composition than the two compared to the phellem (cork-part, 2 : 85 : 13). These differences affect the occurrence of specic interunit linkages, where an increase in S-units lead to an increase in alkyl-aryl ether (b-O-4) bonds: 68% in cork, 71% in phloem, 77% in xylem. 50 The difference in abundance of the three monolignols lead to many different types of interunit linkages in lignin, specically between angiosperm (hardwood and grass) and gymnosperm (sowood) lignin. 51 The most common interunit linkage is the b-O-4 alkyl-aryl ether bond (Fig. 3), which occurs between 45-50% or 60-62% of phenyl propylene unit (C 9 units) in sowoods and hardwoods, respectively. As this is the most common linkage, many delignication processes target this specic linkage. For sowoods, the 5-5 linkage is also important, and has an abundance of 18-25% per 100 C 9 units, while this linkage occurs only around 3-9% in hardwoods. 52 In the process of isolating technical lignins, both the labile aryl-alkyl and b-O-4 bonds are most prone to cleavage. 53 This results in technical lignins having more condensed and variable structures than native lignin, and a wide variety in molecular weight (M w ). Mass average values (M w ) of 1000-15 000 g mol −1 for soda lignin, 1500-25 000 g mol −1 for Kra lignin, and 1000-150 000 g mol −1 for lignosulfonates have been reported, depending on botanical origin and process conditions. 54 Native lignin is a virtually innite macromer that is both randomlyand poly-branched. 55 The bonds between the lignin and surrounding hemicellulose and cellulose found in LCC have recently been reviewed. 56 All sowood lignin, and 47-66% of hardwood lignin, is reportedly bound covalently to carbohydrates, and mainly to hemicellulose. The most common types of linkages found in LCCs are benzyl ether-, benzyl ester-, ferulate ester-, phenyl glycosidic-and diferulate ester bonds. 57 Note that due to the high degree of variability in inter-unit and LCC linkages, Fig. 4 should only be taken as an illustrative example. The lignin macromolecule is polydisperse and may exhibit various linkages and functional groups. 4 In other words, lignin should be considered as statistical entities rather than distinct polymers.
Isolation of technical lignin
Lignocellulosic biomass consists of cellulose (30-50%), hemicellulose (20-35%) and lignin (15-30%), where the lignin acts as a "glue" within the LCCs. 58,59 The actual lignin content of the biomass is highly inuenced by its botanical origin e.g., 28-32% is found in pine and eucalyptus wood, while switchgrass contains only 17-18% lignin, 60 and less than 15% is typically found in annual plants. 61 The rst step in lignin valorization is biomass fractionation, where the cellulose, hemicellulose, and lignin are separated from each other. Several techniques have been developed, which can be grouped into sulfur and sulfurfree pulping from the paper and pulp industry, and bio-renery processes that aim to produce of materials, chemicals, and energy from biomass. 59 The latter may specically be designed to isolate lignin of high purity and reactivity, whereas pulping originally produced lignin as a by-or waste-product. An overview is given in Fig. 5.
The three industrial extraction methods for lignin are kra, sulte and soda pulping. In addition, organosolv pulping has been developed to extract lignin and separate the pulp bers. Commercialization of this process has not yet been done, but interest has risen recently in this technology, as organosolv pulping produces a technical lignin of high purity and reactivity. Several other methods also exist, but these are mainly used in lab-scale and are referred to as biorenery concepts, or "pretreatments". 62 In the Kra pulping process, the lignocellulosic biomass is mixed with a highly alkaline cooking liquid containing sodium hydroxide (NaOH) and sodium sulte (Na 2 S), at elevated temperatures of 150-180°C. From the resulting black liquor, kra lignin can be precipitated out by lowering the pH to around 5-7.5. 46 In the LignoBoost process, this precipitation is done by adding rst CO 2 and then sulfuric acid. Kra lignin has a sulfur content of 1-3%, is highly condensed, contains low amounts of b-O-4 linkages, and is frequently burned for energy and chemical recovery at the mills. [63][64][65] In the Kra process, lignin is fragmented through aaryl ether or b-aryl ether bonds, which results in increased phenolic OH content in the resultant lignin. 66 The sulte process is another specialized pulping technique, which utilizes a cooking liquor containing sodium, calcium, magnesium or ammonium sulte and bisulte salts. 65 Treatments are typically conducted at 120-180°C under high pressures, which gives lignosulfonates that contain 2.1-9.4% sulfur, mostly in the benzylic position. 67 Lignosulfonates are cleaved mainly through sulfonation at the a-carbon, which leads to cleavage of arylether bonds and subsequent crosslinking. 68 Both Kra and sulte black liquor typically contain signicant amounts of carbohydrate and inorganic impurities. 64 The sodaanthraquinone process is mostly applied in the paper industry on non-woody materials like sugarcane bagasse or straw. 64 The material is treated with an NaOH solution (13-16 wt%) at high pressures and temperatures of 140-170°C, where anthraquinone is added to stabilize hydrocelluloses. 64,69 The resulting soda lignin is sulfur-free and contains little hemicellulose or oxidized moieties. Organosolv lignin is produced in an extraction process using organic solvents and results in separation of dissolved and depolymerized hemicellulose, cellulose as residual solids, and lignin that can be precipitated from the cooking liquor. 65 Various solvent combinations are possible, such as ethanol/water (Alcell process) or methanol followed by methanol and NaOH and antraquinone (Organocell process), which will affect structure of the resultant materials. 51 Common for all organosolv lignins is that their structures are closer to that of natural lignins, in particular compared to Kra lignin or lignosulfonates. They are additionally sulfur-free and tend to contain less than 1% carbohydrates. 65 Several other methods of biomass processing have been developed that are targeted at lignin extraction, rather than producing cellulose bers, where lignin is as a byproduct. 64 Milled wood lignin (MWL) can be produced to closely emulate native lignin, but at the expense of process yields. 70 This method is considered gentle but time consuming, oen requiring weeks of processing, making it viable only in a laboratory setting. 58 Other techniques that aim to produce native lignin analogues include cellulolytic enzymatic lignin (CEL) and enzymatic mild acidolysis lignin (EMAL). The CEL procedure was developed as an improvement of the MWL process, where higher yields were obtained without increasing milling duration. 46 By adding an additional acidolysis step, Guerra et al. were able to again improve on the yield, while still producing lignin that closely resembled the native structure. 70 The physicochemical pretreatments aim to reduce lignin particle size through mechanical force, extrusion, or other. These techniques include steam explosion, CO 2 explosion, ammonia ber expansion (AFEX) and liquid hot water (LHW) pretreatments. 46 Ionic liquids have also been successfully used for lignin isolation. Five cations with good solubilizing abilities were identied: the imidazolium, pyridinium, ammonium and phosphonium cations, while the two large and non-coordinating anions [BF 4 ] − and [PF 6 ] − were found to disrupt dissolution of the lignin. 46 The chosen extractive method will not only affect the characteristics of the resulting lignin, but also the amount that is extracted. Several methods have been developed for the delignication of sugarcane bagasse, e.g., milling, alkaline or ionic liquid extraction, where yields of 17-32% were obtained depending on the method of choice. 71
Chemical modication
Chemical modication of technical lignins is well explored and include a huge variety of techniques (see Fig. 6 for illustrative examples). Technical lignins have been modied by a myriad of techniques, such as esterication, phenolation and ether-ication. 6 Urethanization with isocyanates has been explored towards polyurethan production, 72 and allylation of phenolic OH groups enabled Claisen rearrangement into the ortho-allyl regioisomer which is of interest for its thermoplastic properties. 73 The solubility and charge density of technical lignins can be affected by sulfomethylation or sulfonation, 17,74 and methylation of the phenolic OH groups have led to lignin with an increased resistance to self-polymerization 17 The thermal stability of lignins has also been improved by silylating the hydroxyl groups with TBDMS-Cl, and the resulting material could be incorporated into low-density polyethylene (LDPE) blends forming a hydrophobic polymer matrix. 75 Lignin is a versatile scaffold for different modications depending on the desired application. For the production of epoxy resins, epoxidation with epichlorohydrin is a common technique. This approach has also been combined with CO 2 xation resulting in cyclic carbonates being incorporated in the lignin. 76
Analysis techniques
Techniques to assess lignins and lignocellulosic biomass have long been a topic of great interest, both for quantitative and qualitative characterization. Such techniques are also critical to probe and assess chemical modications. A summary of common methods is given in Table 1.
Different techniques are oen combined to provide a better overall picture. For example, chemical modication of lignin may be probed in terms of molecular weight, i.e., by using sizeexclusion chromatography, and abundance of functional groups, as determined by FTIR or 2D NMR analysis. The technique of choice can depend on factors such as the target groups of interest, but also on availability and cost. The polydisperse nature of technical lignin can sometimes make accurate measurements difficult. This is manifested, for example, in the incomplete ionization of phenolic moieties during titration or UV spectrophotometry, as the conguration and side chains of phenolic moieties induce varying degrees of resonance stabilization.
Formulations and applications of lignin-based surfaces and coatings
The coatings and surface modications in this review most oen fulll one of two purposes. Firstly, they may seek to protect the underlying substrate, e.g., from mechanical wear, chemical attack (corrosion), or UV radiation. Secondly, they add functionality such as antioxidant, controlled substance release, or antimicrobial properties. Reduced wetting and hydrophobization are frequently mentioned for lignin, 34,77,78 which would normally fall into the second category, unless the purpose is to protect the underlying substrate from degradation by water. The different applications will be discussed more in detail in this chapter.
The end-use usually determines the manner, in which mixtures and coatings must be formulated. In principle, four different approaches can be distinguished, which are (1) application of neat lignin, (2) blends of lignin with other active Time-of-ight secondary ion mass spectrometry (ToF-SIMS) Visualization of monolignol distribution on plant sample crosssections or inert materials, (3) the blending of lignin in thermoplastic materials, and (4) the use of lignin as a precursor for synthesizing thermoset polymers. An overview of the different approaches for formulation and application is given in Fig. 7. These will be discussed in more detail further on. While surface layer or coating are usually applied onto another material, there are also implementations that include lignin as part of the overall base-matrix. Examples for the latter include lignin-derived biocarbon particles for CO 2 capture or wastewater treatment, polyurethane foams, and lignin as an internal sizing agent in pulp products. 36,43,79,80 The predominant way of using lignin in functional surfaces is by blending with other substances. Such formulations oen include agents, which are established for a particular application, e.g., starch for paper sizing or clay for controlled-release urea fertilizers. 81,82 Formulations in polymer synthesis usually draw on specic functional groups that are found in lignin, for example, the hydroxyl groups as polyol replacement in polyurethane or the aromatic moieties as phenol replacement in phenolformaldehyde resins. 18,33
Surfaces and coatings with neat lignin
Applying technical lignin by itself is a simple approach, as no co-agents are required. While some degree of adhesion to the substrate is oen given, pressure and heat may be applied in addition. Publications pertaining to this topic can be grouped into two categories, i.e., fundamental research studying the formation and properties of lignin-based lms and coatings, as well as applied research, which is usually focused on a specic end-use.
3.1.1. Fundamental research. A fundamental study was performed by Borrega et al., who prepared thin spin-coated lms from six different lignin samples in aqueous ammonium media. 83 The lms exhibited hydrophilicity with contact angles ranging from 40-60°. Despite widely diverse compositions, the solubility in water was found to be the parameter governing the properties of the thin lms. Similar results were obtained by Notley and Norgren, who found that lignin coatings prepared from diiomethane or formamide yielded even lower contact angles at about 20-30°. 34 The approach was further rened by Souza et al., who treated the spin-coated lignin lms via UV radiation or SF6 plasma treatment in addition. 84 While the UV treatment reduced the contact angle from about 90°to 40°, the plasma treatment produced superhydrophobic surfaces with contact angles exceeding 160°. The latter was also shown to induce major surface restructuring with a strong incorporation of CF x and CH x groups, which would account for the large increase in contact angle. Coatings with lignin-based nanoparticles can also be made by evaporation-induced selfassembly, whose properties and morphology are strongly governed by the drying conditions and evaporation rate. 85 An example of the obtained morphologies is shown in Fig. 8. Based on these reports, research is generally concurring on the fact that lignin by itself is not a hydrophobic substance. Harsh treatments, chemical modications, or ne-tuning of surface morphology are necessary to invoke hydrophobicity.
Spin-coated lms of milled-wood lignin have furthermore been investigated for enzyme adsorption. 86 Similarly, the adsorption of proteins on colloidal lignin has been studied by Leskinen et al., who produced protein coronas on the lignin particles via self-assembly. 87 The authors further showed that this deposition was governed by the amino acid composition of the protein, as well as environmental parameters such as the pH and ionic strength. The use of lignin for protein-adsorption is an interesting implementation, as it can provide different surface chemistries than its lignocellulosic counterparts. Still, the compatibility with in vivo environments is questionable, as biodegradation is not given here. 3.1.2. Applied research. An example for applied research would be paper and pulp products, which can be rendered less hydrophilic by surface-sizing. Application of the lignin can be done via an aqueous dispersion or alternatively by impregnation aer dissolution in a solvent. 35,88 A similar approach was used to treat beech wood with lignin nanoparticle via dipcoating, which improved the weathering resistance of the wood. 89 Such dip-coating may preserve breathability of the substrate due to the porous structure. In this context, the patent application WO2015054736A1 should be mentioned, which discloses a waterproof coating on a range of substrates including paper. 90 In this invention, the lignin is coated onto the substrate aer at least partial dissolution, followed by heat or acid treatment. However, as discussed above, the lignin by itself is not a hydrophobic material. While lignin-nanoparticles may alter the surface morphology of pulp products, an improvement in long-term water-resistance may be mostly determined by affecting mass-transfer kinetics.
Deposition of lignosulfonates on nylon has been demonstrated, which improved the ultraviolet protection ability of the fabric. 91 This deposition took place from aqueous solution and under heating, reportedly yielding a chemical bonding of lignin's OH groups to the NH groups of nylon 6. Such bonding would indeed be necessary, as the lignosulfonate would otherwise be easily washed away.
Zheng et al. coated microbrillated cellulose with Kra lignin and sulfonate Kra lignin, which promoted re retardancy of the material. 42 At last, iron-phosphated steel was rendered more resistant to corrosion aer spray coating with lignin, which was rst dissolved in DMSO and other commercial lignin-solvents. 92 While proven in the lab, these two applications must be considered with care, as unmodied lignin is a brittle material, which can limit the long-term durability of such products.
The use of chemically modied lignin
Chemical modication of lignin is frequently done to improve or enable the processability in blends with materials. In addition, chemical modication may add or alter functionalities as required in specic applications.
3.2.1. Lignin-ester derivatives. Esterication of lignin with fatty acids has been investigated by several authors. This approach bears potential, as it combined two bio-derived (macro-)molecules. The lignin contributes a backbone for graing and may improve dispersibility and adhesion of the fatty acids on lipophobic surfaces. The fatty acids can in turn render the lignin more hydrophobic, improving the water barrier, e.g., on paper substrates. To improve the reaction yield, reactive intermediates are frequently used. Several publications have studied the use of lignin esteried with fatty acid-chlorides as hydrophobization agents for paper and pulp products. 78,93 The coating affected both the surface chemistry and morphology, as illustrated in Fig. 9. The result is usually a decrease in water-vapor transmission rate (WVTR), oxygen transmission rate (OTR), and an increase in aqueous contact angle. Oxypropylation with propylene carbonate has been used as an alternative esterication approach, which yielded a similar hydrophobization and barrier effect on recycled paper. 94 A downside of oxypropylation is the use of toxic reactants, i.e., propylene oxide, and the requirement for high pressure during the reaction. While fatty acid chlorides do not need high pressures, these chemicals are highly corrosive and require the absence of water. All mentioned aspects can stand in the way of commercial implementation.
Hua et al. reacted sowood Kra lignin with ethylene carbonate to convert phenolic hydroxyl units to aliphatic ones, 95 as these are considered more reactive. The samples were further esteried with oleic acid and spin-or spray-coated onto glass, wood, and Kra pulp sheets. The authors showed that hydrophobic surfaces with contact angles ranging from 95-147°were possible. The pulp boards furthermore showed a more uniform surface aer the coating. Esterication with lauroyl chloride was also used by Gordobil et al., who studied their application as wood veneer by press-molding and dip-coating. 96 While the feasibility to treat wood and wood-based products was demonstrated on a technological level, the comparison to established treatment agents is frequently lacking. For example, linseed oil is an established wood-treatment agent, which undergoes self-polymerization in the presence of air. Papersizing agents can be based on compounds that are similar in function to fatty acids, such as resin acids or alkenyl succinic anhydride. Considering these examples, the questions arises whether modifying lignin bears an advantage over using established coatings or sizing agents. In the light of this discussion, the acid-catalyzed transesterication of lignin with linseed oil should be mentioned. 77 According to the authors, a suberin-like lignin-derivative was produced, which introduced hydrophobicity on mechanical pulp sheets, while being more compatible with the bers than linseed oil alone. The proposed process is simple in setup and reactants, which facilitates ease of implementation. In addition, the lignin is prescribed a key function, i.e., acting as a compatibilizer between the bers and the triglycerides.
At last, controlled-release fertilizers with lignin-fatty acid gra polymers have been proposed. Wei et al. crosslinked sodium lignosulfonate with epichlorohydrin, followed by esterication with lauroyl chloride. 97 Sadehi et al. reacted lignosulfonate with oxalic acid, proprionic acid, adipic acid, oleic acid, and stearic acid. 98 The modied lignin was further used to spray-coat urea granules. Both implementations showed enhanced hydrophobicity and the ability to coat urea for slower release of nitrogen. Still, it would be important to compare such approaches with established coating or blends of lignin and natural waxes or triglycerides, which do not require an elaborated synthesis.
3.2.2. Enzymatic modication. Enzymatic modication of lignin has the advantage of comparably mild reactions conditions, which can have a positive impact on process economics. On the downside, enzymes are comparably expensive and imposes higher technological demands. In addition, the variety of lignin-compatible enzymes is somewhat limited. Enzymatic treatment can induce a number of changes to lignin, such as oxidation, depolymerization, polymerization, and graing with other components. 99 For example, Mayr et al. coupled lignosulfonates with 4-[4-(triuoromethyl)phenoxy]phenol using laccase enzymes. 100 Aer successful coupling, the lignosulfonate lms exhibited reduced swelling and an increase in aqueous contact angle. Fernandez-Costas et al. performed laccase-mediated graing of Kra lignin on wood as a preservative treatment. 101 While the reaction itself was deemed a success, the desired antifungal effect was only obtained aer inclusion of additional treatment agents, such as copper. It is hence questionable if enzymatically coupled lignin poses as a competitive wood-treatment agent, as the lignin could also be used in wood-varnish formulations with a higher technological maturity.
3.2.3. Other approaches. A variety of other modications has been proposed to develop coatings from lignin. For example, Dastpak et al. reacted lignin with triethyl phosphate to spray-coat iron-phosphated steel for corrosion protection. 44 Coating of aminosilica gel with oxidated Kra lignin was performed by electrostatic deposition, which improved the adsorption capacity for dyes from wastewater. 102 Wang et al. phenolated lignosulfonate, followed by Mannich reaction with ethylene diamine and formaldehyde to produce slow-release nitrogen fertilizers. 103 The nal product exhibited elevated contact angles, however, an increased surface roughness likely also contributed to this effect, as the phenolated and aminated lignin exhibited nanoparticle structures. A different approach was taken by Behin and Sadeghi, who acetylated lignin with acetic acid to coat urea particles in a rotary drum coater. 104 The use of lignin in slow-release fertilizers can be useful, as lignin can have a soil-conditioning effect. However, biodegradability also must be considered, which can be negatively affected by chemical modication.
Self-healing elastomers were synthesized by Cui et al., who graed lignin with poly(ethylene glycol) (PEG) terminated with epoxy groups. 31 The authors concluded that a new material was developed with potential application for adhesives, but the ultimate stress was comparably low at 10-12 MPa. The material was named as a self-healing elastomer; however, the appearance and rheological properties suggest a thixotropic gel instead.
Blends of lignin with other substances
In the context of this review, the largest number of publications was found for lignin-blends with other substances. The advantage of this approach lies in the ease of implementation, exibility for later adjustments, and potential synergies with other co-agents. The lignin and other additives may be mixed right before or during surface modication, hence not requiring lengthy preparations such as the synthesis of chemically modied lignin or a pre-polymer. To facilitate better overview, this section was subdivided into several sub-section, which were distinguished by the application area or formulation-approach.
3.3.1. Cellulose bers and other wood-based products. The use of lignin in combination with cellulose bers, brils, or derivatives has received considerable attention, as this can yield all-biobased materials and coatings. For example, eucalyptus Kra lignin and cellulose acetate were combined in solution and cast onto beech-wood, which produced a protective coating similar to bark. 37 However, the authors did not determine the mechanical properties of the product, which would be important to address, as the potential brittleness could impart practical use. On the other hand, the biodegradation of lignin is indeed more challenging than that of cellulose and hemicellulose, 105 which may hence contribute to an improved resistance against certain fungi and bacteria. In addition, the ligninbased veneer may add functionalities such as water-repellence, UV-protection, and improved abrasion resistance, 106 but still a comparison with established treatment agents is lacking.
Cellulose nanobrils (CNF) and (cationic) colloidal lignin particles was cast into lms by Farooq et al., yielding improved mechanical strength as compared to the CNF alone. 107 A schematic of the proposed interactions is given in Fig. 10. The authors concluded that the lignin particles acted as lubricating and stress transferring agents, which additionally improved the barrier properties. The discussed effects could also be induced by the lignin acting as a binder, hence lling gaps and providing an overall tighter network. 36,88 Riviere et al. combined ligninnanoparticles and cationic lignin with CNF, however, the oxygen barrier and mechanical strength were lower than the CNF without added lignin. 108 This effect was likely due to a disruption of the binding between CNF networks. The polyphenolic backbone of lignin generally provides less opportunities for hydrogen bonding than compared to the cellulose macromolecule. The authors work on solvent extraction of lignin from hydrolysis residues is noteworthy, however, and the work showed promising potential for antioxidant use.
LCC were combined with hydroxyethyl cellulose, producing free-standing composite lms. 109 In this study, the addition of LCC enhanced the oxygen barrier properties and could also improve the mechanical stability and rigidity. A better effect of LCC was noted than combining lignosulfonates with hydroxyethyl cellulose alone. Synergies could hence arise from carbohydrates that are covalently bond onto the lignin.
An interesting approach was taken by Hambardzumyan et al., who Fenton's reagent to partially gra organosolv lignin onto cellulose nanocrystals. 110 The product was cast into thin lms, which showed nanostructured morphologies with increased water resistance and the ability to form selfsupported hydrogel-lms. In another publication, Hambardzumyan et al. simply mixed the cellulose nanocrystals with lignin in solution, aer which lms were cast onto quartz slides and dried by evaporation. 111 The authors found that optically transparent lms with UV-blocking ability could be produced. It was concluded that increasing the CNF concentration allowed for better dispersion of the lignin macromolecules, dislocating the p-p aromatic aggregates and hence yielding a higher extinction coefficient.
An elaborate work on lignin-starch composite lms was conducted by Baumberger. 39 The lms were produced via one of two methods: (1) powder blending of thermoplastic starch and lignin, followed by heat pressing and rapid cooling, and (2) dissolution in water or dimethyl sulfoxide followed by solventcasting and solvent evaporation. The author concluded that the lignin acted either as ller or as extender of the starch matrix, where the compatibility was favored by medium relative humidity, high amylopectin/amylose ratios, and low molecular weight lignin. Lignosulfonates formed good blends and imparted a higher extensibility onto the starch lms, likely due to benecial interactions between sulfonic and hydroxyl groups. Non-sulfonated lignin, on the other hand, improved waterresistance to a greater extent.
Three recent studies have found that incorporating lignin into a molded pulp materials can reduce the wettability of the material, as witnessed by an increase in contact angle or a decrease in water-uptake. 8,36,88 The advantage of such implementation is that high temperature and pressure will promote densication, as the lignin can ow into cavities. High densities of up to 1200 kg m −3 were reported, where the uptake of water is hindered not only by limiting mass-transport, but also by conning the swelling of cellulose bers. 88 Various researchers have included lignin in the formulation of paper-sizing agents. In one implementation, Javed et al. blended Kra lignin with starch, glycerol, and ammonium zirconium carbonate to produce self-supporting lms and paperboard coatings. 112 The mechanical lm stability was better when using ammonium zirconium carbonate as a cross-linking agent, in addition to reducing the water-transmission rate. Both the lignin and the ammonium zirconium carbonate also reduced leaching of starch when in contact with water. In a second publication, the author further developed the formulation's use in pilot trials. 81 Johansson et al. coated paperboard, aluminium foil, and glass with mixtures of latex, starch, clay, glycerol, laccase enzyme, and technical lignin. 113 The authors found that the oxygen scavenging activity was greatest for lignosulfonates, as compared to organosolv, alkali or hydrolysis lignin. This effect was explained by a greater ability of the laccase to introduce cross-linking on the lignosulfonate macromolecules. In another publication, Johansson et al. also combined lignosulfonates with styrene-butadiene latex, starch, clay, glycerol, and laccase enzyme. 114 The results showed that both active enzyme and high relative humidity were necessary for good oxygen scavenging activity. Laccase-catalyzed oxidation of lignosulfonates furthermore resulted in increased stiffness and water-resistance of the starch-based lms. Winestrand et al. prepared paperboard coatings using a mixture of latex, clay, lignosulfonates, starch, and laccase enzyme. 115 The lms showed improved contact angle with active enzyme and oxygenscavenging activity for food-packaging applications. While the results for paper-sizing with addition of lignin show promising potential, food packaging applications may impose additional requirements. For example, stability of the coatings may not be given in environments that contain both moisture and lipids. In addition, to the best of our knowledge, no study addressed the migration of sizing-agents into food. Still, the utilization of lignin as oxygen scavenger is promising, as this utilizes one of lignins inherent properties, which are found in few other biopolymers.
As an alternative to technical lignin, Dong et al. applied alkaline peroxide mechanical pulping effluent in paper-sizing, which comprised 20.1 wt% lignin and 16.5 wt% extractives based on dry matter weight. 116 Blended with starch, the effluent improved the tensile index and reduce the Cobb value of paper, while providing contact angles of 120°and higher. Such implementation can, however, also aggravate certain properties of the paper, as aging and yellowing may be promoted by the presence of acids and chromophores. Layer-by-layer self-assembly was used by Peng et al. to produce superhydrophobic paper coated with alkylated lignosulfonate and poly(allylamine hydrochloride). 40 Alternatively, Lit et al. deposited such layers on cellulose bers by combining lignosulfonates with the divalent copper cation instead of a polycation. 41 A similar effect on surface morphology was noted, while contact angles within the hydrophobic regime could be achieved. The utilization of lignosulfonate-polycation assemblies for cellulose hydrophobization is somewhat counterintuitive, since polyelectrolyte complexes tend to be hydrophilic and can swell in water. The long-term stability of such coatings in water is hence questionable, still for short contact times the modulation of surface roughness and chemistry can be benecial. Solvent casting was employed by Wu et al., using ionic liquids to dissolve cellulose, starch, and lignin. 117 The biopolymers were coagulated by addition of the non-solvent water, further being processed into exible amorphous lms. The process appears similar to the production of cellulose regenerates. Utilizing other biopolymers than cellulose, i.e., lignin, hemicellulose, and starch, is an interesting approach for netuning the desired lm properties.
Zhao et al. used evaporation induced self-assembly of lignin nanoparticles and CNF, which were subsequently oxidized at 250°C and then carbonized at 600-900°C. 79 These nano-and micro-sized particles could be used for CO 2 adsorption, where synergistic effects between the CNF and lignin nanoparticles were noted. An illustration of the particles is shown in Fig. 11.
Agrochemical formulations with lignin-based coatings predominantly involve fertilizer formulations, i.e., for controlled release of nutrients. The lignin can be part of a coating, which then acts as a mass-transfer barrier that delays the dissolution of nutrients. 82,118,119 The focus is usually on urea as nitrogen fertilizer or calcium phosphate as superphosphate fertilizers. 82,118-120 An advantage of using lignin, apart from being biodegradable and water-insoluble, is the potential function as soil amendment. 121 Two approaches can generally be distinguished, based on either the use of neat or chemically modied lignin. Properties such as water-permeability and nitrogen or phosphor release can be positively affected; however, chemical modication may impair biodegradation. With that said, the work of Fertahi et al. should be noted, who coated triple superphosphate fertilizers with mixtures of carrageenan, PEG, and lignin. 118 The latter had been obtained from alkali pulping of olive pomace. The three mentioned coating-materials are in principle all biodegradable. Blending lignin with carrageenan or PEG improved the mechanical stability of the lms compared to lignin alone, while also increasing the swelling of the coatings. Similar blends were studied by Mulder et al., who found that glycerol or polyols such as PEG 400 could improve the lm forming properties. 120 The water resistance, on the other hand, was improved by using high molecular weight PEG or crosslinking agents such as Acronal or Styronal (commercial name). On the downside, the biodegradability will be negatively affected by such crosslinking agents, especially acrylates or styrene-based chemistries.
Chemical modication of lignin for coating of superphosphate fertilizers was also conducted by Rotondo et al., 119 where the technical lignin was either hydroxymethylated or acetylated. Apart from utilizing toxic chemicals in the synthesis, these modications alone do not pose as a detriment to biodegradability. However, the Rotondo et al. also synthesized phenolformaldehyde resin to coat the fertilizer cores, which could be troubling, as the authors basically suggested adding plastics to the soil. Zhang et al. furthermore modied lignin by graing quaternary ammonium groups onto it. 82 While the quaternary ammonium may conveniently bind anions and add nitrogen to the soil, some of its degradation products are highly toxic and hence concerning, unless the goal is to add biocides to the soil. A similar approach was done by Li et al., 14 who synthesized multifunctional fertilizers. First, alkali lignin and NH 4 ZnPO 4 were mixed and dissolved to produce fertilizer cores, which were further coated with cellulose acetate butyrate and liquid paraffin. A second coating was then applied as a superabsorbent, which was based on alkali lignin graed with poly(acrylic acid) in a blend with attapulgite. Both the paraffin and poly(acrylic acid) gra should have been avoided due to environmental incompatibilities.
At last, a different application was explored by Nguyen et al., i.e., the encapsulation of photo-liable compounds with a lignin coating layer. 122 In particular, the authors emulsied the insecticide deltamethrin in a corn oil nanoemulsion with polysorbate 80 and soybean lectin as emulsier. The droplets were further coated with chitosan and lignosulfonate. The lignin contributed hereby to both the UV-protection of the emulsied insecticide, as well as to its controlled release. This approach is positive in several regards, as only biobased agents were used in the formulation, the lignosulfonates were not chemically modied, and the application drew on some of lignin's inherent properties.
3.3.2. Biomaterials and biomedical applications. A biomaterial, i.e., a material intended for use in or on the human body, must comply with certain requirements. This implies that the material should be biocompatible and should not cause an unacceptable effect on the human body. 123 However, the denition of biocompatibility has been debated in the literature. In addition to a modied denition of "biocompatibility", Ratner proposed "biotolerability" to describe biomaterials in medicine. 124 Biocompatibility was dened by "the ability of a material to locally trigger and guide nonbrotic wound healing, reconstruction and tissue integration", while biotolerability was proposed to be "the ability of a material to reside in the body for long periods of time with only low degrees of inammatory reactions". Novel biomaterials developed for biomedical applications could be dened by these terms with the target of limited brotic reactions, 125 and lignin may be within this group of biomaterials. Lignin is a material derived from biobased resources, with attractive properties for biomedical use, primarily with antioxidant and antibacterial characteristics. The antioxidant property of lignin is dependent on the phenolic hydroxy groups capable of free-radical scavenging. The antimicrobial effect is also caused by the phenolic compounds. 126 As expected, the antibacterial, antioxidant and cytotoxic properties may also depend on the type of lignin. 127,128 For example, kra lignin has been found to have less antibacterial properties compared to organosolv lignin due to the larger methoxyl content in organosolv lignins. 127 Several authors have attempted to draw on lignin's antibacterial and antiviral properties, which could be useful in surfaces for biomaterials and biomedical applications. Antimicrobial coatings were, for example, prepared by Lintinen et al. via deprotonation and ion exchange with silver, 129 as shown in Fig. 12. Jankovic et al. also developed such surfaces by ashfreezing a dispersion of organosolv lignin and hydroxyapatite with or without incorporated silver. 130 Aer freezing, the samples were dried by cryogenic multipulse laser irradiation, producing a non-cytotoxic composite, which were further tested on their inhibitory activity. A similar approach was taken by Eraković et al. 131 The authors prepared silver doped hydroxyapatite powder, which was then suspended in ethanol with organosolv lignin and coated via electrophoretic deposition onto titanium. 131 This composite showed sufficient release of silver to impose antimicrobial effect, while posing non-toxic for healthy immunocompetent peripheral blood mononuclear cells at the applied concentrations. However, the use of silver has caused some environmental concerns that should be addressed. 132 As an alternative, copper has been reported with better antibacterial effect than silver, which to our knowledge is currently unexplored in antimicrobial lignin complexes. 133 Lignin-titanium dioxide nanocomposites were prepared by precipitation from solution and tested for their antimicrobial and UV-blocking properties. 134 The authors concluded that the lignin could function as the sole capping and stabilization agent for the titanium dioxide nanocomposites. Better performance of the nanocomposites for antioxidant, UV-shielding, and antimicrobial properties was reported, as compared to the lignin or titanium dioxide alone.
Kra lignin and oxidized Kra lignin were processed into colloidal lignin particles and coated with b-casein, which was further cross-linked. 29 This work aimed to produce biomaterials and bio-adhesives, where the colloidal lignin acted as the scaffold intended for the synthesis of bio-compatible particles. However, no assessment of biocompatibility of the generated complexes was performed. Hence, the question remains whether this approach may be suitable for biomedical applications.
According to Dominguez-Robles, there are various additional biomedical applications, in which lignin could be promising, e.g., as hydrogels, nanoparticles and nanotubes, for wound healing and tissue engineering. 135 The interest in lignin for biomedical applications was also emphasized by the increasing amount of publications related to lignin applied as a functional material for tissue engineering, drug delivery and pharmaceutical use. 135,136 However, the reported studies on lignin for biomedical applications is still limited and various challenges will need to be overcome to advance in this area. These are especially regarding relevant assessment of lignins toxicological prole and biocompatibility. Not to mention the large variability of lignins when it comes to the source of lignin, the fractionation processes and posterior modication, which may affect the chemical structure, homogeneity, and purity.
3.3.3. Wastewater treatment. Different researchers have formulated lignin-based materials, which were designed for the purication of dye containing wastewater. 16,102,137 Suitability for such applications is principally derived from the chemical similarities, which exist between lignin and many dyes, i.e., an abundance of heteroatoms and aromatic moieties. Such adsorbents can be produced via deposition on silica gel, 102 coating onto carbon particles, 16 carbonization, 43 sorption and co-precipitation. 138 Development of such materials is generally positive, as it draws on the unique composition of lignin.
A different technological approach within the same area would be membranes. Layer-by-layer assembly is a frequently used technique, in which polyanionic lignosulfonates or sulfonated Kra lignin are combined with a polycation. Multiple bilayers are made by stepwise application of the polyelectrolytes, e.g., by immersion in a solution of polyanion rst, followed by drying and immersion in a solution of polycation. This approach was used by Shamaei et al. as an antifouling coating for membranes, improving the treatment of oily wastewater. 139 Gu et al. used lignosulfonates and polyethyleneimine on a polysulfone membrane, which successfully repelled the adsorption of proteins. 15 While the preparation is straight-forward, the long-term stability of such coatings also must be demonstrated. Both the lignin and (poly-)cation were water-soluble, so the coating would be pointless if washed away with the retentate.
3.3.4. Packaging applications. Packaging applications can benet from lignin-containing surfaces in various regards. Improvements in the water-resistance, water and oxygen barrier, and mechanical strength of cellulose-based substrates have been reported. 35,107,108,140 A more detailed summary of these materials is given in Section 3.3.1. The lignin may also serve as an oxygen scavenger. To implement this, several authors have formulated coatings that include both lignin and laccase enzyme. [113][114][115] At last, the antibacterial and UV-shielding properties of lignin have been mentioned as benecial contributors. 83,107,110 While the studies demonstrate feasibility on a technological level, there are other factors that must be considered as well. Long-term stability and migration of the coatings is rarely addressed, despite this being a crucial parameter in food packaging. In other words, one must be sure that no detrimental substances are transferred to the food. Some foods release both water and fat, for which lignin is in theory a good match, as it is soluble in neither. Packaging of non-foods generally poses less harsh requirements. The requirements on price per volume are greater; however, the pricing of technical lignin should be competitive.
3.3.5. UV-protection. The polyaromatic backbone of lignin provides extended absorption at sub-visible wavelengths of light. UV-shielding applications hence draw on one of the intrinsic properties of lignin. One example would be the development of natural sunscreens via hydroxylation of titanium oxide particles, during which lignosulfonates were added. 141 Unsurprisingly, it was concluded that the lignin enhanced the UV-blocking effect of the titanium oxide particles. Another publication explored sunscreens, where nanoparticlesize lignin was added to commercial lotion formulations. 142 Fig. 12 Simplified schematic of the production of silver-doped lignin nanoparticles. 129 The UV absorbance was improved by both nanoparticle formation and pretreatment with the CatLignin process. The latter was explained by partial demethylation and boosting of chromophoric moieties. While lignin may conveniently replace fossil-based and non-biodegradable UV actives, other factors also need to be tested for such a product to become feasible, e.g., non-hazardousness, safety for human contact, and skin tolerance.
Other applications that can prot from this property include UV-protective clothing, 91 packaging materials, 83,107 agrochemical formulations, 122 and personal protective equipment. 134 It should be mentioned, however, that enhanced UV absorbance is not always benecial, as it can also lead to faster degradation of the lignin-containing materials. 12
Lignin as part of thermoplastic materials
Lignin is a thermoplastic material with glass-transition temperatures in the range of 110-190°C. 24 As such, it is straight forward to use technical lignin as a ller material in, e.g., thermoplastics or bitumen admixtures. 38 Potential advantages of lignin in thermoplastic polymer coatings have been discussed by Parit and Jiang, 21 i.e., by adding UV-blocking and antioxidant activity as required in packaging applications. In general, the addition of lignin in thermoplastics can increase stiffness, but at the expense of extensibility. 38 Chemical modi-cation (alkylation) may be required to improve both tensile stiffness and strength of olenic polymers. 26 On the other hand, lignin's amphiphilic make-up can impart advantages, e.g., by improving the adhesion of polypropylene coatings. 143 Another example would be the use of lignin in biocomposites from polypropylene and coir bers. 45 While no signicant effect on tensile strength of the composites was found, adding lignin reportedly delayed the thermal decomposition.
Coatings with polymers are frequently used to protect the mechanical integrity of the underlying substrate. For added lignin to be advantageous, the mechanical characteristics of the polymer blend must hence be improved. While publications in this area frequently focus on the added functionalities, some also reported improvements in the mechanical strength of the coatings. 12,26 On the downside, the addition of lignin is oen limited to low ratios and chemical modication may be required. 12 These factors can limit the overall sustainability gain, which biopolymers have over fossil-based polymers and llers.
At last, slow-release fertilizers can be prepared from thermoplastics and lignin. Li et al. blended poly(lactic acid) (PLA) with Kra lignin samples, some of which had been chemically modied by esterication or Mannich reaction. 144 Urea particles were then coated by solvent casting or dip-coating, where the alkylated lignin yielded improved barrier properties and better compatibility with PLA. Microscope images of the coated urea particles are shown in Fig. 13. While lignin-PLA blends can potentially be more biodegradable than lignin-based resins, the biodegradation in soil may still be insufficient. Our recommendation is hence to favor blends of unmodied lignin with biopolymers, such as starch, cellulose, or carrageenan, as this will not contribute to microplastics pollution.
Lignin as a precursor to thermosets
The four most common applications of lignin in thermosets are polyurethane, epoxide resins, phenolic resins, and polyesters. 11 Unsurprisingly, formulations of lignin-based thermoset coatings are oen derived from such chemistries. The lignin can also be rendered compatible with other formulations, e.g., with polyacrylates by graing with methacrylic acid. 145 Such graing reactions are indeed instrumental to overcome some of the traditional challenges of lignin; 146 however, they can also be accompanied by unwanted side-effects, such as poor biodegradability.
3.5.1. Lignin-based polyurethane coatings. Lignin utilization in polyurethanes is done as polyol replacement, where lignin's hydroxyl groups are reacted with isocyanate groups acting as cross-linker. The lignin may even be soluble in the polyol, which aids straight-forward substitution. Lignin derivatization to improve the compatibility and performance includes hydroxyalkylation (e.g., with propylene oxide, propylene carbonate, or epichlorohydrin), esterication with unsaturated fatty acids, methylolation, and demethylation. 28 Chen et al. blended alkali lignin and PEG, which were further polymerized with hexamethylene diisocyanate in presence of silica as leveling agent. 147 Experiments were limited to 60 wt% lignin, as higher ratios yielded an embrittlement. The mixtures were processed into lms, which showed some potential for biodegradation. These results indeed corroborated by other authors, which also state that lignin incorporation in polyurethanes yielded a limited degree of biodegradability. 148 A different approach was made by Rahman et al., 149 who synthesized waterborne polyurethane adhesives with aminated lignin. The tensile strength and Young's modulus improved with increasing ratios of aminated lignin, which could be due to an increased cross-linking density. Still, the overall percentages of lignin in the coatings were comparably low, as the authors added only between 0-6.5 mol% lignin. It is curious to note that the authors proclaimed better storage stability of aminated lignin dispersions, yet only the weathering resistance of the nal coating was measured.
Some of the challenges with lignin in polyurethane materials include reactivity and a high cross-linking density. Due to the latter, polyurethane formulations are frequently limited to low percentages of lignin, typically 20-30 wt% at max, as higher ratios can yield brittle and low-strength materials. 150 One approach is to increase the degree of substitution is depolymerization of lignin, but other chemical modications or fractionations may be equally applicable. In this context, the work by Klein et al. should be mentioned, who reported polyurethane coatings with lignin ratios of up to 80%. 151 A comparably low curing temperature of 35°C was used, which could also entail incomplete reaction. Curiously, there is no data on the mechanical strength of the lms. In addition, the authors measurements of hydroxyl groups via ISO 14900 and 31 p-NMR are widely divergent. In two other publications by the same author, the antioxidant properties and antimicrobial effect of such lms were studied. 13,152 In a different study, methyltetrahydrofuran was used to extract the low-molecular weight portion from Kra lignin. 153 The authors used between 70 to 90 wt% lignin in the nal formulation at NCO/OH molar ratios of 0.16-0.04. While providing a good adhesive strength, the lms elastic modulus is within the same range of the fractionated lignin, whereas no information on the material strength was provided. It would thus appear that the elevated crosslinking density may be circumvented, i.e., simply by reacting only a sub-fraction of the available hydroxyl groups of lignin. Still, it has yet to be demonstrated that such coatings are also competitive in mechanical strength and abrasion resistance.
3.5.2. Lignin-based phenolic resin coatings. Lignin may also be used as a phenol-substituent in phenol-formaldehyde resins. 20 This approach was utilized by Park et al. to produce cardboard composites by spray coating. 154 The authors reported that lignin purication by solvent extraction yielded better results than by acid precipitation. Substituting with 20-40 wt% lignin surprisingly accelerated the curing kinetics, compared to the lignin-free case. The coated cardboard showed lower water absorption; however, the contact angle was also lower, which could be due to a change in surface chemistry and morphology. It would be interesting to study even higher degrees of substitution and to delineate with the mechanical strength. Still, it appears that coatings with lignin-phenol-formaldehyde have so far been aimed at providing a water-barrier. For example, the work by Rotondo et al. coated superphosphate fertilizers with hydroxymethylated lignin resins, 119 which signicantly slowed the phosphate release.
3.5.3. Lignin-based epoxy resin coatings. Similar to lignincontaining polyurethanes, epoxy resins also target a reaction with the hydroxyl groups. In analogy to that, chemical conditioning such as depolymerization can potentially improve the nal material. For example, Ferdosian et al. tested different ratios of depolymerized Kra or organosolv lignin in conventional epoxy resin formulations. 155 The authors showed that large amounts of lignin retarded the curing process particularly in the late stage of curing. At the right dosage (25%), the ligninbased epoxy exhibited better mechanical properties than the neat formulation, while improving adhesion on stainless steel. Both effects appear plausible considering lignins macromolecular and polydisperse composition. In this context, a recent patent by Akzo Nobel should also be mentioned, which describes the use of lignin and potential epoxy crosslinker for functional coatings. 156 A different approach was chosen by Hao et al., who carboxylated Kra lignin rst, followed by its reaction with PEG-epoxy. 157 The coatings possessed a lignin content of 47%. In addition, the self-healing ability was demonstrated by transesterication reaction in presence of zinc acetylacetonate catalyst.
Crosslinking of nanoparticles is an interesting approach, as the coagulation to nanoparticles may favor a different ratio of functional groups at the surface than in the bulk lignin. In addition, this approach can produce composite materials, which exhibit different characteristics than a homogeneous polymer. For instance, Henn et al. combined ligninnanoparticles with an epoxy resin, i.e., glycerol diglycidyl ether, to treat wood surfaces. 106 The coatings showed nanostructured morphology, which still preserved the breathability of the wood, hence drawing advantage from lignin's nanoparticle formation. Zou et al. coprecipitated sowood Kra lignin together with bisphenol-a-diglycidyl ether to produce hybrid nanoparticles. 158 The particles were either cured in dispersion for further cationization or directly tested in their function as wood adhesives. The use of lignin-based nanoparticles in curable epoxy resins is hence promising, as it can generate new functionalities, but the maturity of this technology still needs to be advanced.
3.5.4. Lignin-based polyester coatings. While the use of lignin in polyester coatings is technologically feasible, few publications were found to this topic. One reason for this could be the slow reaction kinetics of direct esterication. Coupled with lignin's structure and chemistry, polyester-based coatings would be less straight forward than polyurethanes or epoxy resins, which involve highly reactive coupling agents. As discussed previously, chemical modication of lignin may improve this circumstance, for example by depolymerization or introduction of new reactive sites. As such, oxidative depolymerization and subsequent membrane fractionation has been suggested to produce a raw material, which can be utilized in subsequent polyester coatings. 159 A second example would be solvent-fractionated lignin, which has been carboxylated by esterication with succinic anhydride. 19 As illustrated in Fig. 14, the modied lignin reportedly underwent self-polymerization, where the graed carboxyl groups reacted with residual hydroxyl groups on the lignin. Development in this area has potential, as polyesters tend to exhibit better biodegradability than polyolens.
3.5.5. Lignin-based acrylate coatings. Lignin-based acrylates rely on the graing of acrylate moieties, as these are not inherent to lignin. For example, methacrylation of Kra lignin was done to produce UV-curable coatings. 145 The authors concluded that incorporating lignin into the formulation improved thermal stability, cure percentage, and adhesive performance. An elaborate study on aging of lignin-containing polymer materials was conducted by Goliszek et al. 30 The authors graed Kra lignin with methacrylic anhydride and further polymerized the product with styrene or methyl methacrylate. Low amounts of lignin (1-5%) showed incorporation into the network, whereas higher concentrations showed a plasticizing and more heterogeneous effect. High lignin loadings also enhanced the detrimental effects of aging, which may seem counterintuitive, as other reports frequently state a UV-protective ability of lignin. Still, increase absorption of UV light can also amplify the detrimental effects thereof. A combination of epoxy and acrylate was used to develop dualcured coatings with organosolv lignin. 160 The lignin was rst reacted with epoxy resin and subsequently with acrylate to form a prepolymer. In a second step, the prepolymer was mixed with initiators and diluent to be coated onto tinplate substrates. All in all, lignin-based acrylate coatings appear to have reached sufficient technological maturity, yet the advantage of adding lignin is sometimes unclear.
3.5.6. Other approaches. Szabo et al. graed Kra lignin with p-toluenesulfonyl chloride, whose product was then grafted onto carbon bers. 161 The results suggested an improved shear tolerance of the modied carbon bers in epoxy or cellulose-based composites. Silylation was furthermore performed of Kra lignin, which was further co-polymerized with polyacrylonitrile. 162 The authors concluded that silylation improved the compatibility for surface coatings and lms.
Lignin in technical applicationsa critical commentary
The development high-value products from lignin has been a topic of great interest for some years, and is still gaining popularity. 163 Added-value applications are being pursued, ranging from asphalt emulsiers or rubber reinforcing agents, to the production of aromatic compounds via thermochemical conversion. 164 The question arises, however, if including lignin in a coating can really lead to a better overall product? Comparison with state-of-the-art formulations is frequently omitted, benchmarking lignin-based solutions only to a reference case with low performance. "Attributing value to waste" is one of the primary motivations behind lignin-oriented research. For example, bioethanol production from lignocellulose biomass oen gives rise to a lignin-rich byproduct. The overall economics of such bioreneries could be improved if the ligninrich residue could be marketed at a value. Still, to establish a new product on the market, this product also needs to compete with existing solutions in terms of performance and price. This point is oen overlooked in literature, in particular concerning lignin-based surfaces and coatings.
Harnessing lignin's inherent properties is key, as this can create synergies and yield an advantage over other biopolymers. It comes to no surprise that the dominant use of technical lignin is in water-soluble surfactants, as polydispersity is a key feature here. 3 As has been pointed out, the performance of surfactant-blends oen outperforms single surfactants in realworld applications, since the mixture can preserve its function over a wider range of environmental conditions. A second example of key properties would be lignin's polyphenolic structure, which is not found in common polysaccharides. Lignin has hence been investigated as a UV blocking additive in e.g., sunscreen products or packaging. 165 However, compatibility of the resulting product with human-or food-contact is addressed insufficiently by many authors. A similar situation was given in case of lignin as antioxidant additive in cosmetics, 166 where the dark color and smell may limit the nal use.
Compared to cellulose or hemicellulose, lignin has a higher carbon-to-oxygen ratio. Due to this and its polyaromatic structure, it would indeed be a better raw material for producing carbonaceous materials. Research on activated carbon, graphitic carbon, and carbon bers has indeed being conducted. 167 A key step toward lignin-based carbon ber production was identied as removal of b-O-aryl ether bonds. 60 In addition, the charring ability of lignin has been proposed as a benet in re retardants. 168 Still, lignin-based re retardants oen use chemical modications, such as phosphorylation. If chemical modication is necessary, the question arises if such chemistries really need to be based on lignin, since other biomacromolecules may possess a higher reactivity and number of reactive sites.
Lignin can be readily precipitated from solution into nanoparticles and nanospheres. Various applications have been suggested based on this, such as functional colloids and composite materials with uses in ame retardancy, food packaging, agriculture, energy storage, and the biomedical eld. 169 A more specic example would be nanoparticulated lignin in poly(vinyl alcohol) lms with increased UV absorption. 170 While this technology appears straight-forward, its nal use has yet to be proven.
At last, technical lignin is usually thermoplastic, exhibiting glass-transition temperatures in the range of 110-190°C. 24 The use of lignin as polymeric ller or in thermoplastic blends is hence promising. In some cases, chemical modication may be necessary to improve compatibility, e.g., with polyolens; 26 however, the use as simple ller material would not necessitate modication. Additional strength could also be derived from added cellulose bers, which could potentially benet from added lignin as compatibilizer.
In summary, one needs to build on the inherent properties of lignin, such as polydispersity, poly-aromaticity, a higher C/O ratio than for polysaccharides, and thermoplasticity. Only by utilizing characteristics that set lignin apart from other biopolymers, can solutions be developed that are innovative and market competitive. Chemical modication is a useful tool for tailoring; however, each processing step will add an economical and environmental cost to the nal product. In other words, the simplest approach is oen the bestsomething that is frequently disregarded when developing complex synthesis protocols for lignin.
Summary and conclusion
Functional surfaces and coatings can be formulated in a variety of ways, which includes the use of neat, chemically modied, blended, and cross-linked lignin. This review provides a summary of the current developments in research, where focus was placed on the formulation and nal applications.
Overall, coatings with neat lignin or blends of lignin with other active ingredients appear the most practical. Reduced wetting is hereby achieved, as the lignin can alter the surface morphology, hinder mass-transport, and conne swelling of enclosed bers. The lignin itself is not considered a hydrophobic material, because the contact angle is usually below 90°. On the other hand, hydrophobicity can be induced by plasma surface treatment, blending with other agents, or chemical modication. For the latter, graing or esterication of lignin with alkyl-containing moieties is a frequently taken approach. Chemical modication may also be used to improve the compatibility with olenic thermoplastics. Addition of lignin can ne-tune the characteristics thermoplastics and improve adhesion to other materials. On the downside, embrittlement frequently limits this technology to low percentages of lignin. Thermoset coatings with lignin can be based on chemistries such as polyurethanes, phenolic resins, epoxy resins, polyesters, and polyacrylates. Various synthesis routes have been proposed in literature, which can benet to some degree of the inherent properties of lignin.
Both the formulation and processing depend on the nal application of the coating or surface functionalization. The use of lignin with cellulose-based substrates is frequently suggested, as this can yield all-biobased materials. Lignin can improve the resistance to wetting of paper and pulp products. In addition, it can add UV protection and oxygen-scavenging capabilities in packaging applications. Lignin-based surfaces have also been proposed for adsorbents for wastewater treatment, wood veneers, and corrosion inhibitors for steel. The biomedical eld has also explored lignin-based biomaterials, which draw on its potential antimicrobial properties. A great number of publications also reports on agricultural uses, where a lignin-based coating may account for slower release of fertilizer. At last, general-purpose polymer coatings can be tailored via the inclusion of lignin, and the resistance to fouling of membranes can be improved. All mentioned applications were discussed critically in this review, placing emphasis on the benet that adding lignin may provide. While introduction of functionalities may be possible, publications frequently do not compare to a well-performing reference case, hence limiting the assessment of the true potential. In addition, the ratio of lignin in thermoset coatings is usually quite low. Higher levels may be achieved aer chemical modication, but such synthesis can also have negative implications on the economic and environmental cost of the nal product.
In conclusion, the advancement of functional surfaces and coatings with lignin has yielded promising results. However, there also must be a benet of using lignin compared to other biopolymers or existing petrochemical solutions. Only by harnessing lignin's inherent properties, can solutions be developed that are competitive and value-creating. These properties include lignin's polyphenolic structure, a higher C/O ratio than, e.g., polysaccharide biopolymers, its ability to self-associate into nano-aggregates, and its thermoplasticity. These properties are utilized in some of the reviewed literature, hence providing the ground for new and promising technology in the future.
Conflicts of interest
The authors declare no conict of interest. | 15,757 | 2023-04-17T00:00:00.000 | [
"Materials Science",
"Chemistry"
] |
Measuring the Sustainability of Water Plans in Inter-Regional Spanish River Basins
This paper analyses and compares the sustainability of the water plans in the Spanish River basins according to the objectives of the Water Framework Directive. Even though the concept of sustainability has been traditionally associated with the triple bottom line framework, composed of economic, environmental, and social dimensions, in this paper sustainability has been enlarged by including governance aspects. Two multicriteria decision analysis approaches are proposed to aggregate the sustainability dimensions. Results show that the environmental dimension plays the most important role in the whole sustainability (40%) of water basins, followed by both economic and social criteria (25%). By contrast, the dimension of governance is the least important for sustainability (11%). A classification of the Spanish basins according to their sustainability indicates that the water agency with the highest sustainability is Western Cantabrian, followed by Eastern Cantabrian and Tagus. By contrast, Minho-Sil, Jucar, and Douro are the least sustainable.
Introduction
A modern water management system must be not only effectively provide water security, but also be sustainable, combining economic progress with social development and the conservation of habitats and ecosystems.The Water Framework Directive (WFD)-Directive 2000/60/EC [1]-and the introduction of river basin districts may help to fulfil such objectives.The environmental objectives are defined in Article 4-the core article-of the WFD, aiming to achieve a sustainable water management system on the basis of a high level of protection of the aquatic environment.Achieving such sustainability requires some boundaries, as through the definition of river basin districts.These districts are hydrological units selected on the basis of the spatial catchment area of the river, and not depending on any administrative or political boundary.
Spain has a wide tradition in water management through agencies called basin water agencies (BWAs), which have been operative since 1920.BWAs play an important role in water planning, resource management and land use, protection of the public water domain, management of water use rights, water quality control, planning and execution of new water infrastructure, dam safety programs, etc.
The WFD sets out clear deadlines for each of the requirements as can be consulted in [2].Within such milestones, water administration agencies from each member state have to report each issue to the European Commission on time, with 2015 being a relevant date in the WFD implementation.Thus, the first management plan (River Basin Management Plan 2009-2015) has been finalised and the second management plan (River Basin Management Plan 2015-2021) and the First Flood Risk Management Plan have just started.
Since the first River Basin Management Plan has finalised quite recently, it is of particular interest analysing the sustainability of Spanish BWAs in water management and their contribution to fulfil the WFD objectives.In this sense [3], it is recommended to strengthen the links between water planners and academics in order to improve future revisions of the River Basin Management Plans.More concretely, it is proposed that the assessment and the selection of methods were done jointly in order to design and implement new water policies in Spain.In addition, the role of BWAs is highlighted as potential coordinators of such evidence-based policy-making.
Considering this framework, the objective of this paper is to analyse and compare the sustainability of water plans in the Spanish river basins according to the objectives of the WFD.In addition, dimensions that may be enhanced to improve the basins' sustainability are analysed, being this analysis a starting point to improve water management sustainability in the following management plans.
After this brief introduction, Section 2 reviews some of the previous works on assessing sustainability by using multicriteria decision-making methods.In the Section 3 the case study is presented.Sections 4 and 5 include the methods used to assess the sustainability of water plans and results.Finally, Section 6 concludes the paper.
Literature Review
Sustainability has been used as a criterion to analyse water resource management quite often in the literature.In order to assess such sustainability, multicriteria decision analysis (MCDA) has been commonly used since the 1970s.It is possible to find a considerable number of applications related to water management on different river basins.Thus, Hajkowicz and Collins [4] reviewed 113 studies that used MCDA for analysing water resource management.They found that these methods are of relevance since the annual publication rate has been steadily growing since the late 1980s.The majority of applications are related to the fields of water policy, supply planning and the evaluation of major infrastructure.
Regarding the evaluation of different water management strategies, it is worth highlighting [5], in which a three-step process is developed to evaluate different water management strategies in a river basin in Brazil.The analytical hierarchy process (AHP) was used to help identifying the groups of interest, articulate their preferences and find the dominant preferences of the community within the river basin, as well as to get a consistent evaluation of management strategies.In addition, Martín-Ortega et al. [6] performed a multicriteria analysis of water management under the WFD.They selected some measures for a sustainable and socially accepted water management in the Guadalquivir river basin in order to test the applicability of the AHP in the new WFD context.A survey was carried out in the context of a future enlargement of a reservoir.Results suggest that the AHP is an adequate tool for the WFD purposes and a useful complement for the cost-effectiveness analysis.
There are other works that analyse different water management strategies to address concrete problems in some areas.In this line, Jaber and Mohsen [7] proposed a support system for decision evaluation and selection of nonconventional water resources in the river Jordan.They include desalination of saline and seawater, treated waste water, importation of water across boundaries, and water harvesting.Using AHP, they found that water desalination was ranked the highest, being the most promising resource, followed by water harvesting.Freiras and Magrini [8] presented a selection of sustainable water management strategies for a mining complex located in the southeast region of Brazil, which concentrates most of the country's population and the mining facilities, but a small portion of the water available in the territory.A stepwise process for incorporating environmental risks into the decision-making using a multicriteria approach and AHP was developed and applied in this case study.Da Cruz and Marques [9] used the MACBETH multicriteria model to determine sustainability level of urban water cycle services (UWCS).They show that it is possible to assess both global sustainability and performance of UWCS in each particular dimension of the sustainability, taking into account the values and judgments of the legitimate stakeholders.Recently, Marques et al. [10] discussed the concept of sustainable water services and suggested using MACBETH multicriteria method to assess it.They illustrated a real-world application of the method in urban water services (UWSs) in Portugal and used a simple additive aggregation model to calculate the sustainability score of each UWS.Finally, the work of [11] implemented MCDA in an irrigated area in Spain.They found six factors to define alternative strategies (policies) that could change the planning scenario of the irrigation system: irrigation system, water pricing, water allocation, crop distribution, fertiliser use and subsidies received.Five different MCDA techniques were used and results indicated that all techniques choose the same alternative strategy as the preferred one: sprinkler irrigation system, with no change in the existing water pricing and water allocation schemes, growing wheat and barley as the main crops with organic fertilisers and without any change in the subsidy policy.
Case Study
The main Spanish BWAs exceed a single region, being called as inter-regional water agencies (IRWAs).We can distinguish ten different IRWAs in Spain, that is, Western and Eastern Cantabrian (Cantábrico oriental y occidental), Minho-Sil (Miño-Sil), Douro (Duero), Tagus (Tajo), Guadiana, Guadalquivir, Segura, Jucar, and Ebro.In addition, there are minor basins comprised in one single region, and called intra-regional water agencies, such as Galician Coast, Andalusian Mediterranean Basin, Tinto, Odiel and Piedras, Guadalete and Barbate, inland basins of Catalonia, Balearic Islands, and Canary Islands.The location of BWAs is showed in Figure 1.
Water 2016, 8, 342 3 of 14 particular dimension of the sustainability, taking into account the values and judgments of the legitimate stakeholders.Recently, Marques et al. [10] discussed the concept of sustainable water services and suggested using MACBETH multicriteria method to assess it.They illustrated a real-world application of the method in urban water services (UWSs) in Portugal and used a simple additive aggregation model to calculate the sustainability score of each UWS.Finally, the work of [11] implemented MCDA in an irrigated area in Spain.They found six factors to define alternative strategies (policies) that could change the planning scenario of the irrigation system: irrigation system, water pricing, water allocation, crop distribution, fertiliser use and subsidies received.Five different MCDA techniques were used and results indicated that all techniques choose the same alternative strategy as the preferred one: sprinkler irrigation system, with no change in the existing water pricing and water allocation schemes, growing wheat and barley as the main crops with organic fertilisers and without any change in the subsidy policy.
Case Study
The main Spanish BWAs exceed a single region, being called as inter-regional water agencies (IRWAs).We can distinguish ten different IRWAs in Spain, that is, Western and Eastern Cantabrian (Cantábrico oriental y occidental), Minho-Sil (Miño-Sil), Douro (Duero), Tagus (Tajo), Guadiana, Guadalquivir, Segura, Jucar, and Ebro.In addition, there are minor basins comprised in one single region, and called intra-regional water agencies, such as Galician Coast, Andalusian Mediterranean Basin, Tinto, Odiel and Piedras, Guadalete and Barbate, inland basins of Catalonia, Balearic Islands, and Canary Islands.The location of BWAs is showed in Figure 1.This paper is focused on the analysis of the sustainability of integral water management in IRWAs, which account for 87% of the Spanish area and 64% of population.Among the IRWAs we can see high differences in the area and population covered.Tagus is the river basin that supplies water to the highest percentage of population, mainly because it includes one of the biggest Spanish cities, Madrid, with a metropolitan area population of around 6.5 million.Regarding the size of the IRWA, Ebro extends for nine regions, being the largest basin in Spain.By contrast, Eastern Cantabrian is the lowest basin and covers the lowest ratio of population.This paper is focused on the analysis of the sustainability of integral water management in IRWAs, which account for 87% of the Spanish area and 64% of population.Among the IRWAs we can see high differences in the area and population covered.Tagus is the river basin that supplies water to the highest percentage of population, mainly because it includes one of the biggest Spanish cities, Madrid, with a metropolitan area population of around 6.5 million.Regarding the size of the IRWA, Ebro extends for nine regions, being the largest basin in Spain.By contrast, Eastern Cantabrian is the lowest basin and covers the lowest ratio of population.
The main characteristics of the inter-regional water basins under study are summarized in Table 1.
Methods
Within the framework of the MCDA, this paper assesses the sustainability of inter-regional water agencies (IRWAs).Sustainability is assessed by considering the traditional economic, environmental, and social dimensions (Triple Bottom Line [23]), but also governance.Each of the sustainability dimensions has been analysed using a number of indicators that will be presented below in detail.In a second step, the relative importance of indicators and dimensions/criteria is assessed through the analytical hierarchy process (AHP).Later, the IRWAs are classified in a ranking in terms of their sustainability according to the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) (see Figure 2).In summary, MCDA allows us to aggregate the performance of each attribute in each dimension, and afterwards to get a sustainability measure on the basis of the aggregation of each dimension.
Water 2016, 8, 342 4 of 14 The main characteristics of the inter-regional water basins under study are summarized in Table 1.
Methods
Within the framework of the MCDA, this paper assesses the sustainability of inter-regional water agencies (IRWAs).Sustainability is assessed by considering the traditional economic, environmental, and social dimensions (Triple Bottom Line [23]), but also governance.Each of the sustainability dimensions has been analysed using a number of indicators that will be presented below in detail.In a second step, the relative importance of indicators and dimensions/criteria is assessed through the analytical hierarchy process (AHP).Later, the IRWAs are classified in a ranking in terms of their sustainability according to the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) (see Figure 2).In summary, MCDA allows us to aggregate the performance of each attribute in each dimension, and afterwards to get a sustainability measure on the basis of the aggregation of each dimension.Table 2 shows the dimensions/criteria and indicators selected to assess IRWAs' sustainability.
Economic
Ratio of cost recovery for water services.
Water productivity, measured as the ratio between the gross values added of economic sectors (GVA) and the volume of water supplied to each sector.
Budget limits, measured as the maximum expenditure in investments.
Environmental
Water stress, measured as the ratio of the volume of water consumed and existing water resources in the basin.
Number of measures aimed at achieving environmental objectives.
Efficiency: losses in distribution infrastructures.
Volume of reused water in the total amount of water supplied.
Social
Additional population served over the resident population in the basin.
Number of measures aimed at satisfying demands.
Employment relative to the volume of water supplied in the basin.
Governance
Number of measures to improve governance.
Number of administrations involved in the management, implementation and/or financing measures.
Number of initiatives to encourage active participation of the public.
The selection of indicators in each dimension has been based on both a literature review [24][25][26] and the expertise of a panel of experts.
The economic dimension is measured through three indicators: 1. Ratio of cost recovery for water services.The concept of cost recovery appears in the WFD (Article 9) in the sense that member states shall take into account such principles, including environmental and resource costs, having regard for the economic analysis, and in accordance to the polluter-pays principle.Member states shall report in the river basin management plans the steps towards implementing the recovery of the costs of water services.Taking into account the WFD, the ratio of cost recovery is calculated as the ratio between revenues and costs for water services, including financial, environmental, and resource costs.An estimation of the cost recovery ratio of financial costs related to water services can be found in [27].Environmental costs are related to the externalities that occur mainly in water extraction and discharge processes when affecting other users or ecosystems.Resource costs refer to the value of water scarcity.More information about environmental and resource cost in the context of the European WFD can be found in [28].The higher the ratio of cost recovery, the higher the economic sustainability of the IRWA.
2.
Water productivity, measured as the ratio between the gross value added (GVA) of economic sectors and the volume of water supplied to each sector.More information about the estimation of water productivity values can be found in [29].The higher the water productivity the higher the economic sustainability of the BWA.
3.
Budget limits, measured as the maximum expenditure in water investments.Due to the economic crisis in Spain, the IRWAs have limited their budget for investments.This may have an impact on the measures needed to achieve the objectives of the WFD.The lower the budget limits, the higher the economic sustainability of the IRWA.
The environmental dimension is assessed on the basis of four indicators: 1.
Water stress, measured as the ratio of the volume of water consumed and existing water resources in the basin.Water stress is an increasingly important phenomenon that causes deterioration of fresh water resources in terms of quantity (overexploited aquifers, dry rivers, and polluted lakes) and quality (eutrophication, organic matter pollution, and saline intrusion).It happens when water demand is greater than the available amount during a certain time or when it is restricted by its low quality for a time period.The lower the water stress, the higher the environmental sustainability of the IRWA.
2.
Number of measures aimed at achieving environmental objectives.The main environmental objective established in the WFD is to achieve good status of water bodies.To do this, the IRWAs establish measures to prevent or mitigate the punctual and diffuse pollution and to involve hydrological and environmental restoration of the basin.The higher the number of measures aimed at achieving environmental objectives, the higher the environmental sustainability of the IRWA.
3.
Efficiency measured as losses in distribution infrastructures.Once captured, the water must be transported to the point of purification, to then be stored in tanks from which the distribution infrastructures are supplied to the points of domestic, agricultural, or industrial supply, in which once used it is evacuated.The main technical problem of water distribution infrastructures is the volume of losses due to deterioration.The lower the losses in distribution infrastructures, the higher the environmental sustainability of the IRWA.
4.
Recycled water volume in the total amount of water supplied.Reusing wastewater is an increasing practice in arid or semiarid countries, where water resources are scarce.The uses that can be given to recycled wastewater are many and varied: watering (crops, gardens, greenbelts, golf camps, etc.), industrial reuse (cooling, boiler feed), non-potable urban uses (greenery, fire extinction, sanitary, air conditioning, washing cars, cleaning streets, etc.), and others (aquaculture, livestock cleaning, snowmelt, construction, dust removal, etc.).The higher the recycled water volume, the higher the environmental sustainability of the IRWA.
The social dimension is measured using three indicators: 1.
Additional population served over the resident population in the basin.In addition to the local population in the basin, the population may increase during certain seasonal periods for different reasons: work, holidays, etc.This indicator measures the capacity of the basin to satisfy this additional water demand.The higher the additional population served, the higher the social sustainability of the IRWA.
2.
Number of measures aimed at satisfying demands.Economic sectors require water (and other resources) to develop their economic activities.The IRWA provides a series of measures to be able to respond to this demand.The objectives of these measures are to increase the availability of resources through regulation and management infrastructures, encourage recycling, and increase water use efficiency.The higher the number of measures aimed at satisfying demands, the higher the social sustainability of the IRWA.
3.
Employment relative to the volume of water supplied in the basin.This indicator refers to employment on activities that require water resources for their economic development.The higher the employment ratio, the higher the social sustainability of the IRWA.
Finally, the governance dimension is assessed using three indicators: 1. Number of measures to improve governance.Governance allows addressing the problems of resource and territory management through an integrated and systematic way.Clark and Semmahasak [30] examine the introduction of adaptive governance to water management in Thailand.The analysis shows the significant role that the new approach may play in resolving underlying differences between stakeholders.The higher the number of measures to improve governance, the higher the governance sustainability of the IRWA.
2.
Number of administrations involved in management, implementation and/or financing of measures.Besides the IRWAs, other administrations and institutions are also involved in the development, implementation, and financing of programs of measures.The higher the number of administrations, the higher the governance sustainability of the IRWA.
3.
Number of initiatives to encourage active participation of the public.These initiatives encourage the transparency and participation of stakeholders in both the decision-making and the planning processes.Hedelin [31] analyses two criteria based on the concepts of participation and integration.She notes that these concepts work as well-established dimensions of both sustainable development and management.The higher the number of initiatives, the higher the governance sustainability of the IRWA.
Considering the indicators mentioned above, two multicriteria decision-making methods were used to assess the sustainability of IRWAs.More concretely, AHP was used to get the importance of each dimension and each indicator in the sustainability of the IRWA, and afterwards TOPSIS allowed us to rank the IRWAs according to their sustainability.
The AHP method was created by [32] as a structured but flexible technique for making decisions in a multicriteria context.This method is based on dealing with complex decision problems using a hierarchical structure.Figure 3 shows the three-level structure considered for our case study.
sustainable development and management.The higher the number of initiatives, the higher the governance sustainability of the IRWA.
Considering the indicators mentioned above, two multicriteria decision-making methods were used to assess the sustainability of IRWAs.More concretely, AHP was used to get the importance of each dimension and each indicator in the sustainability of the IRWA, and afterwards TOPSIS allowed us to rank the IRWAs according to their sustainability.
The AHP method was created by [32] as a structured but flexible technique for making decisions in a multicriteria context.This method is based on dealing with complex decision problems using a hierarchical structure.Figure 3 shows the three-level structure considered for our case study.In this hierarchical structure, the relative importance or weights (wk) of each criterion or subcriterion hanging on each node are obtained from pairwise comparisons between them.In order to perform these pairwise comparisons, a 1-9 scale is used, as proposed by [33].Table 3 shows the relative scores and their interpretation.
Value of ajk
Scale Meaning 1 j and k are equally important 3 j is slightly more important than k 5 j is more important than k 7 j is strongly more important than k 9 j is absolutely more important than k 2, 4, 6, 8 Middle values of the above reciprocal ajk = 1/akj Scores of these comparisons are used to build the Saaty matrices (A = ajk), which are employed to determine the vector of priorities or weights (w1, ...wk, ...wn).Although different procedures to estimate these weights have been proposed, for this case we selected the simplest one: the geometric mean method [34].
The AHP decision technique was originally designed for individual decision-makers, but was promptly extended for group decisions [34], such as our case study.Thus, in order to determine the weights attached to each criterion we have to consider the judgments of a group of people (p), each with his/her own pairwise comparison matrix (Ap = ajkp) and its related weights (wkp).This individual information is suitably treated in order to obtain a synthesis of aggregated weights (wk).
Environmental Social Governance
Cost recovery Water productiv.In this hierarchical structure, the relative importance or weights (w k ) of each criterion or subcriterion hanging on each node are obtained from pairwise comparisons between them.In order to perform these pairwise comparisons, a 1-9 scale is used, as proposed by [33].Table 3 shows the relative scores and their interpretation.
Value of a jk
Scale Meaning 1 j and k are equally important 3 j is slightly more important than k 5 j is more important than k 7 j is strongly more important than k 9 j is absolutely more important than k 2, 4, 6, 8 Middle values of the above reciprocal a jk = 1/a kj Scores of these comparisons are used to build the Saaty matrices (A = a jk ), which are employed to determine the vector of priorities or weights (w 1 , ...w k , ...w n ).Although different procedures to estimate these weights have been proposed, for this case we selected the simplest one: the geometric mean method [34].
The AHP decision technique was originally designed for individual decision-makers, but was promptly extended for group decisions [34], such as our case study.Thus, in order to determine the weights attached to each criterion we have to consider the judgments of a group of people (p), each with his/her own pairwise comparison matrix (A p = a jkp ) and its related weights (w kp ).This individual information is suitably treated in order to obtain a synthesis of aggregated weights (w k ).
For this purpose, Saaty et al. [35,36] suggest that group decision-making should be done by aggregating individual priorities using the geometric mean: For indicators weights, a panel of 25 experts in water management sustainability was consulted.The members of this panel have been selected on the basis of their experience in water management, their scientific and technical contribution to the analysis of water sustainability and their involvement in the development and implementation of river basin plans.In addition, experts have been also selected in order to cover different technical profiles, such as university lecturers, researchers in agricultural research centres, civil servants in charge of water policy implementation, environmental journalists, hydrogeologists, agronomists, economists, environmental organisations, and farmers.
Before aggregating priority scores, the consistency of respondents' pairwise choices was tested by means of the consistency ratio (CR) based on the eigenvalue method [37].In this paper we consider only CR lower than 0.1 [38].Taking into account this CR, the percentage of consistent experts was 72%.
Once the weights of each dimension had been calculated, by considering the experts' evaluations, another MCDA technique was applied in order to rank IRWAs according to their sustainability.To do that, TOPSIS was used.The principle behind the method is that the optimal alternative should have the shortest distance from the positive ideal solution and the furthest distance from the negative ideal solution.The positive and negative ideal solutions are artificial alternatives which are hypothesised by the decision-maker, based on the ideal solution for all criteria and the worst solution which possesses the most inferior decision variables.Assuming that every indicator has an increasing or decreasing scale, TOPSIS calculates the results by comparing Euclidean distances between the actual and the hypothesised alternatives.
Generally, the TOPSIS approach consists of seven steps, as it is summarized below [39,40].
Step 1. Constructing the decision matrix D on the basis of the value of each indicator (F i ) by IRWA (A i ), where f ij is the performance of the IRWA A i with respect to the indicator F j .
Step 2. Normalizing the initial decision matrix to eliminate the effects of complex relations.
The normalized value v ij is calculated as: Step 3. Calculating the weighted normalized decision matrix R by using the weights w j obtained through the APH for each indicator.The weighted normalized value f ij is calculated as: Step 4. Determining the positive and negative ideal reference points: T `" r 1 , r 2 , . . ., r ǹ ( " `max i r ij ˇˇj P J 1 ˘, `min i r ij |j P J 2 ˘( T ´" r 1 , r 2 , . . ., r ń ( " `min i r ij ˇˇj P J 1 ˘, `max i r ij |j P J 2 ˘( where J 1 and J" are linked to the indicators with positive polarity (more is better) and the indicators with negative polarity (less is better), respectively.Step 5. Calculating the distances to the positive and negative ideal reference points using the Euclidean distance.The separation of each IRWA from the positive-ideal solution (S ì ) and the separation of each IRWA from the negative-ideal solution (S í ) is given by the expressions: Step 6. Calculating the relative closeness to the ideal solution for each IRWA (C i ): where C i is an index with values ranging between 0 and 1, where 0 corresponds to the worst possible performance of the IRWA and 1 to the best.Step 7. ranking the IRWA, according to the C i values.
Results
Table 4 shows the results of the application of the AHP method.First, we can see the weights for the sustainability dimensions according to the preferences of the group of experts.The environmental dimension is playing the most important role in the whole sustainability (40%), followed by both the economic and social criteria (25%).The governance dimension is the least important for sustainability (11%) according to the panel of experts.Considering these weights, the overall sustainability level of each IRWA can be assessed through TOPSIS.Table 5 shows the ranking of the Spanish IRWAs according to their sustainability in the water plans.The river basin with the highest sustainability is Western Cantabrian, followed by Eastern Cantabrian and Tagus.By contrast, Minho-Sil, Jucar, and Douro are the least sustainable basins.Regarding the Segura Basin, our results coincides with [41], classifying this basin as intermediate sustainable.Senent-Aparicio et al. [41] applied a watershed sustainability index (WSI), assuming that the sustainability of the basin depends on its hydrology environment, life, and policies in water resources.The greatest strengths of the basin were related to political indicators, while the biggest weaknesses were the hydrological indicators on quantity mainly due to the situation of water scarcity.Although not all the dimensions are comparable between studies, water scarcity or water stress appears to be one of the main weaknesses of Segura sustainability in both analyses.When analysing separately the dimensions of the sustainability (i.e., economic, environmental, social, and governance dimensions) for each IRWA, we obtained the results in Tables 6-9.The basin with the greatest economic sustainability is the Eastern Cantabrian river basin, followed by the Western Cantabrian basins.In this case, Douro and Minho-Sil are still in the last places of the ranking, and Segura shows the least economic sustainability.
Regarding the environmental sustainability, the dimension with the largest importance in river basin sustainability (Table 7), we can see that Minho-Sil is the basin with the greatest environmental sustainability, followed by Tagus.
Table 8 shows the classification of basins derived from the social dimension of sustainability.In this case, Eastern Cantabrian is in the first position, followed by Western Cantabrian and Tagus.The lasts are Jucar and Guadiana, showing the last one a significant distance with the others.
Finally, analysing the dimension of governance, which has the lower weight in sustainability, we can see that Ebro is the most sustainable basin, followed by Segura and Minho-Sil.By contrast, Guadalquivir and Western Cantabrian show the lowest sustainability in governance.
Different sustainability scores can be explained mainly by lower water stress (environmental dimension) and higher water productivity (economic dimension) of northern water basins.Due to the location of these basins, rainfall is more constant and consequently water stress is lower than in other basins of the country.In addition, we can see that water productivity is also higher in northern basins due to the weight of industrial activities.By contrast, IRWAs such as Jucar or Douro show the lowest sustainability due to the lower scores on economic, social, and governance dimensions for Douro, and environmental and social dimensions for Jucar.In Douro, low water productivity and water efficiency on distribution results in lower global sustainability.For Jucar, water stress due to the location of the basin and a low number of environmental and social measures make the basin the least sustainable.
Global and partial sustainability results have been showed to the panel of experts for their feedback.The experts agreed that the methodology is appropriate to measure the sustainability of IRWAs, and that the identification of the weaknesses of each IRWA may contribute to improve its sustainability in the future.
Concluding Remarks
This paper contributes to analysis of the dimensions that may be enhanced to improve basins' sustainability in order to fulfil the objectives and requirements set by the WFD on basin management, and consequently may be a starting point to improve water management sustainability in the following planning cycles.
The river basins of Minho-Sil, Jucar, and Douro are the least sustainable in the integral water plans.Such results on sustainability can be improved following different strategies depending on the river basin analysed.Douro, the river with the lowest sustainability, may improve in most of the dimensions (i.e., economic, social, and governance), whereas it is well positioned on the environmental criterion.In the case of the Jucar basin, it may focus on environmental and social aspects in order to improve its sustainability.Since environment is the dimension with the highest importance in global sustainability, Jucar may decrease the water stress or raise the number of measures aimed at achieving environmental objectives, since these two indicators show the highest contribution to environmental sustainability.Finally, Minho-Sil may raise mainly its economic and social dimensions.It has a good position on environmental and governance aspects, but it needs to improve mainly on the economic dimension.
Not only basins positioned in the last places may improve their sustainability, but the rest as well, since the maximum score is 0.677.The Western Cantabrian river basin is in the first position on sustainability of river basins but with the lowest score in governance.It may make progress in at least this dimension in order to improve.The same strategy should be followed by Eastern Cantabrian.Tagus is the most stable river basin in all the dimensions of the sustainability, but there is still room for improvement, especially on governance of stakeholders in decision-making.
Future research on this topic might analyse what would happen with sustainability in each water use provided in the Article 9.1 of WFD: agricultural, domestic, and industrial.In this case it would be very interesting to analyse how results may change when industrial and agricultural uses are differentiated to measure water productivity.Potential follow-up studies might also evaluate the sustainability of the different water services as provided in Article 2.38 of WFD, such as abstraction, storage and distribution of water, and collection and treatment of used water.River basin planning may include more information on these issues in order to allow us to refine the analysis of the sustainability.
Figure 2 .
Figure 2. Outline of the methodological approach.
Figure 2 .
Figure 2. Outline of the methodological approach.
Table 2 .
Dimensions and indicators to assess the sustainability of BWA.
Table 3 .
Table of relative scores.
Table 3 .
Table of relative scores.
Table 4 .
Normalised weights for dimensions/criteria and indicators. | 7,761.6 | 2016-08-11T00:00:00.000 | [
"Economics"
] |
Metallic and nonmetallic shine in luster: An elastic ion backscattering study
Luster is a metal glass nanocomposite layer first produced in the Middle East in early Islamic times (9th AD) made of metal copper or silver nanoparticles embedded in a silica-based glassy matrix. These nanoparticles are produced by ion exchange between Cu+ and Ag+ and alkaline ions from the glassy matrix and further growth in a reducing atmosphere. The most striking property of luster is its capability of reflecting light like a continuous metal layer and it was unexpectedly found to be linked to one single production parameter: the presence of lead in the glassy matrix composition. The purpose of this article is to describe the characteristics and differences of the nanoparticle layers developed on lead rich and lead free glasses. Copper luster layers obtained using the ancient recipes and methods are analyzed by means of elastic ion backscattering spectroscopy associated with other analytical techniques. The depth profile of the different elements is determined, showing that the luster layer formed in ...
I. INTRODUCTION
Luster is a metal glass nanocomposite layer made of metal copper or silver nanoparticles embedded in a silicabased glassy matrix.2][3][4][5] Luster was discovered in the Middle East in early Islamic times ͑Iraq, 9th AD͒ when the ceramists learned the physical-chemical mechanisms involved in luster formation. 6[9] The materials and firing procedures used should be so carefully controlled that it is highly surprising that 9th AD potters managed to control the process so well, considering their limited scientific and technical knowledge.
The optical properties of the luster layers are determined by the nature of the metal nanoparticles and size distribution, their volume fraction, the thickness of the layer, and the composition of the glassy matrix. 102][13] In addition, one of the most striking properties of luster is its capability of reflecting light like a metal surface.For example, a green luster layer may show a gold-like shine, 6 but it is also possible to have a dark brown luster showing also a gold-like shine. 6Therefore, the color observed under diffuse light is different from the color observed under reflected light.The increase in the reflectivity of the layer is related to the transition from individual cluster Mie scattering ͑electromagnetic interaction of incident light with individual spherical particles͒ toward the regularly transmitted and reflected beams of geometrical optics which are explained in terms of the Fresnel equations.This change from single particle to collective behavior shown by dense cluster films, is known as the Oseen effect, explained by the Ewald-Oseen theory and evaluated by Torquato-Kreibig-Fresnel model ͑TKF model͒. 14The Oseen transition from incoherent scattering into geometric-optical transmission and reflection is due mainly to the interference among the scattered electromagnetic waves of all particles. 15Actually, the high reflectivity shown by these layers is known to be nanoparticle size and volume fraction ͑f͒ dependent. 14ecently, laboratory reproductions of medieval luster using a medieval recipe 16 found during the excavations of the 13th century AD Paterna ͑Valencia, Spain͒ workshop were performed. 8,9Several thermal paths and atmospheres ͑oxidizing, neutral, and reducing͒, luster formulas and glaze compositions were checked in order to reproduce the colors and shines of medieval lusters.Green-yellow and green-yellow golden lusters, as well as red ruby and red coppery lusters were obtained using either silver or copper containing recipes. 9Brown, dark brown, amber, and orange lusters were obtained using a mixed copper and silver containing recipe. 17lthough luster layers were obtained in the whole range of temperatures checked ͑450 up to 600 °C͒, optimal lusters were obtained at temperatures of 550 °C.The need of combined oxidizing/neutral followed by reducing thermal path was demonstrated.During the oxidizing/neutral stage the ionic exchange between the luster paint and the glaze was obtained; in the subsequent reducing process the reduction to metallic state of the Cu + and/or Ag + introduced in the glaze was obtained.
The analysis of the layers showed that, depending on the glaze composition, the size of the nanoparticles was different, being smaller for the lead free glazes.However, the total Cu/Ag content of the luster layers was similar in all the cases, provided that similar firing times were used for the first neutral/oxidizing stage when the ionic exchange took place. 8,9he composition of the glaze was found to be strikingly important.Luster layers produced on lead free glaze never showed metallic shine, conversely luster layers obtained on lead containing glazes ͑32% PbO͒ do exhibit a metallic shine.
Moreover, these studies 8,9 demonstrated that for leadfree glasses the luster layers produced at higher temperatures ͑up to 600 °C͒, and longer reducing times ͑10-20 min͒ show a similar, but more intense, color.The size of the nanoparticles increases and the total amount of Cu/Ag increases with increasing temperature and reducing time but the metal shine is not achieved.On the contrary, for the lead glaze the metal shine is always obtained provided that temperatures of 550 °C are reached, even for shorter reducing times ͑5 min͒.One possible explanation for this behavior is the presence of a higher volume fraction of metal nanoparticles in the layer.
The object of the present study is to determine the chemical composition metal volume fraction and thickness of the copper luster layers obtained on lead free and lead containing glasses/glazes. 8,9Elastic ion backscattering spectroscopy ͑EIBS͒, also known as Rutherford backscattering spectroscopy, allows the evaluation of the elemental composition of a layered structure that is a few microns thick, and in particular it may provide thickness and elemental composition of the copper luster layers.This information is processed to determine the volume fraction of metal nanoparticles in the luster layer ͑f͒.The differences in composition between the copper luster layers produced on lead free and lead containing glazes are compared.The results are discussed to explain the presence or otherwise of metallic shine.The role of lead in the formation of metal shining luster layers is also discussed.
II. MATERIALS AND TECHNIQUES
Chemical analyses were obtained by electron microprobe, using a Cameca S-50 ͑WDX͒ instrument.Experimental conditions being 1 m spot size, 15 kV and 10 nA probe current except for Na and K for which the probe current was reduced to 1 nA and the spot size increased to about 5 m.Synchrotron radiation x-ray microdiffraction ͑SR-Micro-XRD͒ was performed at SRS Daresbury Laboratory in transmission geometry, 0.87 Å wavelength, 200 m spot size, and recorded in a charge coupled device detector and also in reflection geometry 1.4 Å wavelength, 1 mm spot size, and 0.01°step size, details are given elsewhere. 8,9IBS measurements were made with the 5 MV tandem accelerator at CMAM. 18 The analysis were performed in vacuum using a 3035 keV He beam in order to take advantage of the elastic resonance 16 O͑␣, ␣͒ 16 O occurring at this energy 19 and which increases the sensitivity to detect oxygen by a factor of 23.The size of the beam was a square of 1 mm diagonal.The backscattered ions were analyzed by means of two particle surface barrier detectors, one fixed at 170°scattering angle and a mobile one, whose scattering angle can be varied, which was set at 165°.Normally the two detectors are currently used as a double check for the EIBS spectra obtained.When elastic resonance happens, the choice of the scattering angle of the mobile detector is determined from scattering angles and scattering cross sections available database.A careful quantification of the EIBS spectra was achieved using the simulation code SIMNRA. 20 The luster layers were obtained as described in Refs.8 and 9 by applying a synthetic raw luster mixture containing 60% illitic clay, 10% CuO, and 30% HgS selected after the archeological findings in the site excavations from the 13th century AD Paterna workshop. 16Historically, luster nanolayers were developed over glasses and glazed ceramics.Therefore, three different glasses and glazes were selected: a lead glaze ͑glaze-m͒ applied over a white ceramic substrate, a high Na content lead free glaze ͑glaze-a͒ which was also applied to a white ceramic substrate, the final example was actually a lead free glass coverslip from Marienfeld ͑glass-a͒.
The chemical composition measured by electron microprobe analysis and EIBS directly from the surface of the glass and glazes are given in Table I.Glaze-a is not fully lead free, but it contains a very small amount of PbO ͑Ͻ0.4 wt %͒.The luster raw mixture was applied over the glazes surface and then fired in a small furnace under controlled conditions.The firing protocols included a heating rate of 50 °C/min and 20 min dwell time in neutral atmosphere ͑Ar͒, switching to a reducing atmosphere consisting of a mixture of 5% H 2 and 95% N 2 , 5 or 10 min dwell time at the maximum temperature, and free cooling. [24]
III. RESULTS
The luster layers and some relevant data obtained from their analysis are shown in Fig. 1. 9 The three luster layers were obtained using the same production parameters, except that the length of the reducing time for the lead containing glaze ͑glaze-m͒, which was reduced to 5 min instead of 10 min.The use of a shorter reducing time also produced a good luster, which was actually of better quality.The use of a highly reducing atmosphere with lead glazes may result in the reduction of the lead to metal and a slight darkening of the glaze surface; indeed the luster obtained after 10 min showed some darkening while the luster reduced for 5 min ͑j65͒ did not apparently show this darkening.This is the reason why the luster prepared with lower reducing time was preferred for this study.
As shown in Fig. 1, the luster layers obtained on the lead free glass ͑j32͒ and lead free glaze ͑j6͒ showed a red ruby color and no metallic shine.On the contrary, the luster layer produced on the lead glaze ͑j65͒, did show a full coppery shine, like a metal copper film.
Chemical analysis of the luster layers was obtained by electron microprobe analysis of the luster surface as described in Refs.8 and 9. Considering the density and composition of the glazes, the penetration depth of the electron microprobe as determined from the Kanaka-Okayama range is of about 3 m, thus the obtained chemical composition is averaged over a region several times larger than the luster layer itself.The data given in Fig. 1 are the average and standard deviation ͑shown in brackets͒ of at least 15 measurements taken at different points of the luster.The total amount of copper is similar for the three lusters.
Chemical analysis of the luster layers demonstrated that the incorporation of copper into the glaze surface was obtained by ionic exchange of the Cu + ions from the raw luster paint with the Na + and K + ions from the glaze. 8,9Figure 2 shows the linear characteristic correlations obtained when plotting the atomic concentration of Cu vs K and Na.
The nature and size of the nanoparticles was determined by SR-Micro-XRD and is also given in Fig. 1; the particle size was obtained from the peak width by peak profile analysis of the diffraction peaks from measurements taken on dif- 2 Values for the luster surface as determined by electron microprobe analysis.The data are the average of at least 15 measurements and the standard deviation is given between brackets.The penetration depth of the electron microprobe determined from the Kanaka-Okayama range is of about 3 m and therefore this value is the average composition of a region several times larger than the luster layer itself. 3Metal copper and cuprite nanoparticles were determined in all the cases and the size of the nanoparticles was calculated from peak width analysis of SR Micro-XRD data ͑see Ref. 8͒.
FIG. 2. Electron microprobe analysis of the surface of the lead free glass luster ͑j32͒ and of the luster lead free glass ͑glass-a͒.The depletion in Na and K in the layer is related to the increase in Cu.
ferent areas of the samples as described in Refs.8 and 9.In all cases metal copper nanoparticles were determined and, in some of the measurements, a small amount of cuprite was also determined.The size of the metal nanoparticles increases from 6.4± 0.3 nm for the lead free glass luster ͑j32͒ and 12.5± 0.3 nm for the lead free glaze luster ͑j6͒ up to 39.0± 3.1 nm for the lead glaze luster ͑j65͒.In particular, extended x-ray absorption fine structure ͑EXAFS͒ and transmission electron microscopy ͑TEM͒ analysis of a luster layer prepared following the same thermal protocol on the lead free glaze, thus equivalent to j6, has been recently performed. 25The size of the nanoparticles fully matches the size obtained by SR-Micro-XRD; the luster layer has a higher density of nanoparticles in a sublayer of about 800 nm thickness and has a copper free outer layer of about 50 nm.EXAFS data determined 77± 4% copper metal, with the remainder as the cuprite.Whether the cuprite is attached to the nanoparticles or just dissolved in the glassy matrix is unclear.SR-Micro-XRD analysis of the luster layers indicates the presence of some crystalline cuprite.][13] EIBS analysis of both the glaze luster free surface and the luster layers have been performed for the three glaze compositions and corresponding luster layers, and are shown in Figs.3-5, respectively, and the corresponding composition depth profiles are shown in Fig. 6.The spectrum corresponding to the lead free glass is simulated using the chemical composition given in Table I and considered constant over the whole depth analyzed.The agreement between simulated and experimental spectra is excellent as can be seen in Fig. 3͑a͒.The high energy edges for the signal corresponding to the fitted elements are marked in the figure.It should be mentioned that for light elements, B, Na, K, the sensitivity of EIBS is very low ͑±1%͒, so it may be meaningless for this element to give percent concentration values with decimal figures.However, as the code SIMNRA accepts inputs for the atomic concentrations normalized to unity, giving precise values to the concentrations for the heavier elements, for which the sensitivity of EIBS is high, necessarily imposes values for the light elements with figures beyond the significant values.
During the luster production, the glass has to be heat treated to 550 °C under neutral and then reducing atmospheres.This process could affect the chemistry of the glass surface.Both the EIBS spectrum corresponding to the untreated glass ͑no heat treatment͒ and the heat-treated glass are shown in Figs.3͑a͒ and 3͑b͒ and are compared to the simulated EIBS spectrum using the nominal chemical composition of the lead free glass ͑Table I͒.The good agreement in the fit of both spectra indicates that, up to the EIBS sensitivity, no appreciable changes in composition are induced in the glass surface by the heat treatment.
The experimental and simulated EIBS spectra corresponding to the lead free glass luster layer, j32, are shown in Fig. 3͑c͒.Incorporation of Cu in the luster process is observed in the region between channels 600 and 900.The EIBS spectrum reveals that it has a complex chemical depth profile.The simulation of this profile was performed by using a set of successive layers ͑up to 9͒, where the atomic percent concentrations for all elements other than Na, K, and Cu were kept constant.According to the correlations shown in Fig. 2, we assumed that each Cu atom entering the glass removes an atom either of Na or K.The thickness of the layers is defined in the units pertinent to EIBS, i.e., areal density in units of 10 15 at/ cm 2 .The areal density can be transformed into a depth provided that the composition and the density of the material is known through the relationship, depth= areal density͑at/ cm 2 ͒ / density͑at/ cm 3 ͒.
For our glasses and glazes the composition is known and the density may be estimated from their composition using the expressions given in Ref. 26, the resulting densities for the three glazes are also given in Table I.Taking a density of 2.42 g / cm 3 for the lead free glass, areal density of 10 15 at/ cm 2 is equivalent to a depth of 1.43 Å. Figure 6͑a͒ shows the resulting chemical profile concentration obtained from the fits of the EIBS spectrum.
The values for the Cu content in the plateau region ͑between 1000ϫ 10 15 and 7000ϫ 10 15 at/ cm 2 -equivalent to 0.14 and 1 m depth͒ are very reliable, with an accuracy well below 1%.For layers above 8000ϫ 10 15 at/ cm 2 ͑about 1.14 m depth͒, the Cu concentration determined is less ac-FIG.3. EIBS experimental ͑gray line͒ and simulated spectra ͑black line͒ and the contribution of the different elements corresponding to ͑a͒ lead free glass ͑glass-a͒, ͑b͒ lead free glass heat treated ͑glass-a HT͒ following the same thermal protocol used during the luster production, and ͑c͒ luster developed on lead free glass ͑j32͒.The incorporation of Cu in the luster process is observed in the region between channels 600 and 900.curate as can be seen in Fig. 3͑c͒ below 730͒.In the region between channels 640 and 730 the Cu signal still does not overlap with the Si signal so accuracy below 1% can still be obtained.However, when the Cu signal overlaps with the Si signal, below channel 630, the uncertainty may be considerably higher.In our simulation the Cu free region starts at a depth 17 350ϫ 10 15 at/ cm 2 ͑about 2.5 m depth͒.
The assumed changes in the concentration values for Na and K are less accurate than for Cu.The K signal overlaps with the Cu and Zn signals which have an important contribution to the total spectrum and the Na signal overlaps with the Si signal.In the simulation we have decided to keep the K/Na ratio roughly constant in the region where Cu is present.
Finally, the average copper composition calculated from the EIBS results up to a depth of 3 m, is 2.8 at.% Cu while the average copper concentration determined by electron microprobe gives a value of 3.4͑1.4͒at.% Cu.
We can summarize the main results obtained for the lead free glass luster ͑j32͒ as follows: ͑1͒ There is a surface layer with a thickness of 50 ϫ 10 15 at/ cm 2 ͑about 7 nm͒ free of Cu. ͑2͒ The shape of the Cu profile in the lead free glaze luster FIG. 4. EIBS experimental ͑gray line͒ and simulated spectra ͑black line͒ and the contribution of the different elements corresponding to ͑a͒ lead free glaze ͑glaze-a͒ and ͑b͒ luster developed on lead free glaze ͑j6͒.The incorporation of Cu in the luster process is observed in the region between channels 750 and 850.The inset enlarges the region between channels 1000 and 1075 corresponding to the contribution of lead to the spectrum; it can be clearly seen that the surface layer is lead depleted.FIG. 5. EIBS experimental ͑gray line͒ and simulated spectra ͑black line͒ and the contribution of the different elements corresponding to luster developed on lead glaze ͑j65͒.The incorporation of Cu in the luster process is observed in the region between channels 800 and 850.FIG. 6. Elemental profile concentration obtained from the EIBS data for the three luster layers ͑a͒ lead free glass luster-j32, ͑b͒ lead free glaze luster-j6, and ͑c͒ lead glaze luster-j65.layer j32 is asymmetric, showing a sharp increase at the boundary between the copper outer layer and the copper containing inner layer and more gradual decrease inside.The region with a high Cu concentration has a thickness of about 8000ϫ 10 15 at/ cm 2 ͑1.1 m͒.
The EIBS spectra from the lead free glaze and from the corresponding luster, j6, are shown in Figs.4͑a͒ and 4͑b͒.As for the lead free glass, EIBS gives homogeneous and similar compositions for the untreated glaze and for the heat treated glaze and are given in Table I.The peak related to the presence of Cu is clearly seen in Fig. 4͑b͒.The shape of the peak indicates a sharp rise in the Cu content going deeper into the glaze surface, followed by a small and gradual decrease which may reach several microns in depth.A good fit to the experimental data is obtained, as shown in Fig. 4͑b͒, using ten layers where relative concentrations of Si, Na, K, Cu, and Pb are modified while those of the other elements remain constant.Taking a density of 2.32 g / cm 3 for the lead free glaze, 26 an areal density of 10 15 at/ cm 2 is equivalent to 1.34 Å.The concentrations and thickness of the simulated layers are shown in Fig. 6͑b͒.Again, the amount of Cu is kept equal to the total loss of Na and K.The glaze contains a small amount of Pb ͑0.065 at.%͒ which can be easily and precisely detected by EIBS due to the high atomic number of Pb.The inset in Fig. 4͑b͒ shows that a layer close to the surface is Pb depleted, due to the volatilization of Pb during the luster production; this has already been observed in the literature. 24In the region of channels higher than 870 ͑corresponding approximately to a depth of about 1.2 m͒, the lead signal begins to overlap with the Cu signal, and the lead content cannot be well determined.The average copper composition calculated for different samples for a depth of 3 m ranges from 1.5 to 2.6 at.% Cu which is in excellent agreement with the average copper concentration determined by electron microprobe 2.6͑1.2͒at.% Cu.
We can summarize the results obtained for lead free glaze luster ͑j6͒ as follows: ͑1͒ There is a surface layer free of Cu of 500 ϫ 10 15 at/ cm 2 thickness ͑about 67 nm͒.This layer is also Pb free and Si richer.͑2͒ The shape of the Cu profile in the lead free glaze luster layer j6 is asymmetric, showing a sharp increase at the boundary between the copper free outer layer and the copper containing inner layer and more gradual decrease inside.The Cu rich layer has 4200ϫ 10 15 at/ cm 2 ͑about 560 nm͒ thickness.
Both results correspond very well with the TEM image from the luster layer obtained using the same thermal protocol on lead free glaze. 25he EIBS experimental spectrum and fit corresponding to luster layer j65 developed on the lead glaze is shown in Fig. 5 The glaze is lead rich and contains 4 at.% of Pb ͑corresponding to 32 wt % PbO͒.The presence of high amounts of Pb limits the accuracy of EIBS to measure the depth profile concentrations of the elements lighter than lead.Fig. 5 shows a sharp Cu asymmetric peak overlapping the Pb signal.The concentrations and thickness of the simulated layers are shown in Fig. 6͑c͒.Taking a density of 3.01 g / cm 3 for the lead glaze, 26 an areal density of 10 15 at/ cm 2 is equivalent to 1.43 Å.The luster layer is Pb depleted with respect to the nominal composition as can be clearly observed in the EIBS spectrum.The EIBS spectrum shows that, in this case, the luster layer has a very thin copper rich region, 1000ϫ 10 15 at/ cm 2 ͑about 140 nm͒ thick.This results in a high copper nanoparticles volume fraction, higher than 10%.Therefore, there is a significant reduction in the glass volume fraction and consequently in the Si content of the layer.The EIBS simulation indicates that the depletion in Na, K, and Pb reaches a deeper region.However, the sensitivity of EIBS to these elements is heavily limited in this high lead containing glaze and the glaze composition determined, must be considered with caution.The Si enriched and Pb depleted surface layers have also been observed in the study of ancient lusters.5,24 The average copper composition calculated for a depth of 3 m, is 0.8 at.% Cu while the average copper concentration as determined by electron microprobe is 1.8͑1.2͒at.% Cu.
A summary of the analysis for lead glaze luster layer ͑j65͒ indicates: ͑1͒ The presence of a Cu free outer surface layer of a thick-ness200ϫ 10 15 at/ cm 2 ͑about 28 nm͒.This layer is also Si richer and Pb poorer than the glaze, although richer in Pb than the Cu containing region.͑2͒ The Cu concentration profile is asymmetric, showing a sharp rise up to a maximum concentration much greater ͑15 at.%͒ than in the previous cases, followed by a smooth decrease but which is sharper than in the previous cases.The Cu rich luster layer is 1000 ϫ 10 15 at/ cm 2 ͑about 140 nm͒ thick.
A comparison between the three luster layers studied shows some similarities: the presence of a copper free outer layer, the concentration of copper in a thin layer showing an asymmetric profile, steeper near the surface and smoother inside the glaze.There are also some differences: the lead glaze has a luster layer 5-6 times thinner ͑only 140 nm thick͒ and a copper concentration 3-4 times larger ͑10-15 at.%͒ than the other glazes.Moreover, the lead glaze has also bigger copper nanoparticles, 39.0± 3.1 nm in front of 6.4± 0.3 nm for the lead free glass luster ͑j32͒ and 12.5± 0.3 nm for lead free glaze luster ͑j6͒ ͑as shown in Fig. 1͒.
IV. DISCUSSION
The copper content in the luster layer as obtained from EIBS allows us to calculate the volume fraction of copper nanoparticles -f-͑taking a density of 8.9 g / cm 3 for copper and the corresponding density given in Table I for each glaze͒.EXAFS data for a lead free glaze luster ͑similar to j6͒, a luster layer produced following the same thermal protocol 25 showed that 77± 4% of the copper was in metallic state while the rest was a cuprite.However, it is also well known that due to the fact that the reducing process results from the penetration of the reducing atmosphere into the luster layer, the outer surface is always more reduced than the inner region of the luster layer.In calculation we have considered both cases: either all the copper is forming metal nanoparticles or only 80% of the copper is forming metal nanoparticles.The calculated weight percent of copper and volume fraction for the copper richest regions of the three luster layers analyzed are presented in Table II which shows that the lusters developed on lead free glazes ͑j32 and j6͒ have values of f ϳ 5% and the luster developed on the lead glaze ͑j65͒ has a larger value, f ϳ 15%.
We have also computed the average nearest-neighbor distance between the particles, which is given by the interparticle distance ratio D / d, 27 where D is the separation between centers of adjacent particles and d is the particle size.This parameter has been used to study the insulator-metal transition in metal nanoparticulated systems. 28D / d is more suited for the study of dense nanoparticulated systems than the volume fraction, because the multipolar interaction is determined by the particle size and the nearest neighbor distance.Furthermore, for a given particle arrangement ͓facecentered-cubic, body-centered-cubic, simple-cubic ͑sc͒, or randomly arranged in three dimensions ͑3D͒, and triangular or randomly arranged in two dimensions ͑2D͔͒ both f and D / d are directly related, and D / d can be calculated exactly from f and a given particle arrangement using the exact solution given by Torquato. 27It is also worthwhile noting that the use of D / d allows a comparison of the observed response to other two-dimensional ͑2D͒ and three-dimensional ͑3D͒ nanoparticulated systems already studied in the literature.Considering that TEM images corresponding to both ancient luster layers and to the luster layer equivalent to lead free glaze luster ͑j6͒ ͑Refs.1-5 and 25͒ always show a random arrangement of nanoparticles, we have calculated the interparticle distance ratio ͑D / d͒ using the exact relationship between f and D / d given by Torquato. 27The results are also shown in Table II.Typical values of D / d ϳ 1.4 are obtained for the lead free glazes lusters ͑j32 and j6͒ while a value below 1.2 is obtained for the lead glaze luster ͑j65͒.
Using the TKF model, Farbman et al. 14 calculated the reflectance of 3D concentrated metal nanoparticulated systems, showing that their reflectance increases with both particle size and volume fraction up to values similar to metal thin layers.They also concluded that a system with a lower volume fraction, but larger particles has a similar reflectance to a system with a larger volume fraction but smaller particles.Volume fractions above 10% ͑D / d = 1.2 for randomly arranged arrays͒ and particle sizes between 2 and 50 nm were considered.They reported also that reflectance depends on the spatial arrangement, being highest for sc and random arrangements.These calculations indicate that for 3D systems an increase in the reflectance is obtained when reducing D / d.They also observed that there is an increase in the reflectance with the dielectric constant of the matrix.Silver and gold were found to give higher reflectance than copper for equivalent particle size and volume fractions.Shiang et al. 28 studied the optical behavior of 2D arranged monolayers of 3 nm diameter silver nanoparticles both theoretically and experimentally.Their study showed that there is an increase in the reflectance of the layers for D / d Ͻ 1.7 and a reversible quantum insulator to metal transition for values of D / d Ͻ 1.2.The experimental data ͓metal reflectance, absorption and second order harmonic generation ͑SHG͔͒ did not agree with the theoretical classical calculations.As a consequence, they concluded that quantum effects had to be taken into account to explain the high values of the reflectance and SHG determined when this transition happens.Quantum effects were expected in this case because the distance between near neighbor particles surfaces, D − d = 6 Å, was small enough.However, quantum effects are not expected in the luster layers studied here as, in all cases, the distance between particles surfaces is well above the nanometer.
Recently, Reillon and Berthier 29 analyzed medieval silver luster layers showing a gold metal shine and modeled its reflectance, reproducing qualitatively some of the most remarkable features shown by the layers.The lack of quantitative agreement with the experimental data was related to the complex structural features of the luster layers.Computations were performed assuming a 10% volume fraction ͑D / d = 1.22͒ of silver nanoparticles, radius 5, 10, and 20 nm and a dielectric constant of 2.
In our study, of the three luster layers studied, only the lead glaze luster layer has a D / d value below 1.2 ͑Table II͒.Luster developed over lead free glazes ͑j32 and j6͒ have values of D / d ϳ 1.4.Therefore, we find here a change in the optical reflectivity of the layer from nonmetallic to metallic for D / d below 1.2, in good agreement with the literature.
The formation of a thinner copper rich luster layer when a lead rich glaze is used is responsible for the high reflectivity shown by the luster layer through the reduction of D / d.This may be related to a smaller diffusion of Cu + in the glaze after ionic exchange took place and prior to the copper reduction and formation of the metal nanoparticles.This reduced diffusivity may be a consequence of either a lower solubility of Cu + in the lead glaze or to a more complex diffusion path involving other elements from the glaze.Further studies are needed to clarify this question.
In any case, it has been established that the introduction of lead in the glaze formulation is fundamental in order to obtain a good metal like luster decoration.Actually, the glazes related to the earliest luster production, 9th AD ͑Iraq͒, contained only very low amounts of lead ͑1-5 wt % PbO͒ and the corresponding luster layers did not show metal-like shine. 6,29,30Later ͑late 9th and 10th century AD͒, the introduction of higher of lead in the glaze ͑up to 15 wt % PbO͒ resulted in the formation of gold-like silver lusters.
Figure 7 shows two samples from the Ashmolean Museum-P37 and P32-with the characteristic green silver lusters from early 10th AD, Iraq. 6The lack of golden like shine for sample P37 is clearly linked to the low lead content of the glaze unlike P32.Later luster productions from Egypt ͑Fustat, 10th-11th AD͒, Iran ͑Kashan, 12th-13th AD͒, Syria ͑12th͒, Islamic ͑Malaga, 13th century AD͒, and Christian Spain ͑Paterna 14th century AD͒ included lead in the formulation of the glazes ͑between 20 and 45 wt % PbO͒.There is only one exception in the Islamic luster productions, the copper luster layers produced in Raqqa, Syria, in the 13th AD ͑Ref.6͒ that were obtained over pure alkaline glazes.In perfect agreement with our results these copper lusters are a red color and never show metallic shine ͑Fig.7, sample Praqqa͒.On the contrary, coppery lusters were always obtained on high-lead containing glazes, as is observed in a typical luster from 17th century AD Barcelona ͑Fig.7, sample Salt-11͒.
V. CONCLUSIONS
An EIBS study of copper luster layers produced in laboratory conditions using the medieval recipes with both leadfree and high lead glazes ͑32 wt % PbO͒ has been performed.Analysis of EIBS data gave the thickness and copper content of the luster layers from which the metal particle volume fractions and D / d were obtained.The results indicate that the metal-like reflectivity shown by the luster layer produced on the high lead glaze is related to a high density of particles in the layer and bigger nanoparticles size ͑D / d ϳ 1.2͒, while the lack of metal shine of the luster layers produced on lead free glazes is related to a lower particle density and nanoparticles size ͑higher D / d͒.
The use of lead rich glazes seems one of the relevant technological parameters for the production of metal-like shining lusters.The study of early Islamic lusters agrees with this. 6,30The results obtained in this article suggest that the evolution of the luster technology during 9th and 10th centuries AD is directly related to the introduction of lead in the glaze formulation.
TABLE I .
at. % composition of the glass/glazes determined by electron microprobe and EIBS analysis of the surface.
FIG. 1. Summary of the main characteristics of the luster layers studied. 1 Calculated from the glazes composition following Ref. 26.
TABLE II .
Copper and glass weight fractions calculated from the EIBS data and copper volume fraction ͑f͒.The mean interparticle distance ratio ͑D / d͒ between copper particles is calculated from f considering a random arrangement of nanoparticles using the exact solution given in Ref.27. | 7,946.4 | 2007-05-23T00:00:00.000 | [
"Materials Science"
] |
Black Droplets
Black droplets and black funnels are gravitational duals to states of a large N, strongly coupled CFT on a fixed black hole background. We numerically construct black droplets corresponding to a CFT on a Schwarzchild background with finite asymptotic temperature. We find two branches of such droplet solutions which meet at a turning point. Our results suggest that the equilibrium black droplet solution does not exist, which would imply that the Hartle-Hawking state in this system is dual to the black funnel constructed in \cite{Santos:2012he}. We also compute the holographic stress energy tensor and match its asymptotic behaviour to perturbation theory.
Introduction
The discovery of Hawking radiation and its associated information paradox has led to a deeper understanding of quantum gravity, and formed a basis for the development of holography and the AdS/CFT correspondence [2,3,4]. Recently, there have been many attempts to use holography to further our understanding of Hawking radiation. In particular, while Hawking radiation is mostly understood for free fields on black hole backgrounds, the authors of [5,6,7] apply AdS/CFT to the study of Hawking radiation when these fields are strongly interacting.
The AdS/CFT correspondence conjectures the equivalence between a large-N gauge theory at strong coupling to a classical theory of gravity in one higher dimension. The correspondence gives us the freedom to choose a fixed, non-dynamical background spacetime for the gauge theory, which translates to a conformal boundary condition on the gravity side. For a gauge theory background B in D − 1 dimensions, this amounts to solving the D-dimensional Einstein's equations with a negative cosmological constant with a boundary that is conformal to B. For the moment, let us consider the case where B is an asymptotically flat black hole of size R and temperature T BH . Let's also suppose that far from the black hole, the field theory has a temperature T ∞ . The authors of [5] conjectured two families of solutions that describe the gravity dual. They argue that in the bulk gravity dual, the thermal state far from the boundary black hole is described in the gravity side by a planar black hole, while the horizon of the boundary black hole must extend into a horizon in the bulk. These two horizons are either connected, yielding a black funnel or disconnected, yielding a black droplet. These are illustrated in Fig. 1.
In the field theory, the difference between these families is manifest in the way the black hole couples to the thermal bath at infinity. The connected funnel horizon implies that the field theory black hole readily exchanges heat with infinity. On the other hand, the disconnected droplet horizons suggest that the coupling between the boundary black hole and the heat bath at infinity is suppressed by O(1/N 2 ). Indeed, unless T BH = T ∞ , the funnel solutions would exhibit a "flowing" geometry 1 . The droplet solutions, however, are necessarily static for a static boundary black hole. A phase transition between these two families would resemble a "jamming" transition in which a system moves between a more fluid-like phase and a phase with more rigid behaviour. Based on gravitational intuition for the stability of the bulk solution, it was conjectured in [5] that funnel phases should be preferred for large RT ∞ , while droplets should be preferred for small RT ∞ .
In order to test these conjectures, one would need to construct corresponding droplet and funnel solutions. Droplet solutions are simpler to construct when T ∞ = 0. In this case, the planar horizon in the droplets becomes the AdS Poincaré horizon. Such droplet solutions were constructed in [8] for a Schwarzschild boundary, and in [9,10] for a boundary that is equal-angular momentum Myers-Perry in 5 dimensions. There is also an analytic droplet based on the C-metric with a three-dimensional boundary black hole [11]. Static funnel solutions (that is, with T BH = T ∞ = 0) were constructed in [1], for a Schwarzschild boundary and for a class of 3-dimensional boundary black holes.
Unfortunately, none of these solutions can be directly compared with each other. The T ∞ = 0 droplets will compete with a funnel that flows to zero temperature, and the static funnels compete with a droplet solution with equal temperature horizons. Neither of these solutions have been constructed.
In this paper, we shed light on the droplet and funnel transition by numerically constructing new black droplet solutions with T ∞ = 0. As in [1,8], our boundary metric is Schwarzschild. We find that there can be two black droplet solutions for a given T ∞ /T BH . These merge in a turning point around T ∞ /T BH ∼ 0.93, which suggests that Schwarzschild black droplets in equilibrium do not exist.
We use a novel numerical method to construct these geometries. It joins three existing numerical tools: transfinite interpolation on a Chebyshev grid, patching, and the DeTurck method. This method is not only useful for the construction of the solutions detailed here, but can be used in a broader sense with modest computational resources -see for instance [12] where this method was used to construct black rings in higher dimensions. In particular, the fact that we use transfinite interpolation on a Chebyshev grid means we do not require overlapping grids for the patching procedure 2 , which in turn not only simplifies the coding of the problem but also decreases the need for larger computational resources.
In the following section, we detail our numerical construction of these solutions. In section 3, we investigate these solutions by computing embedding diagrams and the holographic stress tensor and matching our results to perturbation theory. We make a few concluding remarks in section 4.
Choosing a Reference Metric
We opt to use the DeTurck method which was first introduced in [13] and studied in great detail in [8]. This method alleviates issues of gauge fixing and guarantees the ellipticity of our equations of motion. The method first requires a choice of reference metricḡ that is compatible with the boundary conditions. One then solves the Einstein-DeTurck equation where ξ µ = g αβ Γ µ αβ +Γ µ αβ , andΓ µ αβ is the Levi-Civita connection forḡ. For the kinds of solutions we are seeking, a maximal principle guarantees that any solution to (2.1) has DeTurck vector ξ = 0, and is therefore also a solution to Einstein's equations [8].
To find a black droplet suspended over a planar black hole, the chosen reference metric must have a planar horizon, a droplet horizon, a symmetry axis, and a conformal boundary metric. Furthermore, the reference metric must approach the planar black hole metric in the right limit. Thus, the integration domain is schematically a pentagon. Most numerical methods for PDEs use grids that lie on rectangular domains, but these methods can be extended to a pentagonal domain by patching two grids together. Because of the difference in geometry between the two horizons, we will patch together two grids in different coordinate systems, each adapted to one of the horizons.
To motivate our choice of reference metric, let us first begin with AdS D in Poincaré Notice that fixing the time and angular coordinates gives us a two-dimensional space that is confomally flat. This two-dimensional space in the line element (2.2) is written in Cartesian coordinates that can be adapted to a planar horizon. We can also move to polar coordinates which are more suitable for a droplet horizon. Therefore, we now search for a reference metric with a conformally flat subspace that also contains a droplet horizon and a planar horizon.
To do this, let us first write the planar black hole in conformal coordinates. We begin with the usual line element for the planar black hole solution in D bulk dimensions: which gives us a line element of the form for some functionsg,f , and constantλ. This line element has our desired conformal subspace. For a boundary metric that is conformal to Schwarzschild, we find it numerically desirable to redefine the coordinates to which yields The planar horizon is located at the hyperslice y = 1/λ. The constant λ (orλ) sets the temperature of the black hole and can be related to Z 0 in (2.3). The functions f and g (orf andg) are smooth, positive definite, and depend on the temperature. They can be determined by integrating (2.4) and inverting the resulting Hypergeometric function 3 . To determine the integration constant, we choose g(0) = f (0) = 1. Now let us write down a line element (not necessarily a solution of Einstein's equations) that has a single droplet horizon in conformal coordinates. We search for something of the form where we have chosenf ρ to be a function of √ z 2 + r 2 in anticipation of moving to polar coordinates. The functionf ρ is determined by a choice of conformal boundary metric ds 2 ∂ . At the boundary z = 0, we must have for some conformal factor ω. For a boundary metric that is conformal to Schwarzschild, We find that it is convenient to set t = 4τ . This then uniquely specifies the functionf ρ , which together with (2.9) gives us our droplet line element in conformal coordinates. Switching to the polar coordinates gives us By construction, the droplet horizon is at ρ = 1 and its temperature (with respect to the time coordinate τ ) matches the temperature of the boundary Schwarzschild black hole. Additionally, the line element (2.14) can be used as a reference metric to reproduce the results of the solution in [8]. Now we can attempt to combine the planar and droplet line elements to create our desired reference metric. Guided by the similarities between (2.5) and (2.9), the reference metric we have chosen is where we treat g and f y as functions of the coordinate y, and f ρ as a function of the coordinate ρ. The x, y coordinates are related to the ρ, ξ coordinates through (2.6) and (2.13): The reference metric (2.16) has a regular planar horizon at y = 1/λ, a regular droplet horizon at ρ = 1, and an axis at x = 0 (or ξ = 0). Near x = 1, we recover the planar black hole metric as written in (2.7). Since g(0) = f (0) = 1, near y = 0 or ξ = 1 we have (in the ρ, ξ coordinate system) We can see that this is equivalent to Schwarzschild (2.11) by performing the coordinate transformation We have thus found a reference metric that is compatible with our desired boundary conditions. By construction, this reference metric can be written in two orthogonal coordinate systems, with all boundaries in our domain being a constant hyperslice in at least one of these two coordinate systems. Furthermore, in the λ → 0 limit, our reference metric becomes the droplet metric (2.14), which is an appropriate reference metric for a droplet without a planar black hole.
We have two parameters given by λ and R 0 , which determine the temperatures T ∞ and T BH , respectively. This system, however, only has one dimensionless parameter given by the ratio T ∞ /T BH , so we have one remaining gauge degree of freedom which we can choose for numerical convenience.
Ansatz and Boundary Conditions
With a reference metric in hand, we can now write down a metric ansatz: where T c , A c , B c , F c , and S c are functions of the Cartesian coordinates x and y, and T p , A p , B p , F p , and S p are functions of the polar coordinates ρ and ξ. Since we must demand that the metric is equivalent between these two coordinate systems, the functions are related to each other via where we used the coordinate transformations (2.17). Now let us discuss boundary conditions. At the boundary y = 0 or ξ = 1, we must recover a metric conformal to Schwarzchild. This was already done in the reference metric, so we choose Similarly, we must recover the planar black hole at x = 1 and impose The remaining boundary conditions are determined by regularity. At the planar horizon y = 1/λ, we need At the axis, x = 0 or ξ = 0, we require Finally, at the droplet horizon ρ = 1, .
Numerics
To solve the equations of motion numerically, we employ a standard Newton-Raphson relaxation algorithm using pseudospectral collocation. To choose a suitable grid, we first divide the entire integration domain into two patches, one in each coordinate system. We then place a spectral grid on each patch using transfinite interpolation on a Chebyshev grid. An example of such a grid is shown in figure Fig. 2. In addition to imposing the boundary conditions, we require the smoothness of the metric across patches. This amounts to requiring (2.22) and the equivalent expression for normal derivatives across the patch boundary. We obtained our first solution by using the reference metric as a Newton-Raphson seed.
Since it has been proven that the DeTurck vector ξ = 0 for any solution of (2.1) satisfying boundary conditions such as those appearing here [8], we can use this quantity to monitor our numerical error and test the convergence of our code. As seen in Fig. 3, our numerics converge exponentially with increasing grid size, as predicted by pseudospectral methods. All of our results presented below have |ξ| 2 < 10 −10 . We have also verified that our results do not change when we vary the location of our patch boundary or when we change λ and R 0 while keeping T ∞ /T BH fixed. 3 Results
Embedding and Distance Between the Horizons
To get a sense for the relationship between these two horizons, in Fig. 4 we plot the proper distance between the horizons along the axis of symmetry as a function of temperature. For small T ∞ /T BH , there are solutions with a large distance between the black droplet and the planar black hole. These are solutions which are close to the T ∞ = 0 solution found in [8]. As we follow these solutions with increasing T ∞ /T BH , we find that the proper distance decreases until T ∞ /T BH ∼ 0.93. At this value there is a turning point where the proper distance continues to decrease only if we decrease T ∞ /T BH . These results suggest that T ∞ /T BH ∼ 0.93 is a critical temperature above which only (possibly flowing) funnel solutions exist. In particular, the equilibrium state would be the funnel solution found in [1]. To help us understand the geometry of the solutions, we embed the two horizons in Euclidean hyperbolic space: Demanding that the pullback of hyperbolic space to a curve γ(x) = (z(x), r(x)) is equal to the pullback of our solution to the horizon gives a system of ODEs in z(x) and r(x). We solve these ODEs numerically to obtain our embedding diagram. The embeddings of the droplet horizon and planar horizon are shown in Fig. 5. The size of the droplets at the boundary is normalised to 1, and the location of the planar black hole far from the droplet is also normalised to 1. Starting at small T ∞ /T BH , the droplet horizon looks very similar to that of [8], and the planar horizon is approximately flat. As we increase T ∞ /T BH , we see that even past the turning point, the droplet horizon continues to lower itself deeper into the bulk and the centre of the planar horizon continues to rise towards the boundary. Based on the shape of these solutions from the embedding diagram, we call our two branches of droplet solutions long dropets and short droplets. Similar behaviour has been observed for black droplets in global AdS [14].
Eventually, our numerics break down and we are unable to continue the long droplets any further. We can only conjecture a number of possibilities. One scenario is that the long droplets continue to exist down to T ∞ = 0, these solutions may join with the AdS black string. In this case, one might reinterpret the naked singularity of the string as a degenerate droplet/funnel merger point.
Another possibility is that the two horizons merge at some finite temperature ratio towards a funnel. At the merger, they would reach a conical transition. Since the two horizons are not at the same temperature, this would mean a transition between a static solution to a stationary one with some amount of flow. But going a small amount across a conical merger should not change the geometry far from the cone significantly, so the amount of heat flux at infinity should be small. If this picture is correct, this would mean that there are two types of flowing funnel solutions, one with a narrow neck and small flow, and one with a wider neck with larger flow. Though, like the caged black holes [15], it is also possible that there is no stationary solution on the funnel side of the merger, and the solution necessarily becomes dynamical and possibly evolves into a wide flowing funnel.
Stress Tensor
Now we compute the boundary stress tensor. The procedure we use is similar to those of [16]. We expand the equations of motion off of the boundary in a Fefferman-Graham expansion, choosing a conformal frame that gives Schwarzschild on the boundary. We can then read off the stress tensor from one of the higher order terms in the expansion. There is no conformal anomaly in our case because we have chosen a boundary metric that is Ricci flat. boundary black hole, the stress tensor fits the form where k 0 is the boundary stress tensor for a bulk planar black hole. This R −1 behaviour was also found for the funnel solutions in [1].
In the insets of Figs. 6, and 7, we subtract k 0 from the stress tensor, take an absolute value, and plot the result using a Log-Log scale. Note that there are clearly two power-law regimes. Far from the black hole, we see a R −1 power law, similar to that of a funnel. Closer to the black hole, we see a R −5 power law, similar to that of the droplets found in [8].
This dual power-law can be explained from the bulk perspective. The presence of the droplet warps the planar horizon, making it funnel-like far away. This is most easily seen in our embedding diagrams in Fig. 5. This funnel-like behaviour gives the stress tensor a R −1 power law. Closer to the droplet, the physics near the boundary is dominated by the hotter droplet horizon rather than the planar horizon, giving a R −5 droplet behaviour. As the distance between the horizons decreases, this R −5 behaviour becomes more obscured.
In Fig. 7 we can see that both long and short droplets have the same large R behaviour, suggesting that this is universal. Indeed, we shall match this behaviour with perturbation theory in the next section. Fig. 6.) The larger red curve is the short droplet while the smaller blue curve is the long droplet.
Matching with perturbation theory
Far away from the axis of symmetry of the droplet, i.e. close to x = 1 in Eq. (2.21a), perturbation theory should be valid. This region can solely be studied using standard perturbation theory techniques around the planar black hole line element (2.3). For concreteness, we will take D = 5, even though our procedure admits a straightforward extension to arbitrary D.
We first note that the planar black hole can be written as where dE 2 3 is the line element of three dimensional Euclidean space. Following [17], we can decompose our perturbations according to how they transform under diffeomorphisms of E 3 . These can be decomposed as tensors, vectors or scalar derived perturbations. Here, we are primarily interested in scalar perturbations. Its basic building block are the scalar harmonics on E 3 , which satisfy the following simple equation Furthermore, we are interested in perturbations that do not break the 2−sphere inside E 3 , so we only have radial dependence in S. These can be computed and we find A general perturbation can be decomposed as where lower case latin indices run over {t, Z} and upper case latin indices run over coordinates in E 3 . In addition, we are interested in non-normalizable perturbations that are time independent. This means we can set f tZ = f t = 0. We are thus left with two gauge degrees of freedom, corresponding to reparametrizations of Z and R. We fix this by demanding f Z = 0 and H T = 0. We are thus left with three variables: f tt (Z), f ZZ (Z) and H L (Z). The Einstein equations automatically fix f ZZ as an algebraic function of f tt and H L : The remaining Einstein equations reduce to two first order equations in H L and f tt , which we reduce to a single second order equation in f tt : where we performed the coordinate transformation Z 2 = w and defined Z 2 0 = w 0 . Before proceeding to determine the solution, let us first discuss the boundary conditions. Recall that at the boundary we need to recover the Schwarzschild line element (2.11) expanded at large values of R. This is equivalent to demanding: This boundary condition picks α = 0, and without loss of generality we take C 2 = 2 . For this choice, Eq. (3.5) admits a simple analytic solution: where A and B are constants to be chosen in what follows. Regularity at the black hole horizon and the boundary condition (3.6) demand A = R 0 /Z 4 0 and B = −R 0 /Z 6 0 . The full metric perturbation can be reconstructed from Eq. (3.7) and is given by: 8) where we parametrize the 2−sphere in the standard way dΩ 2 2 = dθ 2 + sin 2 θdφ 2 . This metric perturbation does not seem to have a boundary metric perturbation that approaches the large R behavior of the Schwarzschild line element (2.11). However, this is an illusion of the gauge we choose to work in. If we perform a gauge transformation with gauge parameter ξ = − 2 R 0 /(2 Z 2 ) dR, we bring the metric perturbation (3.8) to which manifestly exhibits the boundary metric we desire.
It is now a simple exercise to determine the perturbed stress energy tensor in terms of the boundary black hole temperature T BH and planar temperature T ∞ : This should be the leading asymptotic behavior of the holographic stress energy tensor of the droplet solution as we approach R → +∞. This is partially confirmed by [1] where the stress energy tensor is found to be consistent with (3.10) if T ∞ = T BH = T Schwarzschild . A linear fit of our log-log plots agrees with (3.10) to less than 0.1%.
The next correction should appear at O(R −2 ) and can be computed using a similar approach, albeit with a more tedious calculation. Based on our solution at smaller R, we expect the first undetermined coefficient in the R = +∞ expansion to appear at O(R −5 ). In particular, the difference between droplets and funnels holographic stress energy tensors should only appear at O(R −5 ).
Discussion
To summarise our findings, we have numerically constructed Schwarzschild black droplet solutions suspended over a planar black hole. These solutions are dual to the "jammed" phase of a large N strongly coupled CFT. We find two branches of droplets: long and thin, and that these solutions only exist below a critical temperature T ∞ /T BH ∼ 0.93. We have computed their stress tensor and find generically two power-law regions corresponding to a droplet-like falloff of R −5 and a funnel-like falloff of R −1 .
It would be interesting to study the stability of these droplet solutions. The short droplet with T ∞ = 0 were argued to be stable in [8]. If they are, then it seems likely that short droplets for small temperature ratios are also stable. The long droplets, on the other hand, may be unstable to forming a flowing funnel, or perhaps a short droplet.
If all of our short droplets remain stable, then the critical temperature might be interpreted as a "melting" or "freezing" point. Consider a short droplet at small T ∞ /T BH . Keeping the boundary black hole fixed, suppose we slowly increase the temperature T ∞ . If we do this slowly enough, the dynamical solution should remain close to the static solution. Eventually, these static droplets no longer exist, so the system must become fully dynamical, perhaps evolving into a flowing funnel. The rigid behaviour of the droplet transitions into the more fluid behaviour of a funnel.
Unfortunately, we cannot directly compare the long and short droplets to each other. These solutions are not at equilibrium, so their free energy is not well-defined. One can in principle still compare their entropies and energies. These quantities are formally infinite, but can be regulated by subtracting the large R behaviour obtained via perturbation theory. Unfortunately, these quantities are finite only after subtracting down to an O(R −4 ) behaviour, which is beyond our numerical control.
To complete our understanding of solutions with a Schwarzschild boundary, the flowing funnels need to be constructed. These solutions would require non-Killing horizons, such as those in [18,19,20]. Additionally, In our solutions, the droplet horizon has the same temperature as the boundary black hole. It is possible to detune so that these temperatures are not equal [19].
In our study, we have focused on boundary black holes that correspond to four-dimensional Schwarzschild. These boundary black holes do not need to satisfy any field equations, so we are free to choose any metric. It would be interesting to see what changes as we vary the boundary black hole. For instance, equilibrium droplets or droplets with T ∞ /T BH > 1 may exist, particularly for boundary black holes that are small relative to their temperature. | 6,158.6 | 2014-05-08T00:00:00.000 | [
"Physics"
] |
Application of Bayesian network and regression method in treatment cost prediction
Charging according to disease is an important way to effectively promote the reform of medical insurance mechanism, reasonably allocate medical resources and reduce the burden of patients, and it is also an important direction of medical development at home and abroad. The cost forecast of single disease can not only find the potential influence and driving factors, but also estimate the active cost, and tell the management and reasonable allocation of medical resources. In this paper, a method of Bayesian network combined with regression analysis is proposed to predict the cost of treatment based on the patient's electronic medical record when the amount of data is small. Firstly, a set of text-based medical record data conversion method is established, and in the clustering method, the missing value interpolation is carried out by weighted method according to the distance, which completes the data preparation and processing for the realization of data prediction. Then, aiming at the problem of low prediction accuracy of traditional regression model, this paper establishes a prediction model combined with local weight regression method after Bayesian network interpretation and classification of patients' treatment process. Finally, the model is verified with the medical record data provided by the hospital, and the results show that the model has higher prediction accuracy.
Introduction
In recent years, with the rapid growth of medical and health care expenditure, the public health system has also exposed some problems, such as uneven distribution of medical resources, unable to meet the growing medical needs and so on [1][2][3]. Through the patient diagnosis information research disease treatment cost prediction is an important way to promote the pricing mechanism reform, control the unreasonable growth of medical costs, reduce the burden of patients, and affect the main causes of treatment costs. Factors often include the patient's condition, treatment options and treatment cycle.
For a long time, the industry has carried out extensive and in-depth research on the prediction of medical expenses, and achieved remarkable results [4][5][6][7]. The existing methods can be divided into two categories: diagnosis based and data-based prediction. According to the historical treatment cost of a large number of the same patients, the data-based prediction method approximately predicts the possible cost of current patients through machine learning method. Wang et al. [8] studied the daily hospitalization number and medical expenses of patients with mental disorders from January 1, 2011 to December 31, 2015. Time series analysis was established to estimate the total annual health expenditure, hospitalization expenses and annual medical expenses of mental disorder patients. This study shows the long-term trend of total direct medical expenses for mental illness, and forecasts the results. Chen et al. [9] Open Access *Correspondence<EMAIL_ADDRESS>1 Cancer Hospital of China Medical University, Shenyang, China Full list of author information is available at the end of the article studied the influence of the number of health workers employed in public health care sector, the number of population and the number of Inpatients Per 100 people on the total health expenditure from 2003 to 2011 in Serbia Growth. By using statistical analysis and multiple linear regression analysis, the author concluded that the growth of health workers during this period strongly promoted the growth of total health expenditure. Data based methods are usually suitable for the situation of long time series and large amount of data. At this time, data statistical methods can better find the rules of variables and the relationship between variables. However, this also limits that when the amount of data is small or the data distribution does not have a long time span, the statistical methods often have large errors. The diagnosis method is based on the disease as the starting point, through mathematical methods to explain the pathological characteristics to achieve the purpose of prediction. Compared with the data-based method, this method is more targeted, and more suitable for the case of small amount of data or small number of diseases. Qing and Liu [10] proposed that it is necessary to pay more attention to the disease itself in the study. At the same time, four kinds of predictive variables were established in the study. By using multiple regression analysis and back propagation neural network, the factors influencing the medical expenses of single disease cataract were found out, and the acceptable medical expenses were predicted by two regression models. Kim and Park [11] used medical examination data, laboratory test, self-reported medical history and selfreported health behavior data to establish high-cost user prediction model by three methods: logical regression, random forest and neural network model, and determined the characteristics of medical examination as the prediction factor of high-cost users. This method mainly aims at the cost prediction of high-cost medical users, so the data set itself is a high-cost user, which not only limits the needs of ordinary users, but also can not see the role of ordinary users in the medical structure system. At present, diagnosis based research is mostly to single disease. And the amount of data is not very huge. One reason is that the method pays more attention to the law of the disease itself rather than the change of the value. The other reason is that the research methods used in the method are more regression analysis or simple neural network [9,[12][13][14]. Therefore, the diagnosis method is to establish a model to simulate the development of the disease and determine the diagnosis and treatment plan. At present, the commonly used regression methods are logistic regression, multiple linear regression and so on. These methods focus more on the relationship between independent variables and dependent variables, so they lack the cause and effect of the disease itself.
In this paper, we propose a Bayesian network and regression analysis method to predict the cost of treatment by using the diagnosis cases. Firstly, based on the patients' cases, the disease influencing variables are extracted. And the variable transformation and missing value processing are carried out. The redundant variables are eliminated through correlation analysis, and the data set of the model is obtained. Then, the Bayesian network is used to simulate and analyze the multi category disease description variables, and the patients are divided into different treatment schemes. The local weighted LASSO regression analysis is carried out under each treatment scheme, and a higher accuracy prediction model is obtained. Finally, the model is validated by the data of colon cancer patients in a hospital, and the prediction effect of the model is compared with that of regression model and neural network model. The results show that the proposed method can better simulate the diagnosis and treatment method selection, and its prediction accuracy is better than that of traditional regression method and neural network model.
Data preprocessing
This paper intends to use mathematical thinking to deal with medical problems, so the selected variables should have a certain mathematical form on the premise of representing the patient's condition as much as possible. In this paper, we selected four kinds of disease information which can affect the cost, including patient's age, gender, surgery history, treatment plan, past pathology and disease condition, current admission condition, smoking condition, diabetes mellitus, hypertension, etc. They are: patient's explanation, history of present illness, medical history and personal history. Main complaints refers to the explanation of symptoms and personal information by patients or the question and answer between doctors and patients. History of present illness is the patient's present symptoms. It also includes the current treatment in the process of the disease. Medical history is that the patient had other operations and other chronic diseases before. Personal history represents the patient's living place, smoking, drinking and other information.
Most of these variables are written descriptions, which can not be used as formula variables for operation, so the first step in data processing is to digitize the text information. At the same time, because the case records are from different doctors, the format is not uniform, so there will be some missing variables. Therefore, in order to maximize the role of data, the missing variables should be deleted or the difference should be made according to their accuracy.
Text variable digitization
There are many descriptive variables in text variables, such as patient's gender and medical history, which can not be directly used as model input. Moreover, it is difficult to establish a unified conversion standard for these variables due to the disease and other reasons. Therefore, this paper develops a set of text numerical method suitable for the current disease through mathematical thinking, which is verified by experts.
Electronic medical record contains not only numerical variables, but also some descriptive variables. In the model, numerical variables can be used for calculation, and descriptive variables also have an important impact on the prediction of the patient's condition. Therefore, this paper first establishes a unified standard for descriptive variables in medical records. After the analysis of descriptive variables in medical records, they can be divided into two categories. One is qualitative descriptive variable, the other is degree descriptive variable. Qualitative descriptive variables, such as abdominal pain, smoking, etc., are represented by a value of 0-1. Degree descriptive variables such as mild abdominal distension and recurrent hematochezia also need to be converted into numerical type. In this paper, all data sets are traversed and all degree descriptive variables are selected. In this paper, it is considered that the description first has the property characteristics, and then has the degree description. Therefore, in the numerical conversion of the variable, the characteristics of the variable should be taken into account. In this paper, firstly, the basic value is given according to the characteristics, then the severity is divided into different levels and scored according to different levels. Finally, the basic value and severity value are weighted to get the final variable value. The conversion function is as follows.
where y is the variable value after transformation, b is whether the patient has the symptom, 0 is not exist, non-0 is exist, a is the total number of the disease degree levels, and x is the degree level value of the patient.
The numerical method is based on the guidance of doctors and experts, and is compatible with the qualitative and severity characteristics of symptom feature description. It is a conversion method defined to infer the severity of disease. This method adopts the linear formula method, which can still maintain the discrete characteristics of discrete variables. It also unifies and standardizes the numerical format, which can standardize the numerical value and avoid unnecessary deviation of the model due to the span problem after data conversion.
Missing data processing
After further analysis of the data, there is a problem of missing some features in the electronic medical record, because it is not from the same doctor at the time of recording, and there is no unified standard. For the problem of missing data processing, the commonly used method is to interpolate the global mean value [15]. This method will lead to the same interpolation of the same kind of variables, and there is a large error.
In this paper, an improved method based on KNN (K-Nearest Neighbor) proximity method is proposed to calculate the missing value by weighting the distance between adjacent points. The missing data in medical records are processed according to the missing proportion. First, check the data in the variable with serious missing information. Each medical record is set as an array X i = [X i1 , X i2 , …, X i(n+1) ]. It contains n characteristic variables and one target variable. Each array may contain some missing values. For some medical records, when the missing value exceeds 20%, there is no further consideration in the model, because too much difference will lead to inaccurate prediction information.
For the data whose missing value is less than 20%, the k nearest variable data is used, and the missing value is estimated by weighted evaluation according to the distance. If the common distance average method is used to estimate the missing value, there will be a large error, so according to the nearest point of the variable, and according to the distance to allocate the weight, the prediction error can be reduced. In this paper, the weight is learned from the overall data distribution, which contains all the information, and can also be combined with local information to get the final accurate distance estimation.
where f(x) is the distance from the test point to the cluster center, W i stands for weight, D i represents the distance between the nearest neighbor i and the test point.
The traditional difference method is based on the global average interpolation, the difference does not have individual differences, which is suitable for the large amount of data to ensure that it does not affect the trend of the interpolation method. When the number of cases is small, the error of the traditional method will be magnified, and the method in this paper can interpolate according to the data near the real variables, which ensures the rationality, but also has a certain degree of heterogeneity. The values computed by the proposed method can be closer to the real value in the verification experiment. (
Bayesian and regression fusion method
The core of the fusion method of Bayesian network and regression analysis is to classify first and then regress, so as to predict the treatment cost of patients on the basis of describing the treatment process of patients. Due to the small amount of case data, and the large difference of drug cost and dosage between different treatment schemes, if all data are used for regression analysis, the error is large. Therefore, this paper does not use the statistical method, but uses the Bayesian network which can make full use of the characteristics of the disease.
Bayesian network
Bayesian network is a directed acyclic graph composed of nodes and a group of conditional probability tables between nodes. The graph model is mainly composed of two parts: the network topology of nodes and the conditional probability table of nodes [16].
The generation of Bayesian network mainly includes two parts: one is to determine the dependence between variables, that is, to determine the network structure; the other is to determine the conditional probability between variables, that is, the weight of network nodes. The difficulty of determining the network structure is to traverse all the network structures, so the variables should be independent of each other, and the relationship between nodes should follow the Bayesian principle.
In order to classify the case variables, this paper analyzes the relationship between features based on features.
The variable x = (x 1 , x 2 , …, x n ) is defined, which has n features extracted from cases, and the class y = (y 1 , y 2 , …, y j ) refers to the J classifications extracted from cases. The purpose of Bayesian network is to explain the reason of category y with variable x. According to Bayesian principle, the formula is as follows: where P y j |x 1 , x 2 , . . . , x n is the conditional probability that a pair of variables is assigned to no category, P(y j ) is the probability value of each class, y j is the number of categories of variable X.
Based on the obtained patient feature information, each feature is discrete and independent of each other. In order to apply it to Bayesian network, the characteristic variable X is used to explain the reason of variable y after removing the high correlation. Variable X interprets class y, which is a mathematical interpretation method for (3) P y j |x 1 , x 2 , . . . , x n = P(y j )P(x 1 , x 2 , . . . , x n |y j ) P(x 1 , x 2 , . . . , x n ) (4) y j = argmaxP y j n i=1 P x i |y j patient diagnosis. The directed probability value of each variable is a process of restoring facts through mathematical methods. Therefore, in the process of training, we need to ensure the rationality of the network through multiple groups of cross validation.
The maximum likelihood estimation is a statistical method based on the maximum likelihood principle, which can be simply described as: suppose that a random trial has multiple possible results A, B, C, etc. If result A appears in a random trial, that is to say, the possibility of result a is very large, it can be considered that the test conditions are favorable for result a. The maximum likelihood estimation of Bayesian network is to calculate the value of a given parent node set, take the frequency of different values of each node as the conditional probability parameter of the node, and try to find the parameter that maximizes the likelihood function of the node.
Because the maximum point of the log likelihood function is consistent with the maximum point of the likelihood function, and the calculation is more convenient, the log likelihood function is often used to replace the likelihood function. The formula is as follows: where H and E are two random variables and L is likelihood function.
Bayesian network model can judge the treatment plan selected by patients through the case data. This part can not only provide suggestions for patients to choose treatment plan, but also make a certain classification for further regression analysis, so as to reduce the prediction error caused by the category difference of the data itself.
Local weighted LASSO regression
With the continuous development and improvement, the theory of multiple linear regression is relatively mature. It can find out the quantitative relationship between variables, describe the law of numerical change between statistical variables, and finally predict. It is an effective way to accurately learn the influence degree and direction of independent variables on dependent variables.
The linear regression equation describes how the dependent variable y depends on the independent variable and the error value ε. The equation can be written as follows: Among them, β 0 is the regression constant, β 1, β 2 . . . , β k is the regression coefficient, x 1 , x 2 , . . . , x k is the regression variable and ε is the error term.
In order to fit the data features in the model training, the error term in the model can be set as a variable. Its (5) l(H|E) = logL(H |E) (6) y = β 0 + β 1 x 1 + β 2 x 2 + · · · + β k x k + ε role is to reduce the impact of the number of data. In solving mathematical equations, the number of variables should not be greater than the number of conditions. Therefore, when the amount of data is not large enough, classification will cause the solution of the system to become unstable. Therefore, the function of the error term is to simplify the model and improve the generalization ability by setting the coefficients of some low action variables to 0 in the iterative process. In the experimental comparison model, LASSO linear regression method performs better in generalization ability. Therefore, in the setting of error term, the iteration can be set to order 1, and the iteration formula is as follows: where m is the number of samples, λ is the regularization coefficient, β j is the model parameter.
At the same time, an upper limit should be made for the coefficient of error term, so as to compress the coefficient of low action variable and simplify the parameters of the model. The qualification is as follows: when the value of σ is modified, the variable β will be amplified or compressed. When the variable β is compressed to a minimum, some variables will be infinitely close to 0. This variable can be regarded as a low action variable, and the action of this variable can also be regarded as 0. Therefore, this method can simplify the model structure and accurately fit the model when there are few variables.
A local weighting method is proposed. The weight function is as follows. The selection of weight function is based on the normal distribution of data distribution, so the iterative weight coefficient can be expressed as follows: In model training, variable X presents normal distribution characteristics according to the target variable value, so each time in model training, the weight can be adjusted according to the data distribution position, and the adjustment function is as above. In this method, the adjustment function is applied to the regular term, and the adjusted regular term is as follows: Because the random data often presents the normal distribution law, the data in this paper also has this characteristic after verification, so the local weighting method proposed in this paper can further highlight the role of important data, and can improve the prediction accuracy of the model on the premise of ensuring the complexity of the model.
Example analysis
In this paper, 240 cases of patients in the Department of Enterology in a hospital in Shenyang in March 2016 were selected as the validation data. After sample analysis and preprocessing, the prediction model of Bayesian network and regression analysis fusion was established, and the accuracy of the model was verified with the data obtained from preprocessing.
Sample data analysis
Through the preliminary analysis of the sample, the sample shows a normal distribution, without a large number of abnormal data, and the data set itself can be used for model validation. The sample distribution is shown in Fig. 1.
Each medical record consists of four parts, including patient's explanation, history of present illness, medical history and personal history. Each type of variable contains multiple sub variables. The total number of variables selected by the model is 26. Because some variables are highly correlated, LASSO model is used to select and 16 variables are left. The patient's explanation included (10) Fig. 1 Data distribution characteristics. Figure shows the cost distribution of case data. It can intuitively show the law of data distribution, so as to provide some ideas for the improvement of our model. Kernel Density Estimation (KDE) is a method to study the data distribution characteristics from the data sample itself gender A1, age A2, postoperative time A3 of rectal cancer, history of present illness including urethral symptoms B1, defecation symptoms C2, pathology B3, cancer metastasis B4, radiotherapy B5, chemotherapy B6, fever B7, medication B8, previous history including diabetes C1, hypertension C2, other surgical history C3, personal history including smoking D1, drinking D2. In cross validation, discrete random variables are selected as validation variables, which have a large distribution range and randomness. Here we select the variable age.
It can be seen from the table that the interpolation method in this paper can reduce the interpolation error to a certain extent compared with the traditional method, which is particularly important for the prediction results in the case of small amount of data.
The intermediate variable is the result of Bayesian network classification. According to the treatment plan, it can be divided into four categories: systematic treatment, primary chemotherapy, secondary chemotherapy and targeted therapy. The data distribution is shown in Fig. 2.
Analysis of prediction results
Input the processed data into the Bayesian network structure, this paper obtains the analysis structure diagram of each variable to the treatment scheme, and obtains the probability value of each node variable, and the structure is shown in Fig. 3.
In Fig. 3, the model is divided into seven layers. The first layer is the predicted target variable treatment plan. The second level is an important basic variable gender. In the result analysis, gender is an important influencing variable, which will have an important impact on the follow-up third level chemotherapy, radiotherapy, drug types and smoking. The fourth level is the upper level variables related to fever, urethral bleeding, alcoholism and other surgical history. From the relationship between the two variables, we can see that alcoholism has a certain tendency to smoking. The fifth layer is abnormal defecation, operation time of rectal cancer, metastasis and diabetes. The sixth layer is pathology and hypertension. In the relationship between hypertension and diabetes, hypertension has a certain tendency to diabetes. The seventh level is age, which shows that age is the most basic variable (Table 1).
After classifying the data for treatment scheme selection, the data will be regressed and fitted for each category to obtain the prediction accuracy of Bayesian network and LASSO regression model, and the accuracy of Bayesian network and LASSO regression model will be compared with linear regression, traditional LASSO regression and neural network model. The indexes of evaluation results are accuracy, mean square error (MSE) and R-Square (R2), as shown in Table 2 below. MSE is the average of the square of the difference between the predicted value and the true value. The range of R-squared is 0-1. The larger it is, the better the model fitting effect is.
It can be seen from the above table that the prediction accuracy of traditional linear regression model is low. The main reason is that the model is too complex and the generalization ability is poor due to too many independent variables. Therefore, in the traditional LASSO model, the coefficient of independent variable with low effect is set to 0, which greatly simplifies the complexity of the model and slightly increases the accuracy of the model. Neural network model has strong self-learning and adaptive ability, so the prediction accuracy of this model is better than linear regression model, but slightly lower than LASSO regression model. Therefore, in order to further optimize the prediction accuracy of regression model, regression analysis is carried out on the basis of Bayesian network classification. The prediction accuracy of the final model is improved to 89.14%. And it is 23.36% percentage points higher than that of the traditional LASSO regression model. The model optimization effect is significant. And the prediction accuracy is still 3.49% percentage points higher than that of the same regression model without classification. At the same time, the model can select the treatment plan for patients, and can provide more suitable plan for patients in practical application.
Conclusion
Based on the statistics and analysis of the patient electronic medical record, the characteristics of the patients' condition are extracted. Then distance weight is added to KNN method to estimate the missing value. Then, the cases were classified, and the classification results were This distribution can also reflect the distribution law of treatment plan of patients at this stage. In order to give a more detailed treatment plan and the prediction of the treatment plan to provide a certain reference modeled by the local weighted regression method. The prediction model of patients' treatment cost was successfully established, which was suitable for the situation of low amount of data and aimed at improving the prediction accuracy. Using the data provided by the hospital to verify the model, it is found that this method has higher accuracy than the traditional method, reaching 89.14%, 23.36% higher than LASSO regression model with the best effect in the comparison model, and 3.49% higher than the same regression model without classification. Based on the prediction of treatment cost, this paper can also recommend the treatment options for patients, and it is also the key to further improve the accuracy of the same method.
The establishment of the model provides a certain reference for the prediction of medical related expenses, and the processing of text medical records also provides a feasible method for the text to be used in data analysis. The next possible work will be to further improve the accuracy of classification, or reduce the prediction error in the case of wrong classification. Table 1 Cross validation of interpolation methods [17] Table 2 Comparison of accuracy of model prediction | 6,532.4 | 2021-10-16T00:00:00.000 | [
"Medicine",
"Economics",
"Computer Science"
] |
Explicit dynamics based numerical simulation approach for assessment of impact of relief hole on blast induced deformation pattern in an underground face blast
The exploitation of geo-resources is dominantly done using drilling and blasting. The breakage of rock mass by blasting has many challenges. The optimal breakage in an underground development face/tunnel blast is dependent on the relief area provided to the blast holes. This paper has discussed the impact of the number and diameter of the relief holes on the breakage pattern of the rock. The numerical simulation with varying numbers and diameter of relief hole was carried out for this purpose. The isosurface output from numerical simulation was plotted. The plot was used to compare the extent of deformation under varying conditions of relief holes. The analysis shows that the higher number of relief holes with optimum diameter gives more controlled deformation than single relief hole with larger diameter. The nearfield vibrations were also recorded by placement of seismographs. The waveform analysis of the recorded vibration was carried out. The redesigning of the blasting pattern was done using the results of numerical simulation and waveform analysis. The redesigned pattern consists of four relief holes of 115 mm diameter. It was found that the number of cut blast holes firing simultaneously should not be more than two in order to get the optimum breakage for the modelled condition. The blasting output with the revised design has resulted into the considerable improvements in the pull and reduction of overbreak. The revised pattern has addressed the issues of the socket formation at the site. The manuscript covers numerical simulation based approach for assessment of blast induced deformation in an underground face blast under different variations of diameter and numbers of relief holes. The numerical simulation based output reveals that the blast face shows more controlled deformation while using multiple number of relief holes of optimum diameter as compared to a single large daimeter relief hole. The numerical simulation output in this paper has been used to redesign the blasting pattern of the face blast of a Lead-Zinc underground mine. The manuscript covers numerical simulation based approach for assessment of blast induced deformation in an underground face blast under different variations of diameter and numbers of relief holes. The numerical simulation based output reveals that the blast face shows more controlled deformation while using multiple number of relief holes of optimum diameter as compared to a single large daimeter relief hole. The numerical simulation output in this paper has been used to redesign the blasting pattern of the face blast of a Lead-Zinc underground mine.
Introduction
The exploitation of mineral resources by underground needs the development of drives/drivages. These drivages works as the access to the orebody from the main access (shaft or decline) of the mine. The excavation in tunnel or drives of an underground metal mine is dominantly performed by drilling and blasting technique. The controlled blasting technique is practiced to optimise the blast induced deformation. The higher extent of deformation in rock strata is termed as overbreak. This overbreak leads to the irregular profile of the tunnel. On the other hand, blast designers also pose the problems of under breakage along the direction of excavation. This results in to the pull reduction and socket formation. The optimal rock breakage aims to maximize the pull and minimize the overbreak from the blast. The optimisation needs site specific investigations relating the geotechnical parameters of the rock mass. Bieniawski (1968) suggested that the study of fracture propagation characteristic of rock can lead to improve the efficiency of rock breaking using drilling and blasting.
The excavation in drivages of underground metalliferous mine dominantly follows the Burn-Cut pattern. This pattern has a set of cut holes containing empty (relief) as well as blast holes. Since the tunnel/drive excavation has free face along the single direction as compared to two free faces in case of bench blasting, the optimum explosive energy utilisation in the face blast requires additional free face (Murthy and Dey 2002;Verma et al. 2018). This requirement is accomplished by drilling the relief holes, which are kept uncharged during the process of blasting. Accordingly, the dimension of relief holes including its diameter, depth and the area of relief plays significant role in achieving the optimum rock breakage due to blasting (Singh 1995). However, the optimum relief hole dimension for a blast face is a function of rock mass properties, explosive properties, presence of geological discontinuities etc. Accordingly, the assessment of breakage pattern in a burn cut face blast with respective rock and explosive combinations will provide an idea for the optimum relief hole dimension. Sharma (2005) studied the impact of multiple relief holes on the breakage pattern of a face blast. Author found that the multiple relief holes in place of a single large diameter relief hole are more relevant in order to prevent freezing in the spongy rock mass (Sharma 2005;Allen 2014).
The pattern of induced deformation during burn cut face blasting would vary depending on the rock mass properties. The dominant rock mass properties which influences the deformation pattern includes dynamic compressive strength, dynamic tensile strength, elastic modulus, poison's ratio etc. Verma et al. (2016) found that the ultrasonic wave velocities are also important parameter influencing the strata requirement for detonation velocity of explosive. The development blast faces of the underground metalliferous mines are also under the influence of insitu stresses. These insitu stresses also have impact on the damage pattern of a face blast. Researchers have explained that the prestressed strata are prone to show higher magnitude of overbreak under similar explosive loading condition (Mandal and Singh 2009;Abdel-Meguid et al. 2003;Xiao et al. 2019). Mandal and Singh (2009) emphasized to do excavations in small sections and phases in order to minimize overbreak and reduce peripheral damages under highly stressed strata conditions. Verma et al. (2014) found that the presence of geological discontinuities also has impact on excavation profile of a face blast. The explosive parameters also influence the induced damage due to blasting. Researchers such as Mandal et al. (2005), Bullock and Rostami (2013), Himanshu et al. (2021a), Singh (2018, Vishwakarma et al. (2020) etc. discussed the impacts of detonation velocity, energy of explosive and accuracy of delay detonators on blasting outputs.
Various researchers have used statistical and numerical approach for predictions of blast induced damages. Statistical algorithms such as-neural network, genetic algorithms, colony optimisation algorithm, random decision tree, particle swarm optimisation, support vector machine etc. has been used extensively for prediction of blasting outputs under different scenarios (Rezaeineshat et al. 2020;Saghatforoush et al. 2016;Monjezi et al. 2010;Kumar et al. 2021;Zhang et al. 2020;Hasanipanah et al. 2015). These statistical algorithms are based on the data analysis for the experimental blasts carried out at the site. The numerical models however have advantage of simulating the rock mass conditions of the site with specific parametric condition. The parameters in the numerical models can be explicitly distinguished. The respective blasting outputs with the variation of any parameter can be recorded using this approach.
Researchers such as Himanshu et al. (2021b), Xia et al. (2021), Das and Singh (2021) Pan et al. (2021), Wang et al. (2018), Onederra et al. (2013), etc. have used numerical simulation approach for assessment of blast induced damages under various conditions. Mitelman and Elmo (2014) have used a hybrid finite discrete element based modelling for predicting damage induced by blasting in tunnelling operation. Xu et al. (2015) predicted blast induced rock fracture near a tunnel using numerical simulation.
Explicit dynamics
The explicit dynamics solver has been used for the numerical simulation in this study. The solver uses the Central Difference Time Integration scheme for computation of nodal acceleration. In this solver, the forces are computed at the nodes (resulting from internal stress, contact, or boundary condition), and the nodal acceleration are derived by dividing forces by mass. Once the nodal acceleration is determined, the nodal velocity and displacement at a time is further computed using integration. The dynamic loading condition has been simulated in this study. The material subjected to dynamic/high impact loading shows non-linear behaviour. In implicit modeller, the non-linear equations are solved by converting it as linear approximation using Newton-Raphson method. Accordingly, the iterations are required in each time step to achieve convergence. However, the non-linear equations are uncoupled in explicit solvers. So, no iteration or convergence check in each time step is required in such case. Accordingly, the explicit solver is more efficient for solving non-linear equations from high impact loadings. Hence, this solver has been used in this study.
Experimental site details
The outcomes of the numerical simulation were used to redesign the face blasting pattern of an experimental site. The site was located at Rajpura Dariba mine. The physico-mechanical properties of the rocks were also tested for the same mine. The mine is located in the southern extremity of Rajpura-Dariba Bethunmi metallogenic belt in Rajsamand district of Rajasthan state of India.
Dariba-Bethumni metallogenic belt comprises an assemblage of medium to high grade metamorphic equivalents of orthoquartzites, carbonates and carbonaceous facies rocks belonging to Bhilwara super group. This cover sequence is underlain by basement rocks (gneisses and schist) of Mangalwar Complex. The Lead-Zinc mineralisation occur in this belt in various sizes and grades. The orebody mainly contains calc-silicate bearing dolomite and graphite mica schist horizons. The geological map of Dariba-Bethumni metallogenic belt is shown in Fig. 1 (Sugden et al. 1990;Gupta et al.1995;Mishra et al. 2006).
The mine area mainly constitutes a sequence of meta-sediments consisting of mica schist, calcareous biotite schist and graphite mica schist (from footwall to hangingwall). Calc-silicate bearing dolomite occurs within the graphite mica schist horizon towards its contact with the calcareous biotite schist.
Existing face blasting method of the experimental site
The development face blasting at the experimental site is carried out to make drivages and cross-cuts. The drivages are made in the contact of the ore body to have the initial access to the ore body. Cross-cuts are usually driven across the ore body. Sugden et al. 1990;Gupta et al, 1995;Mishra et al. 2006) properties are shown in The ends of the specimens were flattened and the sides of the specimen were smoothened. The diameter of the specimen was measured to the nearest 0.01 mm and used for calculating the cross-section area. The height of the specimen was determined to the nearest 0.01 mm. In case of uniaxial compressive strength testing, the load on the specimen was applied continuously at a uniform loading rate such that failure occurred within 5-10 min of loading. Alternatively, the loading rate was within the limit of 0.5-1.0 MPa/ sec. Tensile Strength of rock samples was analyzed by the indirect Brazilian Tensile Strength investigation as per IS: 10,082-1981 norms (Bureau of Indian Standard, 1981). The numerical models with varying relief hole diameter as 40 mm, 70 mm, 89 mm, 105 mm and 165 mm were prepared. The diameter of the charged blast holes was kept 40 mm in each case. The number of relief holes were varied as-one relief hole, two relief holes, three relief holes and four relief holes. A view of arrangement of cut holes (including relief holes (R) and blast holes) is shown in Fig. 2. The explosive materials were modelled as detonation products in numerical simulation. Jones-Wilkins-Lee (JWL) Equation of State (EOS) is a dominant constitutive model used for the detonation products. So, JWL EOS was used for the explosives in the numerical models (Castedo et al. 2018;Artero-Guerrero et al 2017;Hu et al 2015;Pramanik and Deb 2015;Sanchidran and Lopez 2006). This EOS is represented as a relation among pressure, volume and energy. The expression for JWL EOS is shown in equation I.
where A, B, R1, R2 and x are constants.P = pressure.V = Volume.E = Energy. The JWL EOS parameters for the explosive have been used in the model from the literature. Cylindrical tests are done to evaluate the parameters of JWL EOS (Davis and Hill 2001). The parameters of this EOS used in the numerical model is given in Table 2.
Sand was used as the stemming material in the numerical model. The physical properties of ''Sand'' was taken from the library of Ansys software (Ansys Autodyn Manual v 18.0).
The explicit dynamics numerical simulation module consists of Lagrange and Euler domain. In simulation consisting of different materials, solid part unlike ore/rock is modelled under Lagrange domain. The fluid or gaseous part is modelled under Euler domain. Explosive and stemming materials were modelled under Euler domain in this numerical model.
Fixed support and impedance boundary conditions were used in this numerical model. Fixed support conditions were imposed to restrict the deformation within the modelled volume. The reflection/refraction of incoming blast waves were restricted by providing impedance boundary condition. These boundary conditions were provided in all the faces of the modelled block except the free face.
Analysis of the numerical simulation results
The combinations of blast holes and cut holes in the cut portion of the burn-cut pattern were varied in different numerical models. The deformation pattern under different conditions were plotted and compared to get the optimised pattern. The trend of variation in deformation under different parametric combinations have been studied. The deformation pattern recorded as the model output reflects only tensile deformation against the relief holes. The plot of maximum principal elastic strain from a model output is shown in Fig. 3. The analysis of maximum principle elastic strain of the rock under blast loading shows that the tensile strain is developed only along the relief holes. This is because the stress wave due to blasting propagates from the detonation point to free face, which is along stemming portion and along relief hole. The propagating stress wave in such case is compressive in nature. The stress wave gets reflected from free face under tension. Since the compressive strength of the rock mass is much larger than the tensile strength, the deformation within the rock mass takes place along tension.
The actual blasting of the cut holes includes delay sequence among the holes. However, the simulation with field delay timing in explicit dynamics is not possible. Hence, the blast holes were allowed to be fired simultaneously in the model.
The output deformation pattern from the model was analysed in terms of the pull achieved and the overbreak. The comparative deformation contour along the pull and periphery of the cut holes for two different conditions of relief holes is shown in Fig. 4.
The comparison reveals that there is minimal deformation along pull as well as periphery direction in the burn cut pattern having relief holes of 40 mm diameter as compared to the pattern having relief holes of 165 mm diameter. The deformation contour was plotted along with the scale and the attempt was made to assess the extent of deformation. However, it was realised that the plotted contour doesn't include all the area that has been deformed. The plot will show no deformation, even if there is deformation in the model but the magnitude of deformation is less than the second lowest value in the contour band. Isosurface of zero-deformation was plotted to address this issue. An Isosurface is a surface that represents points of constant value within a volume of space. Isosurface plots of non-deformed zones for burn cut face blast with four relief holes of different diameter is shown in Fig. 5. The comparison of the void spaces in the plot shows that the extent of deformation increases with increment in the diameter of the relief holes.
To further investigate the exact extent of breakage under different conditions, capped isosurfaces of zero deformation were plotted. The capped isosurface is the isosurface with capping on the void portion. The capped isosurface for all the parametric conditions were plotted along explosive charging and periphery of blast holes' directions. The analysis of capped isosurface along explosive charging direction suggests that the complete deformation in the rock mass upto charge length doesn't takes place when the relief hole diameter is 40 mm. The complete deformation is observed under all other conditions with different relief hole diameter. The comparative isosurface plot along explosive charging direction with the assessment of deformation extent under two different conditions is shown in Fig. 6.
The capped isosurface has also been plotted to investigate the extent of damage along the periphery of the cut blast holes. The plotted isosurface under different variations of relief holes is shown in Fig. 7. The extent of deformation from the plot has been 40 mm, shows the uniform deformation along blast holes as well as relief holes. This is due to the same free face provided by relief holes as well as charged holes in stemming portion. The figure shows that the tensile stress wave produces deformation at much larger extent when the diameter of relief holes is 165 mm.
The analysed extent of deformation under different variants of relief holes is shown in Fig. 8. The comparison suggests that the extent of deformation increases with the increment in the number and diameter of relief holes. The figure also suggests that the extent of deformation is larger in case of a single large relief hole than using multiple relief holes of smaller diameter. Accordingly, it can be concluded that after achieving the complete pull from the burn cut face blast, multiple numbers of smaller diameter relief holes will give more controlled deformation than the single large diameter relief hole. Jimeno et al. (1995) have also drawn similar conclusion. Authors suggested the equivalent diameter for the group of relief holes. The equivalent diameter is computed using equation II. The plot of extent of deformation in rock mass with the equivalent diameter of relief holes is shown in Fig. 9. The plot shows that the parameters are related with correlation coefficient of 0.76.
where D eq = equivalent diameter of relief hole.n = number of relief holes.D = diameter of relief holes. Further, the maximum magnitude of deformation occurred in the rock mass under different conditions has also been compared. The trend of maximum deformation under different conditions is shown in Fig. 10. The trend shows that the magnitude of deformation increases with the increment in the number and diameter of relief holes. The increment trend is sharper for the case of increase in the diameter of relief holes as compared to that of the numbers of relief holes.
Redesigning of blasting pattern using simulation results
The existing burn-cut blast design pattern practiced at the study site was redesigned using the simulation results. View of a development face blast of the study site is given in Fig. 11. The existing pattern consists of 56 charged blast holes of 40 mm diameter with 04 relief holes of 89 mm diameter. There were issues of socket formation as well as overbreak from the face blast at the site. The existing drilling and delay pattern practiced at the mine is shown in Fig. 12. The identification of sockets was done to investigate the cause of its formation. The most of the sockets were in the holes nearby of the cut portion, which reveals that there was overbreak while blasting of the cut portion. This overbreak might have caused to restrict the detonation of the nearby charged blastholes, which have resulted into the socket formation. The extent of damage output from the numerical model has been compared to estimate the optimum cut blasting pattern to reduce the deformation. The extent of deformation output from the numerical model is due to simultaneous detonation of thirteen blast holes. The numerical simulation with the practical delay timing is not possible with Ansys-Explicit Dynamics module. Hence, the simultaneous detonation was provided in the model to all the cut blast holes. Based on the dependency of rock breakage on critical peak particle velocity (Holmberg and Persson 1978), the blast induced deformation can be considered proportional to the maximum charge weight per delay. Accordingly, the deformation while firing of two blastholes of cut simultaneously will be 1/6.5th of that of the simulation results. The extent of deformation using this computation for firing of two cut holes simultaneously against four relief holes of 115 mm diameter will be 2.4 square meter. Accordingly, the deformation will be 0.1 m more than the cut boundary extent using this pattern. Hence, this pattern can be considered as optimum for the blasting face of the study site. The nearfield vibrations were monitored for the experimental blasts at the site by placement of seismographs. The waveform analysis of recorded near field vibration data has been carried out to explore the possibility of design modifications for reduction of overbreak. The recorded waveform for an experimental development face blast is shown in Fig. 13. The analysis of the recorded waveform shows two sharp peaks of vibration. One peak is due to the blast of cut holes and another peak is due to the blast of perimeter holes. The vibration peaks have been compared with the face blasting pattern shown in Fig. 12. The peak due to the blast of cut holes is due to firing of four cut holes simultaneously against the insufficient free face generated by relief holes of 89 mm diameter. The peak due to firing of perimeter holes can be considered as the main reason behind the overbreak due to the blast. The review of the existing blast design reveals that the number of holes blasted at delay no. 21 and 22 is 16 and 15 respectively, which is much larger to increase the charge weight per delay thereby increasing The increased level of vibration will result into enhanced over break. So, the design was modified to distribute the delay sequence such that the charge weight per delay along periphery holes should be reduced. The revised blast design based on the results of waveform analysis and numerical simulation is shown in Fig. 14. The blast design consists of four relief holes of 115 mm diameter. The firing of only two cut blast holes simultaneously was suggested in the revised blast design. However, maximum 8 blast holes were suggested to be fired in the blast design, thereby reducing the maximum charged weight per delay by half. The results of the experimental trials under existing and revised blast design patterns were compared. The comparison has also been made with variations in numbers of relief holes. A view of the cut blast face with three and four numbers of relief holes is shown in Fig. 15. The relief hole diameter was varied as 89 mm and 115 mm. Altogether twenty blasts were conducted to compare the results. The blasting outputs under different conditions is shown in Table 3. The outcomes have been measured in terms of pull achieved and overbreak generated. The results show that there are considerable improvements in the pull and reduction of socket formation while using four relief holes of 115 mm diameter. Although, the overbreak generation is mainly influenced by the charging pattern in periphery holes. But, the controlled movement of the face after blast of each cut also contributes in the overbreak reduction. Accordingly, the revised pattern has also addressed the issues of overbreaks.
The resulting nearfield vibration has also been reduced with the revised blast design. The waveform of nearfield vibration recorded at a distance of 30 m from the blast face with the revised blast design is shown in Fig. 16. The analysis shows that the Fig. 13. The waveform analysis also reveals that the variation in magnitude of vibration while blasting of different cuts is relatively uniform. This leads to the controlled deformation of the rock mass.
Conclusions
The number and diameter of relief holes plays pivotal role in blast induced deformation pattern of a burn cut blast. The relief hole works as the free face to ensure tensile breakage of the rock mass under the blast loading. Numerical simulation and analysis of parametric response for different combinations of relief holes is capable of optimising the burn cut blast design Fig. 13 Recorded waveform for the experimental blast featuring the delay timings of the blast holes pattern. Accordingly, the numerical simulation approach with finite element based modeller Ansys-Explicit dynamics was used in this study. The deformation prediction under different scenario has been made for a lead-zinc mine site. The model was simulated for blast of cut holes with different relief hole combinations. The model was provided with isotropic elastic rock mass condition. The explosive parameters were modelled under JWL equation of state. The model result shows that the deformation in rock increases with the increment in the number and diameter of the relief holes. The comparative analysis of the deformation shows that the larger number of relief holes with small diameter gives more controlled deformation than the single relief hole of large diameter. The numerical modelling based output has been used to redesign the existing blast design of the experimental site. The assessment of extent of deformation from numerical modelling shows that the four number of relief holes with diameter of 115 mm is optimum for the modelled rock mass condition. The number of cut blast holes firing simultaneously should not be more than two in order to get the optimum breakage. The waveform analysis of the recorded nearfield vibrations from experimental site was also carried out. Two dominant peaks of vibration were observed in the recorded waveform. The peaks were due to the blasts of cut holes and periphery holes. The peak in cut blast hole region is due to the insufficient movement of rock mass under four relief holes of 89 mm diameter. The peak due to blast of periphery holes is because of larger number of blast holes firing at a delay. The blast design pattern was revised based on the results of numerical simulation and waveform analysis. The overbreak reduction has been achieved using the revised blast design pattern. The issues of socket formation have also been addressed with the revised pattern.
The methodology used in this paper can be used to optimise blast design parameters for a face/tunnel blast. Since, the blasting output is strata dependent, the numerical simulation can be a useful tool to give the insight about the optimum blast design for a face/tunnel blast. | 6,105.4 | 2021-06-11T00:00:00.000 | [
"Materials Science"
] |
Ophelimus bipolaris sp. n. (Hymenoptera, Eulophidae), a New Invasive Eucalyptus Pest and Its Host Plants in China
Simple Summary Eucalyptus species have become one of the most commonly planted trees worldwide, including China. However, the productivity of Eucalyptus plantations has been threatened by the recent increase of invasive insect pests. Gall inducers of the genus Ophelimus (Eulophidae) are among the most important invasive species in Eucalyptus plantations. Based on the combined analysis of biological, morphological and molecular evidence, we here describe a new invasive species, Ophelimus bipolaris sp. n., from China. This wasp induces galls only on the leaf blade surface of four Eucalyptus species. It can complete a life cycle on E. urophylla in approximately 2 months under local climatic conditions in Guangzhou, China. Abstract Eucalyptus species have become one of the most commonly planted trees worldwide, including China, due to their fast growth and various commercial applications. However, the productivity of Eucalyptus plantations has been threatened by exotic invasive insect pests in recent years. Among these pests, gall inducers of the genus Ophelimus of the Eulophidae family are among the most important invasive species in Eucalyptus plantations. We report here for the first time the presence of a new invasive Eucalyptus gall wasp, Ophelimus bipolaris sp. n., in Guangzhou, China, which also represents the first species of the genus reported from China. The identity of the new species was confirmed by an integrative approach combing biological, morphological and molecular evidence. The new species is described and illustrated. This wasp induces galls only on the leaf blade surface of four Eucalyptus species: E. grandis, E. grandis × E. urophylla, E. tereticornis and E. urophylla. Our preliminary observation showed that O. bipolaris could complete a life cycle on E. urophylla in approximately 2 months under local climatic conditions (23.5–30 °C). Considering the severe damage it may cause to Eucalyptus production, further investigations of its biology and control are urgently needed in China.
Introduction
Most species of the genus Eucalyptus (Myrtaceae) are native to Australia, but have been commonly planted worldwide due to their fast growth and various commercial applications [1]. Eucalypts were first introduced to China sometime before 1894 [2], and the expansion of plantations has dramatically increased in the country since the 1980s [3]. By 2017, eucalypt plantations had been established in all provinces of China south of the Yangtze River, and these plantations amounted to 5.4 M ha [4].
In China, there are about 300 species of phytophagous insects associated with eucalypts [5], including the invasive gall-inducer, Leptocybe invasa Fisher & La Salle (Hymenoptera, Eulophidae), which was first reported from China in 2007 [6]. Although at least one molecular study [7] has suggested that L. invasa in fact comprises two cryptic species, with the female-biased population from China being genetically different from the firstdescribed thelytokous population from the Mediterranean region, no formal taxonomic act has been proposed for the species complex. Nevertheless, Leptocybe invasa is currently the only gall-forming pest of eucalypts recorded in China, and it forms galls ( Figure 1A) on the stems, petioles and midribs of leaves of a few Eucalyptus species in the sections Exertaria, Latoanulata and Maidenaria [8]. In April 2021, we found a new form of protruding galls ( Figure 1B) on the leaves (never on mid-ribs or branches) of Eucalyptus urophylla S. T. Blake on the campus of Guangdong Eco-Engineering Polytechnic, Guangzhou, China. At the site of the infected trees, we also observed that wasps belong to the family Eulophidae (Hymenoptera) were apparently laying eggs on the young leaves ( Figure 2A). On examination of the specimens collected from the leaves and reared from the galls, we found that these wasps are conspecific and belong to the genus Ophelimus Haliday.
The genus Ophelimus is native to Australia and currently contains 53 described species [9,10]. Available biological data indicate that species of the genus develop in galls on various species of Eucalyptus and are considered gall inducers [11,12]. Infestations of these gall inducers, especially those invasive species occur outside their native range, can lead to intense gall production on Eucalyptus trees and subsequently, severe defoliation, causing significant economic losses [13][14][15]. Originally described as Rhicnopeltella eucalyptis Gahan [16] based on female specimens reared from galls on Eucalyptus globulus Labill from New Zealand in 1922, Ophelimus eucalypti (Gahan) was considered the first invasive species reported outside its native Australian origin [13]. By 1987, this species had been recorded inducing galls on the midribs and branches of eucalypt species in the section Maidenaria, and no males of these populations had been observed, and therefore, populations of the species that infected the section Transversaria were later considered the 'Maid' biotype [13]. In 1987, another population identified as O. eucalypti in New Zealand was reared from eucalypt species in the section Transversaria and induced galls only on the leaf blade surface. This latter population was biparental and later was considered as the 'Trans' biotype [13]. Borowiec et al. [9] recently confirmed that O. eucalypti comprises two cryptic lineages ('Maid and 'Trans') based on the host plant, reproduction mode and morphological and molecular (28S) differences. Ophelimus eucalypti was erroneously reported in Europe [9,14], but by 2019, both lineages of O. eucalypti were only listed in New Zealand. Recently, the first report of O. eucalypti outside of New Zealand was in Sumatra, Indonesia, where it had caused serious damage to E. urophylla and hybrids of the species with Eucalyptus grandis W. Hill, but the lineage has not been determined [15]. However, currently the most widely distributed invader is Ophelimus maskelli (Ashmead), which was first described as Pteroptrix maskelli from New Zealand by Ashmead in 1900 and was transferred to the genus Ophelimus by Bouček [11]. Outside its native range, O. maskelli has been reported from the Mediterranean Basin, Southeast Asia, South Africa and North America (see references in Borowiec et al. [9] and Dittrich-Schröder et al. [15]). Ophelimus maskelli is a thelytokous (only reproduce females) species and induces blister-like galls near the petiole on the leaf blade of Eucalyptus species. Uncontrolled populations of these wasps can cause severe leaf damage and almost complete defoliation of mature trees in some cases [14,17]. Fourteen host species have been recorded for O. maskelli, with Eucalyptus camaldulensis Dehnhardt and Eucalyptus tereticornis Smith being economically important and particularly suitable species [14]. Recently, two species, Ophelimus mediterraneus Borowiec & Burks and Ophelimus migdanorum Molina-Mercader were newly described from the Mediterranean Europe [9] and Chile [10], respectively. Ophelimus mediterraneus is also a thelytokous species and induces galls on the upper surface of the leaves on Eucalyptus species from the Maidenaria section, such as Eucalyptus globulus Labill and Eucalyptus gunii J. D. Hook [9]. While O. migdanorum is a biparental species and induces galls on stems, petioles, laminae and leaf venations of E. globulus and Eucalyptus camaldulensis Dehnh [10]. Considering the economic importance of Ophelimus species to the production of Eucalyptus trees, in this study, we aim to investigate the identity of the Ophelimus species we just found in Guangzhou, China, using an integrative taxonomic approach combining biological, morphological and molecular information.
Insect Sampling
The initial survey was conducted between April and July 2021 in a small E. urophylla plantation on the campus of Guangdong Eco-Engineering Polytechnic (GEEP), Guangzhou, China. To investigate the host range of the wasp, additional surveys were conducted between early June and late July at three other localities in Guangzhou (Table 1). Wasps on the leaves of Eucalyptus trees were collected and preserved in 95% ethanol. Mature leaves with large galls of each infected Eucalyptus species were collected, labeled and placed in small plastic bags at the laboratory of Sun Yat-sen University, Guangzhou, China. Emergences were checked daily, and all emerged adult wasps were collected in 95% ethanol to allow for further molecular and morphological study. Voucher specimens are deposited in the Museum of Biology at Sun Yat-sen University (SYSBM), Guangzhou, China. During the initial survey in April at the plantation of GEEP, young leaves of E. urophylla were observed being attacked by the wasps. Such leaves were recorded and left on the tree until they were covered with large mature galls and then were collected in plastic bags as described above.
Species Identification
Morphological terminology generally follows Gibson et al. [18]. The systematics and taxonomy of Ophelimus are poorly studied [14]. However, some characters, especially the number of setae on the submarginal vein of fore wings, are thought to be diagnostically valuable [14]. Recently, Borowiec et al. [9] provided a key for some Ophelimus species of agricultural interest. To supplement morphological identifications, two molecular markers, mitochondrial DNA (mtDNA) cytochrome c oxidase 1 (COI) and nuclear 28S rRNA D1-2 (28S) were sequenced for molecular species delimitation. Genomic DNA was extracted from 13 adults and two larvae dissected from the gall using a nondestructive method as described in Taekul et al. [19]. Detailed information about the sequenced specimens used in this study is given in Table 2. Polymerase chain reaction (PCR) amplifications of the two DNA fragments were performed using Tks Gflex DNA Polymerase (Takara, Shiga, Japan) and conducted in a T100 Thermal Cycler (Bio-Rad). The primer pairs LCO1490/HCO2198 [20] and D2-3551F/D2-4057R [21] were used for COI and 28S, respectively. Thermocycling conditions were: an initial denaturing step at 94 • C for 5 min, followed by 35 cycles of 94 • C for 30 s, 50 • C for 30 s, 72 • C for 30 s and an additional extension at 72 • C for 5 min. Amplicons were directly sequenced in both directions with forward and reverse primers on an Applied Biosysttems (ABI) 3730XL (Applied Biosystems, Foster City, CA, USA) by Guangzhou Tianyi Huiyuan Gene Technology Co., Ltd. (Guangzhou, China).
Chromatograms were assembled with Geneious 11.0.3. All the amplified sequences were deposited into GenBank (Table 2). All sequences were blasted in the BOLD (Barcode of Life Database, http://www. barcod-inglife.org/index.php/IDS_OpenIdEngine, only for COI) and GenBank. The sequences generated in this study along with representatives generated by Molina-Mercader et al. [10] and Borowiec et al. [9] were aligned using MAFFT v7.470 by the Q-INS-I strategy for 28S and G-INS-I strategy for COI [22]. After removing the identical sequences, the alignments were then analyzed using RAxML as implemented in Geneious 11.0.3. Sequences of Closterocerus chamaeleon (Girault) (Hymenoptera: Eulophidae) were used as outgroups to root the trees as used by Borowiec et al. [9].
Photography
Images of live specimens and trees were taken with a Canon 5D Mark III (Tokyo, Japan) camera with a 100 mm macro lens. Images of mounted specimens were produced using a Nikon SMZ25 microscope (Melville, NY, USA) with a Nikon DS-Ri 2 (Melville, NY, USA) digital camera system. Images of the type specimen of O. eucalypti were provided by the National Museum of Natural History (NMNH), Smithsonian Institution, Washington, DC, USA. Scanning electron micrographs were produced using a Phenom Pro Desktop SEM and single montage images were generated from image stacks in the program Helicon. Images were post-processed with Adobe Photoshop CS6 Extended.
Results
Of the four sampled localities, six Eucalyptus species or hybrid species were surveyed and four of them were infected by Ophelimus wasps: E. grandis, E. grandis × E. urophylla, E. tereticornis and E. urophylla (Table 1). Galls were only found on the leaf blade surface of all the four infected Eucalyptus species. Mature galls ( Figure 2B) are 2-3 mm in diameter and protrude 1-2 mm on each side of the leaf. Each gall contains a single larva ( Figure 2C) but eggs tend to be laid close together on a leaf and develop patches of tightly packed galls. In severe cases, the entire leaf is totally covered with galls. Galls change from green to red, then to brown. A circular exit hole is left on the gall as the adult wasp emerges. In this study, a total of 1244 Ophelimus specimens were collected, and 97.2% were females. Detailed information about the collected specimens is given in Table S1.
During the survey, at the end of April at the GEEP plantation, young leaves of the upper shoots of five two-year old trees at the infested site showed no sign of galls, but by the end of May, almost each of those leaves was covered with numerous green or red galls, and a few adult wasps emerged in mid-June. According to the temperature records provided by the China Meteorological Data Service Center, the average temperatures of Guangzhou from April to July ranged from 23.5 • C to 30 • C ( Figure S1). Although biological studies of this wasp species on the host plants are still on going, our preliminary observation showed that the duration of its life cycle is approximately 2 months, at least so on E. urophylla in Guangzhou.
Both 28S and COI genes were successfully sequenced from all the 15 specimens (13 adults + 2 larvae). All the sequences of 28S (605 bp) were identical to each other and showed 99.3-99.5% identity to the O. eucalypti 'Trans' biotype and 98.1% to the O. eucalypti 'Maid' biotype in the GenBank database (Table S2). Phylogenetic analysis based on 28S sequences generated from this study together with those used by Borowiec et al. [9] showed that the Chinese Ophelimus species is sister to the O. eucalypti 'Trans' biotype ( Figure 3), which together form a clade clearly separated from other species. The 15 sequences of COI (660-678 bp) were also mostly identical, with only one sequence (MZ348610) differed by two nucleotides. These COI sequences do not show a high match with sequences in both the BOLD and GenBank databases. The closest match is Ophelimus migdanorum Molina-Mercader, with 92.28% identical pairs of bases (Table S3). When analyzed with the COI sequences of O. maskelli (Ashmead), O. mediterraneus and O. migdanorum, the two unique COI sequences of the Chinese Ophelimus species formed a clade sister to O. migdanorum but with a low support ( Figure 4). While there is no COI sequence of either biotype of O. eucalypti in the BOLD and GenBank databases, the 28S sequences well suggest that the Chinese Ophelimus species might be conspecific to the O. eucalypti 'Trans' biotype or a closely related species. Considering that the two biotypes of O. eucalypti are most likely two distinct species, as confirmed by Borowiec et al. [9] using molecular, morphological (number of submarginal vein setae) and ecological (host range) data, the Ophelimus species we found in Guangzhou should represent a distinct species different from O. eucalypti sensu Gahan. Further examination indicates that the wasps we collected in Guangzhou are morphologically identical. By comparing the holotype of O. eucalypti (based on images provided by NMNH, Figure S2) and the original description of the species provided by Gahan [16], as well as running the key compiled by Borowiec et al. [9], we conclude that the Ophelimus species we found belong to an undescribed species and we here describe it as new to science below.
Ophelimus bipolaris Chen & Yao, sp. n. Etymology: The name bipolaris refers to the gall induced by this species that protrudes from both sides of the leaves of the host plants.
Diagnosis. Submarginal vein of fore wing with 3-5 dorsal setae. Body mainly reticulate. Mesoscutal midlobe with 5 pairs of long setae. Propodeum medially longer than metascutellum. Marginal vein about 1.8 × length of stigmal vein. Postmarginal vein distinctly shorter than stigmal vein. The new species is similar to other well-known Ophelimus invasive species. The differences between the new species and other four Ophelimus species of agriculture interest are summarized in Table 3. Description: Female ( Figure 5). Body length 1.1-1.8 mm. Colour: Head and body brown with variable metallic green and orange luster, metasoma darker dorsally. Antenna brown. Coxae brown with metallic green luster, first three tarsomeres pale brown, remainders of legs dark brown to brown. Wings hyaline, with veins grayish black.
Head: Reticulate, except scrobal depression and clypeus smooth. Vertex, gena, lateral frons and ventral half of face with sparse long setae. Ocelli in a low triangle, widely separated, the posterior ocelli separated from the eye margin by about the diameter of an ocellus. Eye with short setae that are visible at high magnification. Malar sulcus shallow but visible. Clypeus small, lateral margin hardly distinguishable. Anterior tentorial pit present. Mandible bidentate, ventral tooth much larger than dorsal tooth. Antenna: Pedicel slight shorter then funicle. First four flagellomeres anellifrom, the last also transverse but much larger and bearing multiporous plate sensilla. Club longer than the other flagellmeres combined, ovate, with three distinct clavomeres, the apical one bearing a long terminal seta.
Wings: Fore wing about as long as body, about twice as long as broad. Submarginal vein with 3-5 dorsal setae. Marginal vein about 1.8 × length of stigmal vein. Postmarginal vein distinctly shorter than stigmal vein, tapering gradually from the base until lost in the margin of wing.
Metasoma: Subspherical, about as long as mesosoma or slightly shorter, apex not pointed. First tergum the longest, second to sixth terga subequal, each about half length of the first tergum. Ovipositor short, not exserted. Hypopygium reaching about 0.3 × length of metasoma. All terga reticulate.
Discussion
There are 53 described species of Ophelimus and the genus is poorly studied and needs a thorough taxonomic revision [9,14]. Therefore, synonymies are likely to be detected in future. It is possible that O. bipolaris was described previously under another name. However, considering the fact that the descriptions of most of the old species are always poor and the types are in very bad conditions [9], the task of attempting to attain the holotype of each species to rule out that possible species' identity would severely delay or even prohibit the execution of this study. The combined analyses of biological, molecular and morphological data (Table 3) presented here should permit the unequivocal identification of O. bipolaris sp. n.
Ophelimus bipolaris induces protruding galls on the leaves of E. grandis, E. grandis × urophylla, E. tereticornis and E. urophylla. The galls induced by O. bipolaris are most similar to those induced by the O. eucalypti 'Trans' biotype (Table 3). However, gall morphology is different between females and males of the O. eucalypti 'Trans' biotype, with females inducing circular, protruding galls and males inducing pit galls, while galls induced by O. bipolaris show no differences between both sexes. The host range of O. bipolaris is also similar to the O. eucalypti 'Trans' biotype, which has been reported to attack Eucalyptus species in the section Transversaria and E. urophylla [13,15]. However, one of the host plants of O. bipolaris, E. tereticornis, is in the Dumaria section of Eucalyptus.
The reproduction modes are different among Ophelimus species, but all seem to be female-biased. The O. eucalypti 'Maid' biotype [24], O. maskelli [14] and O. mediterraneus [9] have been reported as thelytokous that reproduce females only. According to Withers et al. [13], the O. eucalypti 'Trans' biotype is biparental, but the sex ratio was not clearly stated in their study, although Dittrich-Schröder et al. [15] erroneously claimed that the lineage was male-biased when citing Withers et al.'s study. Ophelimus migdnorum is also biparental, and 58.9% are females [10]. Our study indicates that O. bipolaris is female-biased, with about 97.2% of the collected specimens being females. Female-biased sex ratio occurs frequently in Chalcidoidea, and it has been associated with infection by symbiotic bacteria able to manipulate the reproduction of their host [25]. Sex ratio variations might reflect the infection by different bacterial endosymbionts, resulting in different reproduction modes of the hosts and therefore different species lineages. For example, molecular analyses suggested that L. invasa is in fact a complex of two cryptic species that are infected by two closely related strains of Rickettsia [7]. Therefore, screening for the infection of endosymbionts among Ophelimus species, especially the two biotypes of O. eucalypti and O. bipolaris, is a possible direction in clarifying the identities of these species.
Both the 28S and COI sequences showed unambiguous differentiation between O. bipolaris and four other species (Figures 3 and 4), although the 28S sequences of O. bipolaris and the O. eucalypti 'Trans' biotype are 99.3-99.5% identical, and one might suspect that O. bipolaris is conspecific with the O. eucalypti 'Trans' biotype. However, the 28S is conserved and often invariant between closely related species in Eulophidae [26,27]. The low distance of the 28S sequences among Ophelimus species is consistent with what already observed in Eulophidae [9,28]. While the divergence of COI sequences is high between O. bipolaris and other studied Ophelimus species, most of the COI sequences are identical among the specimens collected from all the four studied localities. Even the two sequences have only two different nucleotides. This reduced genetic COI variation could be due to founder effects (the reduction in genetic variation that results when a small subset of a large population is used to establish a new colony) [29] or endosymbiont infection (endosymbionts can act as reproductive manipulators and are considered responsible for the low mitochondrial genetic diversity in infected populations) [30,31]. Further studies are required to investigate the cause of this low mitochondrial genetic diversity.
The result of the morphological analysis (Table 3) was also consistent with the molecular results, indicating O. bipolaris is a distinct species. The number of setae on the submarginal vein of the fore wing was first thought be an important diagnostic character for Ophelimus species [14,32], at least O. maskelli was thought to be the only species with one single submarginal vein seta, but subsequent studies showed that this character was, however, not discriminant among Ophelimus species [9,10]. According to Molina-Mercader et al. [10], the number of submarginal vein setae is in accordance with the body size of the specimen. Ophelimus bipolaris has 3-5 submarginal vein setae, and we indeed found that smaller specimens tend to have fewer setae (Table S4). Body size was used in the key compiled by Borowiec et al. [9], but apparently this character is also not discriminant among Ophelimus species. Besides, body size is easily affected by temperature and the host plant species [33]. Nevertheless, O. eucalypti is the largest species recorded, and O. bipolaris is relatively smaller. Body color seems to be useful in separating O. bipolaris (head and mesosoma are metallic green) from O. eucalypti (head and mesosoma mainly black and only faintly tinged with metallic green or purplish). The following characters might be of diagnostic value: (1) the postmarginal vein is much shorter than stigmal vein; (2) the mesoscutal midlobe with 5 pairs of long setae; (3) the propodeum is distinctly longer than metascutellum medially.
Our preliminary observation showed that O. bipolaris on E. urophylla only took approximately 2 months to complete a life cycle in Guangzhou, under local climatic conditions (temperature: 23.5-30 • C). Obviously, its life cycle might be affected by temperature and host plant species, as has been found in O. maskelli [14]. Further studies regarding the host range and life cycle of O. bipolaris in China are required.
The origin of O. bipolaris is unknown, but undoubtedly it originates from Australia or Indonesia, since it exclusively attacks Eucalyptus species. Of its four known host plant species or hybrids, E. tereticornis and E. tereticornis are native to Australia, while E. urophylla is native to Indonesia. Therefore, O. bipolaris is an invasive species in China. As mentioned above, the low mitochondrial genetic diversity may be due to founder effects, and it may suggest that this is a recent invasion in China. Considering the severe damage and economic loss of Eucalyptus have been caused by the invasive species of Ophelimus outside their native ranges [9,10,13,14,34], eradication or control strategies against O. bipolaris is urgently needed in China.
Conclusions
Based on the result of analyzing the biological, morphological and molecular evidence, we have formally described a new invasive species of the Eucalyptus gall wasp, Ophelimus bipolaris Chen & Yao, which represents the first species of the genus present in China. This wasp induces protruding galls only on the leaf blade of Eucalytpus. Its host plants at least include E. grandis, E. grandis × E. urophylla, E. tereticornis and E. urophylla in China. Our preliminary observation showed that O. bipolaris can complete a life cycle on E. urophylla in approximately 2 months under local climatic conditions. Further studies on the life cycle, host range, geographical distribution, economical damage and management of this wasp are urgently needed in China and possible distributed countries.
Supplementary Materials: The following are available online at https://www.mdpi.com/article/ 10.3390/insects12090778/s1. Figure S1. Average temperatures of Guangzhou from April to July, 2021. (Data from China Meteorological Data Service Center). Figure S2. Rhicnopeltella eucalyptis Gahan, holotype, female A Habitus, dorsal view B Habitus, lateral view C Head and mesosoma, dorsal view D Head and mesosoma, lateral view E Head, anterior view F Wings. (Images are used with permission from NMNH). Table S1. Details of the sampling, host plant and number of wasps collected. Table S2. Interspecific pairwise distance of Ophelimus species based on 28S sequences (%). Table S3. Interspecific pairwise distance of Ophelimus species based on COI sequences (%); Table S4. Summarized data of body measurements (in mm) and best ratios of O. bipolaris. | 5,835.2 | 2021-08-30T00:00:00.000 | [
"Environmental Science",
"Biology"
] |
Analysis of a Retrial Queue With Two-Type Breakdowns and Delayed Repairs
This article studies an M/G/1 retrial queue with two types of breakdowns. When the server is idle, it is subject to breakdowns according to a Poisson process with rate <inline-formula> <tex-math notation="LaTeX">$\delta $ </tex-math></inline-formula> and it cannot be repaired immediately. While when the server is busy, it may break down according to a Poisson process with rate <inline-formula> <tex-math notation="LaTeX">$\theta $ </tex-math></inline-formula> and can be immediately repaired. Firstly, based on embedded Markov chain technique and probability generating function (PGF) method, we present the necessary and sufficient condition for the system to be stable and the PGF of the orbit size at the departure epochs. Secondly, we give the steady-state joint queue length distribution by supplementary variable method, and present some important performance measures and reliability indices. Thirdly, we provide the analysis of sojourn time of an arbitrary customer in the system when the system is in stable state. Finally, some numerical examples are presented to illustrate the effect of the some system parameters on important performance measures and reliability indices.
I. INTRODUCTION
Retrial queues with unreliable servers have been investigated extensively, due to their applications in various fields, such as telephone switching systems, call centers, computer communication and telecommunication networks, manufacturing systems etc. On one hand, retrial queues can reflect the characteristics of customer service requirements, i.e., arriving customers who find the server unavailable may join into a retrial group (orbit) and ask for their services again some time later. For the survey papers, the books, the bibliographical information and recent literatures on retrial queues, readers are referred to Falin [10], Falin and Templeton [11], Artalejo and Gómez-Corral [4], Gómez-Corral [17], Artalejo [2], [3], Gao and Zhang [15], Zhang et al. [31] and references therein. On the other hand, due to some unexpected factors in reality, such as limited lifetime of the server, external interference, malfunctions of the server, starting failures, etc., the servers may break down and need repair during idle period or busy The associate editor coordinating the review of this manuscript and approving it for publication was Roberto Sacile.
period. Severs' failures and repairs were introduced by Aissani [1] and Kulkarni and Choi [21]. Since then, related studies regarding retrial queues with unreliable servers and repairs have been carried out successively from queuing and reliability viewpoints. In earlier relevant papers, the types of breakdowns of the servers may be divided into as follows: (1) active breakdowns, i.e., the server is subject to breakdowns when it is busy. In this case, the server's life time is often assumed to be exponential distributed. Wang [23] studied both queueing characteristics and reliability issues for an M/G/1 retrial queue with server breakdowns and general retrial times. Falin [12] dealt with an unreliable M/G/1 retrial queue, in which the server's lifetime follows exponential distribution and the repair time is generally distributed. Different from classical retrial queues with only one orbit queue, the retrial queue in Falin [12] has two waiting queues, one is normal waiting queue which is formed by the arriving primary customers who find the server unavailable at their arrival epochs, the other is orbit queue which is formed by those customers whose services are interrupted by the failures of the server. Chang et al. [6] considered a multi-server retrial queue with customer feedback and impatient, in which the server's breakdown is incurred by exponentially distributed lifetime when it is working. Yang et al. [29] considered an unreliable retrial queue with J optional vacations, where the server is subject to random breakdowns and repairs when he is working. Gao et al. [13] treated an M/M/1 retrial queue with an unreliable server from the economic viewpoint.
(2) passive breakdowns, i.e., when the server is idle, the server may break down and needs immediately repair. Taleb and Aissani [22] considered the performance measures and reliability indices for a new unreliable M/G/1 retrial queue, in which persistent and impatient customers, active and passive failures and preventive maintenances are both taken into account. Performance analysis was considered by Krishna Kumar et al. [19] for a Markovian retrial queue with passive and active breakdowns.
(3) catastrophic failures, i.e., the sever breakdowns are caused by external attacks or shocks (called as negative customers). In such retrial queues, if a negative arrives at a system, it removes one or all present customers in the system at once (called as individual or complete removal) and makes the server breakdown and repair. Many studies on such retrial queues have been carried out from queuing and reliability and economic viewpoints. Interested readers are referred to Wang et al. [24], Wang and Zhang [25], Wu and Lian [26], Wu and Yin [27], Gao and Wang [14] and references therein.
(4) starting failures, i.e., when the server is idle, an arriving (new or returning) customer must start the server. If the server is successfully started with a certain probability, the customer receives service immediately. Otherwise, the server undergoes repair immediately. Yang and Li [28] presented an M/G/1 retrial queue with the sever subject to starting failures. Krishna Kumar et al. [18] addressed the performance analysis of an M/G/1 retrial queue with feed back and starting failures. Atencia et al. [5] developed a discrete-time Geo/G/1 retrial queue with general retrial times, Bernoulli feedback and starting failures. Recently, Yang et al. [30] generalized the model of Krishna Kumar et al. [18] to a multi-server retrial system with feed back and starting failures. For more retrial queues with breakdowns and repairs, the readers are referred to the recent survey paper by Krishnamoorthy et al. [20].
Most unreliable retrial queues assume that the server can be immediately repaired when it breaks down. For example, Zhang [32] studied an M/M/1 retrial queue with passive breakdowns and active breakdowns from economic point, in which whenever any type of breakdowns occurs, the sever immediately enters a repair stage and the repair times for these two types of breakdowns are identical and exponential distribution. Zirem et al. [33] dealt with a batch arrivals retrial queue with active breakdowns, where the sever can be immediately repaired when breakdown happens and reserved service schedule is considered for the interrupted customer. However, in many realistic situations, such as in the area of computer communication networks and flexible manufacturing systems, etc, it may not be possible to start the repair process immediately due to non-availability of the repairman or of the apparatus needed for the repairs or due to being undetected timely. Recently, Choudhury and Tadj [9] studied the steady-state behavior of an unreliable retrial queue with a second optional service phase and delayed repair. Choudhury and Ke [7], [8], respectively, studied a batch arrival and a single arrival unreliable retrial queue with general retrial times under Bernoulli vacation schedule, in which the server is subject to active breakdown and delaying repair, i.e., when the server's failure occurs, it can begin its repair after some delaying time. For such retrial queues, the authors obtained some important performance measures and reliability indices.
In this article, we analyze an M/G/1 retrial queue with passive and active breakdowns and delayed repairs for passive breakdowns. To the best of the authors' knowledge, studies for such retrial queue do not yet exist. The motivation of this work is that such retrial queue arises in various practical fields, such as in communication networks and manufacturing systems, it not only characterizes the retrial phenomenon of customers, but takes the delayed repairs for passive breakdowns into consideration. Moreover, another motivation for considering such retrial model is to obtain analytical solution in term of closed form expression by supplementary variables technique and evaluate the performance measures and the reliability of the considered system which may be suited to many communication networks. The basic findings of the paper and their significance are outlined as follows: • We introduce a new repairable M/G/1 retrial queue with passive and active breakdowns, in which passive active breakdowns are subject to delayed repair. Such model has potential applications in packet-switching networks.
• We give the stable condition of the system, stationary analysis of joint distribution of the orbit size and the server's state. Based on these analysis, we can give the expressions of important performance measures of the system.
• Sojourn time of an arbitrary customer can reflect the quality of service of the system, so we present the expression of Laplace transform of the sojourn time of an arbitrary customer, and prove that Little's law still hold in our model.
• Reliability indices including the steady state availability of the server, the failure frequency of the server and the mean time to first failure of the server are provided. The rest of this article is organized as follows. Section 2 gives the system description and a practical example. Section 3 presents the stable condition of the system and the steady-state analysis, and gives some system measures. Section 4 focuses on the reliability indexes of the system. Section 5 studies the distribution of the sojourn time in the system of any customer. Section 6 gives some numerical examples to illustrate the features of our model.
II. MODEL FORMULATION AND A PRACTICAL EXAMPLE A. MODEL DESCRIPTION
In this section, we consider an unreliable retrial queue with two types of breakdowns and delayed repairs due to passive breakdowns. Assumptions of the queueing system are as follows.
• Arriving process and general service times. Customers from outside arrive at the system according to a Poisson processes with rate λ. The service time B of each customer follows an arbitrary distribution with cumulative distribution function (c.d.f.) B(x), probability density function (p.d.f.) b(x), finite first two moments β 1 , β 2 . If an arriving customer finds the server idle, the customer obtains service immediately. Otherwise the arriving customer who finds the server busy or inoperative because of failures will produce a source of unsatisfied customers, who may retry several times for service. Such unsatisfied customers are said to be in ''orbit'' and form a queue according to FCFS discipline.
• Two types of breakdowns and delayed repairs. The sever is subject to passive and active breakdowns, respectively, in idle period and busy period. When the server is idle, it breaks down at an exponential rate δ (called as a passive breakdown). When the server is busy serving a customer, it breaks down at an exponential rate θ (called as an active breakdown). When an active breakdown occurs, the server can be immediately repaired and the repair time R follows general distribution with c.d.f. R(x), p.d.f. r(x), finite first two moments ν 1 , ν 2 . However, due to lack of monitoring of the server in the idle period, when a passive failure happens, the server can not obtain immediate repair and stays there until a customer arrives at the service station from outside or the orbit if any. The repair time G for a passive failure follows general distribution with c.d.f. G(x), p.d.f. g(x), finite first two moments µ 1 , µ 2 . It is assumed that, when the service of a customer is interrupted by an active breakdown, the customer in service waits there to accept its remaining service as soon as the repair is completed. While the customer who starts the repair for a passive failure doesn't leave the service facility and can immediately obtain its service after the completion of the repair.
• Constant retrial policy. Under such retrial policy, only the first customer in the orbit is permitted to apply for service when the server becomes idle and the retrial time follows exponential distribution with rate α.
• All random variables defined above are assumed to be mutually independent.
Throughout the rest of the paper, for a c.d.f. F(x), we denote Obviously, we can obtain that F * (s) = 1− F(s) s . Define the functions β(x), µ(x) and ν(x) as the conditional completion rates for service time, for repair time for an active breakdown and repair time for a passive failure, respectively, i.e.,
B. A PRACTICAL APPLICATION EXAMPLE
Besides its theoretical interest, our retrial queue has potential applications in a packet-switching network, in which messages are divided into IP packets before they are sent. For instance, most modern Wide Area Network (WAN) protocols, including TCP/IP, X.25, and Frame Relay, are based on packet-switching technologies. The router is an interconnected device over which a packet is transmitted from a source host to to a destination host in a packet switching network. If the source host wishes to send a package to a destination host, it first sends the package to the router to which it is connected, and then the package is transmitted to the destination host. Assume packages arrive at the source host from outside according to a Poisson process. Upon receiving a package, the host immediately sends it to its router. If the router is available, the package is accepted and is transmitted immediately and the transmission time is assumed to be generally distributed. Otherwise the package is blocked by the router due to limitations in the TCP/IP network path MTU (Maximum Transmission Unit) or active breakdowns, in this case, the blocked package is stored in the buffer of the source host (called as orbit) and has to be retransmitted some time later according to FCFS. Besides, due to external attacks or other technical faults, the router may break down during idle period or during the packet transmission period. We assume that the network administrator who is responsible for failure management of the network always does some secondary auxiliary jobs when the router is idle until a packet arrives at the router and always is on duty when the router is busy. If the router fails when it is transmitting a packet, it can be immediately repaired by the network administrator and resumes the transmission of the interrupted packet as soon as its repair process is completed. While if the router breaks down when it is idle, the repair may be delayed till the arrival epoch of the next packet from outsider or the orbit at which the network administrator returns and immediately begins the repair process of the router. The time interval from the epoch at that the passive failure occurs to the epoch at which the next packet arrives is called as delayed period.
Here the packet who arrives during the delayed period can be transmitted immediately after the completion of the repair for the passive failure. This scenario can be modelled as our retrial queueing system with two-type failures and delayed repairs.
III. STABILITY CONDITION AND STEADY-STATE ANALYSIS
This section focuses on investigating the stability condition of the system and deriving some steady state distributions of the system, respectively, by embedded Markov chain technique and supplementary variable method.
A. STABILITY CONDITION
Let S B be the generalized service time interval of a customer from the beginning of his service to the end of his service, with c.d.f S B (x), LST S B (s). Taking into account the possible occurrence of active breakdowns in the service process, In the following, we give some useful notations: where a k is the probability that there are k customers who enter into the orbit during the repair time for a passive failure, h k is the probability that k customers who join the orbit during the generalized service time.
then c k is the probability that k customers enter into the orbit during the passive repair time and generalized service time.
To develop the necessary and sufficient condition for the system to be stable. we first establish the embedded Markov chain of the system at departure epochs.
Let T k (T 0 = 0) be the time epoch at which the k-th customer leaves the system, N k = N (T k ) be the orbit size at the time of the kth departure, then the process {N k , k ≥ 0} is a Markov chain with state space N. Then we have the following theorem.
The same inequality is also the necessary condition for ergodicity. Assume that ρ + δ λ+α+δ ρ 1 ≥ α λ+α , which implies that x m ≥ 0 for all m ≥ 0. Furthermore, according to the one-step transition probabilities, we know that the down drift VOLUME 8, 2020 which implies that the Markov chain {N k , k ≥ 0} satisfies Kaplan's condition namely if the sequence {D m , m ≥ 0} is bounded below. Thus the Markov chain {N k , k ≥ 0} is not ergodic, and then the necessity of the ergodicity is proven.
Remark 1 (Special Case):
Suppose that no passive breakdown occurs in the retrial system, i.e., δ = 0, then our system is reduced to the M/G/1 retrial queue with active breakdowns and constant retrial times, which is a special case by taking A(x) = 1 − e −αx , x > 0 in Wang [23].
B. STEADY STATE ANALYSIS
In this subsection, we study the steady state distribution of the system by using supplementary variable method.
At time t, the state of the system can be described by the Markov process {(N (t), J (t), ξ 1 (t), ξ 2 (t), ξ 4 (t)) , t ≥ 0} , where N (t) is the number of customers in the orbit, J (t) denotes the state of the server defined as: 0, the server is idle 1, the server is busy 2, the server is under repair for an active breakdown 3, the server is during the delayed period 4, the server is under repair for a passive breakdown when J (t) = 1, ξ 1 (t) is the elapsed service time; when J (t) = 2, ξ 2 (t) is the elapsed repair time for an active breakdown; when J (t) = 4, ξ 4 (t) denotes the elapsed repair time for a passive failure.
Theorem 3.3: (1) The marginal PGF of the orbit size when the server is busy is give by
αP 0,0 .
(2) The marginal PGF of the orbit size when the server is under repair for an active breakdown is given by
) The marginal PGF of the orbit size when the server is under repair for a passive breakdown is given by
, which denotes the PGF of the number of customers in the orbit, let N S be the number of customers in the system at arbitrary time under stability condition, with PGF (z) = E[z N S ]. Then by (z) = 4 j=0 P j (z) and (z) = P 0 (z) + zP 1 (z) + zP 2 (z) + P 3 (z) + zP 4 (z), we can obtain the following Corollary.
C. PERFORMANCE MEASURES OF THE SYSTEM
Based on the results given in section 3.2, the main purpose of this subsection is to provide main performance measures of the queueing system. By direct calculation through L'Hospital's rule and routine differentiation, we can have the following Theorem 3.4.
Theorem 3.4: (1) Under the steady state condition, we have the following results:
• The Probability P 0 that the server is idle is given by • The Probability P 1 that the server is busy is given by • The Probability P 2 that the server is under repair for an active breakdown is given by P 2 = P 2 (1) = θ ν 1 P 1 .
• The Probability P 3 that the server is during delayed period is given by • The Probability P 4 that the server is under repair for a passive breakdown is given by Next, we make the analysis of a cycle of the system. A cycle of the system is defined to be the length of the period that starts at the epoch when the server completes a service and the orbit is empty, and ends at the epoch at which the server becomes idle and the orbit is empty once again. Obviously, = 0,0 + 0,1 4 j=1 j , where 0,0 is the length of the server's idle period with empty orbit, 0,1 is the length of the server's idle period with nonempty orbit, 1 is length of the server's busy period, 2 is length of possible repair period for an active breakdown, 3 is length of possible delayed period, 4 is length of possible repair period for a passive breakdown. Taking into account the possible occurrence of a passive failure in server idle period, we have that E[ 0,0 ] = 1 λ+δ . By applying the argument of an alternating renewal process, we know that Then the expressions for 0,0 , 0,1 , j , j = 1, 2, 3, 4, are given as follows: , j = 1, 2, 3, 4, . VOLUME 8, 2020
IV. RELIABILITY ANALYSIS
In this section, we aim to provide some important reliability indexes of the queueing model based on the results obtained in Section III. Suppose that the system is stable, let A be the steady state availability of the server, W f be the failure frequency of the server, then we have that (14) Next, we focus on studying the mean time to first failure MTTF of the server.
At initial time t = 0, the system is assumed to be empty and the server is idle, i.e., P 0,0 (0) = 1. Let Y be the time to the first failure of the server, then the reliability function of the server is . The expressions of U * (s) and MTTF are given in the following Theorem.
, where ζ (s) is the minimum absolute value root of the equation in the unit circle and Res(s) > 0.
(2) The expression of MTTF is given by .
Proof: To find U (t), define the failure states J = 2, 3, 4 of the server are absorbing states. For the new system with absorbing states, using the same notations as in Section 3, we know that and we have the following set of differential equations at time t: 1 (t, x), n ≥ 0, x > 0, (16) P n,1 (t, 0) = λP n,0 (t) + αP n+1,0 (t), n ≥ 0, where δ 0,n is the Kronecker's symbol.
By Rouché's theorem, the denominator of (25) has exactly one zero point z = ζ (s) inside the unit circle and it is also the zero point for the numerator of (25), which leads to .
V. ANALYSIS OF THE SOJOURN TIME IN THE SYSTEM
Sojourn time of an arbitrary customer can reflect the quality of service of the system. Based on this point, this section is devoted to discuss the distribution of the sojourn time T of any arbitrary tagged arriving customer, which is the length of the time interval from the epoch at which the tagged customer arrive at the system to the epoch at which the tagged customer leaves the system with his service completion. Let T (s) = E[e −sT ], by conditioning on the system's state at the tagged customer's arrival epoch, we have that where To derive the explicit expression of T (s), it is necessary to introduce two auxiliary random variables, one is the random variable T 1 , which denotes the length of time interval calculated from the epoch when the server becomes idle and the tagged customer is at the head of the system to the epoch when the tagged customer leaves the system; the other is the random T d , which denotes the length of time interval calculated from the epoch when a passive breakdown of the server occurs and the tagged customer is at the head of the system to the epoch when the tagged customer leaves the system.
With the help VOLUME 8, 2020 of the auxiliary variable T d , we can derive the expression of T 1 (s). Lemma 5.1: The Laplace transform T 1 (s) of T 1 and its mean value are given by as follows: Proof: For T 1 (s), by considering the order of the new arrival from outsider, the passive failure and the retrial time of the tagged customer who is at the head in the orbit, we have that Similarly, for T d (s), by conditioning on the beginning epoch of the repair for the passive breakdown whether at the epoch of the arrival from the outside or from the orbit, we have that Following from (32) and (33), we can obtain the result (30). By differentiating T 1 (s) with respect to s and then taking limit s → 0, i.e., E[T 1 ] = − d ds T 1 (s)| s=0 , we can get (31). Now we can derive the expressions of T (s) and the mean value E[T ], which are given by Theorem 5.1.
Theorem 5.1: The Laplace transform T (s) of the sojourn time T and its mean value E[T ] are as follows
where Proof: Recall that in reliability theory, if a nonnegative random X denotes a life time of a unit, with p.d.f f (x), c.d.f F(x), then the random variable X x = X − x|X > x is called residual lifetime, and the p.d.f f x (y) of X x is given by f x (y) = f (x+y) F(x) . In the following we first consider T k,1 (x; s) = E e −sT |N = k, J = 1, ξ 1 = x . Given that the tagged customer finds that the system is in the state (N , J , ξ 1 ) = (k, 1, x) at its arrival epoch, then the tagged customer joins the (k + 1)-th position of the orbit and its sojourn time is the sum of the three random variables: is the sum of k + 1 independently and identically distributed (i.i.d.) random variables with generic random variable T 1 , M is the number of active breakdowns occurring during B x , and R (M ) is the total repair times for M active breakdowns. Therefore we have that Adopting similar analysis line to the above, we can obtain the expressions for T k,2 (x, y; s) and T k,4 (x; s) as follows: Remark 3: Eq.(36) shows that the Little's law still holds in our retrial queue system, which will also be shown by the following numerical examples.
Under the stationary condition ρ + δ λ+α+δ ρ 1 < α λ+α , the base case for setting these system parameters is set below: δ = 0.25, θ = 0.95, µ = 2, and ν = 4. We assume that the values of the retrial rate α varies from 1 to 10 in the following Figures 1-4 and Tables 1-4, and each of the system parameters δ, θ, µ, and ν takes turn to change in a certain rang but keeps other system parameters fixed given in the base case. The purpose of this section is to illustrate the effect of these parameters on some important reliability indices, including steady-state availability A, the failure frequency W f and mean time to first failure of the server MTTF, and queueing measures, including the mean system length L s , the expected length of a cycle • The increase in the passive failure rate δ and active failure rate θ makes the server breakdown more frequently, and then decrease the steady-state availability A and the mean time to first failure of the server MTTF, but increase the failure frequency W f , the system length L s , the expected length of a cycle E[ ] and the mean sojourn time of an arbitrary customer E[T ], which is shown in Fig.s 1, 2 and Tables 1,2.
• The increase in µ and ν can shorten the repair time of the server and makes the server more available, which increases the steady-state availability A, but decreases the failure frequency W f , the system length L s , the expected length of a cycle E[ ] and the mean sojourn time of an arbitrary customer E[T ], which is shown in Fig.s 3, 4 and Tables 3,4. However, the changes in the values of µ and ν in the repair times of passive and active failures have no effect on the mean time to first failure of the server MTTF, because it is not calculated after the server fails for the first time, which can be seen from Tables 3,4.
VII. CONCLUSION
In this article, we have conducted an exhaustive study on an unreliable M/G/1 retrial queue with two-type breakdowns: one is passive failures with delayed repairs, the other is active breakdowns with immediately repair. Of course, such delayed repair process is different from that incurred by starting failures. The feature of starting failures is that the server may be broken down at the arrival epoch of a customer who arrives from outside or orbit and finds the server idle, in this case, the customer must start the server to receive its service. If the server is unsuccessfully started with some probability, it immediately accepts repair, otherwise, if it is successfully started with complimentary probability, it immediately renders service to the customer. However our delayed repair process is that when the server breaks down in idle period, i.e., a passive breakdown occurs, the server can begin its repair at the arrival epoch of a customer from outside or orbit. That is to say, the repair process of a passive failure is started by the next arriving customer (new or returning). For this model, we analyzed the sufficient and necessary condition for the system to be stable, the stationary queueing indexes, sojourn time in the system from the queueing viewpoint, and obtain reliability measures such as availability, server failure frequency, and mean time to first failure from reliability viewpoint. Some numerical examples were given to study the effect of some parameters on the important performance measures and reliability indices of the model. As one direction of further future research, it is very interesting to develop the discrete-time counterpart of our continuous-time retrial queue, the reason is that the discrete-time queueing system is more feasible to model computer and telecommunication systems. Another direction of future research, one can consider the equilibrium balking policy for the Markovian counterpart of our retrial queue from economic viewpoint. | 7,081.2 | 2020-01-01T00:00:00.000 | [
"Mathematics"
] |
Risk Assessment of Flash Flood to Buildings Using an Indicator-Based Methodology: A Case Study of Mountainous Rural Settlements in Southwest China
In southwest China, flash floods occur frequently and often cause severe damage to residential building areas, especially in mountainous rural settlements. Risk assessment is crucial in the hazard mitigation policies and measures. However, the study on the quantitative assessment of flash flood risk for buildings is still less explored. In this study, an indicator-based assessment approach is proposed to assess the risk of buildings threatened by flash floods. The flood hazard is first simulated with 1D/2D hydrodynamic model to determine the buildings exposed to the flood and flood inundation indicators. Then, a combination of virtual surveys and building census information is used to collect information on indicators of exposed buildings and their surroundings. The indicator scores are calculated using the building flash flood risk indicator system constructed in this study which includes the flood hazard and building vulnerability indicators. A flood risk index (FRI) combining flood hazard index (FHI) and building vulnerability index (VI) is developed by weighted aggregation of indicators using combination weights calculated by the game theory approach. Based on FRI, the flash flood risk of mountainous buildings is quantitatively assessed. Taken a key mountainous rural settlement in southwest China as an example, the proposed methodological framework enables the quantitative calculation and assessment of the risk of rural buildings to flash flood. The overall framework can provide an applicable approach for flood mitigation decisions in mountainous settlements.
INTRODUCTION
Flash flood is one of the main challenges to the regional water security. Especially in the rural settlements in the mountainous area, flash flood can huge casualties and building damages to the local resident community, making the flood risk assessment an important issue for flood mitigation and decision-making (Petrow et al., 2006;Totschnig and Fuchs, 2013). The flood risk assessment of buildings is a fundamental issue in the flood risk assessment which provides important information for the safety of local residents and property.
The indicator-based method is one of the typical methods in flood risk assessment. (Papathoma-Köhle et al., 2017). The method carries out quantitative risk assessment by selecting and quantifying appropriate indicators and weighting them together into a composite index. It is widely used by decision-makers worldwide because of its simplicity and efficiency (Papathoma-Köhle et al., 2019a;Fuchs et al., 2019;Malgwi et al., 2020). Although the studies based on indicators to evaluate the physical risk of buildings to natural hazards are gradually increasing, the risk of village buildings to flash flood is less explored (Romanescu et al., 2018;Leal et al., 2021). The current research on the indicator-based method mainly focuses on the vulnerability characteristics of the buildings without the consideration of the hazard intensity that plays a key role in disaster risk. Previous studies used water depth and sediment height as flood hazard intensity indicators (Dall'Osso et al., 2016;Papathoma-Köhle et al., 2019b;Papathoma-Köhle et al., 2022), which cannot describe comprehensively the risk characteristics of flash flood (such as flow velocity and solid debris) to buildings. Therefore, there exists a need to further examine the physical risks of flash floods for buildings.
The indicator method includes steps such as indicators selection, weighting, and aggregation into a final index. Indicator weighting is the most sensitive step in constructing the index, because indicator weights may have a great impact on the final index results and thus on decision making (Becker et al., 2017;Papathoma-Köhle et al., 2019b). Indicator weighting methods can be divided into two categories: subjective weighting method and objective weighting method (Zou et al., 2020). The most commonly used weighting methods in current natural disaster risk evaluation studies are subjective weights represented by the expert scoring method and analysis hierarchical process (AHP) (Beccari, 2016). The subjectivity is the biggest shortcoming of the method, which relies heavily on the experience of decision-makers (Cutter et al., 2008;Yankson et al., 2017). The different judgment criteria of decision-makers lead to large differences in indicator weights between different studies, for example, Leal (2021) considers building materials as the most important indicator affecting flood vulnerability, while Romanescu et al. (2018) assigns a higher weight to the distance of buildings from the river. The objective weight method relies on the characteristics of the objective data sample to determine the weights, which includes principal component analysis (PCA) (Thouret et al., 2014), factor analysis (Ettinger et al., 2016), etc. With the continuous application of machine learning algorithms in various fields, another new objective weighting method based on machine learning algorithms has been increasingly studied. For example, Papathoma-Köhle et al. (2019a) relied on historical storm event disaster damage data, used random forests algorithm to select key indicators, and assigned indicator weights based on factor importance analysis. However, objective weights tend to ignore the effects of the randomness of sample data and their own differences, thus yielding results that may deviate more from the actual situation. In a word, both the subjective weighting method and the objective weighting method have certain limitations, therefore, a feasible combination method should be proposed to combine the two weights to solve the above problem.
In this paper, the main purpose of this study is to quantitative evaluation of flash flood risk for buildings in mountainous rural settlements. The novelty of this paper is to develop a hybrid flood risk index (FRI) which combines the flood hazard index (FHI) and the building vulnerability index (VI). The index is obtained by weighting the flood risk indicators using the game-theoretic combination weights. Taken Jiecun Village, Southwest China as an example, the new index-based approach is used for flash flood risk of buildings. The approach provides a comprehensive risk assessment of buildings in the mountainous area and provide a scientific basis for spatial planning and flood risk management. The proposed method can be applied to other areas facing similar problems.
STUDY AREA
The Jiecun village is located in the Shouxi River basin within Wenchuan County in southwest China ( Figure 1). After the 2008 Sichuan earthquake (magnitude 8.2), flash floods in the Shouxi River basin erupted more intensely and are often accompanied by secondary hazards such as debris flows, which seriously threatened the safety of local residential areas. Especially two major flash floods occurred on 20 August 2019 and 17 August 2020 (referred to as the "8.20" flood and "8.17" flood), causing severe damage. The Jiecun village is one of the most severely affected villages in the "8.20" flood. It is a typical mountainous rural settlement in southwest China, which has a resident population of more than 600 people and 230 buildings The Jiecun village is located in the heavy rain center in the Longmen Mountain rainfall area, where heavy rainfall occurs frequently. And the village area is the confluence area of three major tributaries of Shouxi River-Xi River, Zhong River, and Heishi River, it is vulnerable to flood disasters. Meanwhile, since the village is located in Sanjiang township and the national 4 A scenic spot-Sanjiang Ecological Tourism Zone, the relatively dense distribution of houses and population also make Jiecun village face the threaten of severe flood damage. Therefore, the extreme rainfall conditions, special topographic environment, and the severity of the flood response contribute to the flash flood in this area. And there is an urgent need to carry out a quantitative assessment of the flash flood risk of buildings in this area. Figure 2 represents the flowchart of the build risk assessment in this study. The main processes include flood hazard modeling, building vulnerability analysis, and building flood risk assessment.
Flood Hazard Modeling
In this study, the hydrological-hydraulic model is used to calculate water depth and velocity for flood inundation indicators of buildings and to identify the number of buildings exposed to flooding. The 1D/2D hydrodynamic models included in the MIKE FLOOD software package are used to simulate the flood inundation scenarios in the study area. Flooding is simulated with a combination of 1D and 2D elements. The river channel is modeled as a 1D element, and the overland flooding is modeled as a 2D process. In the flood simulation, the DEM (2 m spatial resolution) generated from LiDAR data is used as the topography of the floodplain, and the building hole method is used to consider the effect of building water retention (Tsubaki and Fujita, 2010). More details of the model can be found in the Supplementary Material.
After the "8.20" flood in 2019, the dike of local river sections in this area have been renovated and the flood protection standard of the river has been raised to 20-years return period. The inundation scenarios of the study area are simulated with two flood events under the return periods of 50-and 100-year. Due to the lack of field-observed data from gauging stations, the discharge process of the selected return period was generated by the empirical formula method including the hydrologic analogy method and the rational formula method (Chin, 2019).
Building Vulnerability Analysis
A critical part of building vulnerability analysis is the creation of building vulnerability dataset. Building vulnerability dataset includes information of building features and surrounding environment. The conventional method is based on a manual survey which usually takes a lot of time. In this study, a combination of virtual survey and existing building census dataset in the study area is used to improve the efficiency. Based on the 3D real scene model of the study area and remote sensing images obtained from the survey by unmanned aerial vehicle, the virtual survey is conducted to extract information of building footprints and building attributes such as number of stories (NF), building openings (OP), rows towards river (RO), and fence type (FT). Some attributes that are difficult to identify in the 3D real scene model, such as material structure type (MS) and age (AG), can be obtained through building census information provided by the Housing and Urban Development Bureau of Wenchuan County. There is an example of building information collection in Figure 3. Building footprint and building attribute data are linked together based on unique identifiers and stored in an ArcGIS geodatabase to facilitate further analysis.
Indicator-Based Flood Risk
An indicator-based flood risk used in this study includes the three steps: 1) indicator selection, 2) indicator weighting, and 3) indicator aggregation (Malgwi et al., 2020). A hybrid flood risk index is proposed to evaluate the building risk. The specific process for the development of the flood risk index of buildings is shown in Figure 4.
Indicator Selection
The selection of relevant indicators is based on literature review, empirical observation from the field study, and expert consultation. Following the definition of risk "Risk = Hazard × Vulnerability" (UNDHA, 1992), an indicator system of the flash flood risk of buildings is proposed with 8 indicators and can be divided into two parts: 1) flood hazard (flood intensity and debris factor), and 2) building vulnerability (material structure type, age, number of floors, openings, rows towards river, and fence type), which are described in detail below.
Flood Hazard
The damage to buildings by flash floods is mainly affected by floodwater flow and the debris it carries (Leal et al., 2021). Therefore, flood intensity and debris factor are determined as indicators of the flood hazard.
3.3.1.1.1 Flood Intensity. To facilitate the use of indicators to simplify the assessment, a mathematical expression as Eq. 1 with water depth and flow velocity as independent variables is established based on Defra's recommended average size-based damage scale (DS) matrix (Defra, 2006), which is adapted from Kelman's research on building flood vulnerability (Kelman, 2002). The mathematical expression is named as the flood intensity (FI) indicator, representing the intensity of the action of floodwater flow on buildings. This indicator is based on the physical concept of flood momentum (Pistrika and Jonkman 2010), while the form of this indicator is a reference to the Defra human flood hazard formula (Defra, 2006).
Where FI is the flood intensity (m 2 /s), v is the incoming flow velocity near the house (m/s), h is the water depth near the house wall (m), and a is a constant.
Logistic fitting in the non-linear fit function of Oringin software is used to analyze the relationship between flood intensity (FI) and damage scale (DS). According to the fitting analysis ( Figure 5), when a = 3, the best fit is obtained with the fitting correlation R 2 of 0.969, and the following fitting equation can be obtained. (2) Where DS is the house damage scales (1-5) defined by Kelman (Kelman, 2002). Therefore, FI (v + 3) × h is selected as the final form of the flood intensity indicator. According to Eq. 2, based on different damage scales (DS), the FI value range of different grades is divided, and the results are shown in Table 1. 3.3.1.1.2 Debris Factor. In the Shouxi River watershed with active tectonics and poor geological conditions, debris flows are likely to occur along with flash floods. The occurrence of debris flow will greatly increase the content of solid debris materials in the flood, increasing the destructive power of the flood, which in turn will cause more serious damage to the buildings. Therefore, it is necessary to propose a new indicator-debris factor (DF)-to measure the degree of increased flood hazard of debris flow occurrence.
With the consideration of the quantity of the debris and the probability of its occurrence (Defra, 2006), the debris factor is calculated as: Debris factor quantity of debris × probability of occurrence The quantity of the debris can be expressed in terms of the bulk density of debris flow Cd, and the probability of occurrence can be expressed in terms of the probability of debris flow occurrence Pd. The debris factor (DF) can be calculated by the following equation.
DF Cd × Pd
(4) In this study, the table look-up method which is commonly used in the previous studies is used to calculate the values of Cd and Pd parameters [the readers can refer to Specification of geological investigation for debris flow stabilization (China Geological Disaster Prevention Engineering Association 2006) for the details]. According to the specification, the Cd parameter is calculated based on watershed parameters, and the Pd parameter is calculated based on precipitation in 10 min, 1 h, and 24 h. The Cd and Pd can be divided into four classes. The ordinal scale method was used to assign a score from 0 to 1 to each class of Cd and Pd ( Table 2). Then through Eq. 4 and the matrix method, the Cd and the Pd were combined to calculate the DF, and the debris factor hazard class is classified (Tables 3, 4).
Vulnerability
Referring to the Papathoma tsunami vulnerability assessment model framework (PTVA) (Dall'Osso et al., 2010;Dall'Osso et al., 2016;Papathoma-Kohle et al., 2019b), building vulnerability indicators consist of the physical characteristics of buildings (including material structure type, age, number of floors, and openings) and the environment surrounding buildings (including rows towards river, and fence type). The details are as follows.
3.3.1.2.1 Material structure type (MS). MS represents the material used in the building structure and is the main factor influencing the vulnerability of the building (Fuchs et al., 2007;Müller et al., 2011;Guillard-Gonçalves et al., 2016). According to the survey, MS in Wenchuan County mainly includes wood, adobe and wood mixed, brick and wood mixed, masonry walls with concrete, and reinforced concrete structures.
3.3.1.2.2 Age (AG). AG refers to the year of construction of the building, and the age of the building. It has an influence on the expected structural damage from flooding. The degradation and technology level of the building is related to the age of the building (Leal et al., 2021). It can be assumed that newer buildings have higher resistance to flood damage. According to the building census information, AG is divided into five categories: before 1980, 1981-1990, 1991-2000, 2001-2010, and after 2010.
Number of floors (FL)
. FL can affect building vulnerability from different perspectives. On the one hand, more floors increase the vertical load on the ground floor, which leads to a greater resistance of the building to lateral loads of floods and is less likely to be destroyed and damaged (Kelman, 2002). On the other hand, the more floors, the smaller the proportion of exposure and the smaller the vulnerability (Papathoma-Köhle et al., 2010;Papathoma-Köhle, 2016;Leal et al., 2021). FL can be divided into three categories: one floor, two floor, and three or more floors.
Openings (OP)
. OP measures the presence and location of doors, windows, or other openings. According to the postdisaster field survey in Wenchuan, it can be observed that buildings with large openings (e.g., roll-up doors, floor-toceiling windows) on the exterior walls exposed to floodwater are significantly more severely damaged than buildings with small openings (e.g., single doors or windows) or no openings. Understandably, the presence of large openings on the exterior walls facilitates the entry of water into the building, and the entry of floodwater will greatly increase the damage to the interior of the building (Fuchs, 2009;Papathoma-Köhle, 2016;Leal et al., 2021). Therefore, OP can be classified into 3 categories: large openings, small openings, and no exposed openings or openings above the flood level.
3.3.1.2.5 Rows towards river (RO). RO can affect direct damage to buildings from flooding. The study shows that the front row of buildings in a row layout has a sheltering effect on the back row of buildings (Dall'Osso et al., 2010;Dall'Osso et al., 2016). Buildings in the back row, protected by the front row of buildings, have relatively little flood damage. RO can be divided into first row, second row, and third or more rows. Dall' Osso et al., 2016). According to field research, there are mainly two types of house fences in Wenchuan County: semienclosed fences and fully enclosed fences. Obviously, the protection effect of fully enclosed fences is stronger than that of semi-enclosed ones. Therefore, according to the fence protection effect, FT can be divided into three cases: fully enclosed fences, semi-enclosed fences, and no fences.
Indicator Scaling System Development
To quantify the impact of indicator attributes on risk, an indicator scaling system should be developed. The indicator values are classified according to their influence on flood risk, and the ordinal scale method is used to assign a standard score of 10-100 to each category of indicator attributes. In the scaled system, 10 indicates the lowest contribution to flood risk and 100 indicates the highest contribution to flood risk. The scaling system is shown in Figure 7.
Indicator Weighting
Each indicator considered in this study has a different impact on the flood risk of buildings, so each indicator should be given a different weight. To correct the one-sidedness of the single weighting method, this paper first used analysis hierarchical process (AHP) and random forests (RF) method to generate indicators weight, and then adopted game theory (GT) method to integrate the two weights to determine the combined weights.
Weights Based on AHP
Analysis hierarchical process (AHP) is an important tool developed by Satty to support decision making. It constructs a judgment matrix by comparing indicators in pairs and calculates the weights (Saaty, 1988). We invited 30 experts from different professions and organizations to participate in the work of judging the importance of indicators. Among the experts, 50% 15) are researchers from research institutions, 30% 9) are engineers from front-line production units, and 20% 6) are staff from water/construction-related management agencies. The experts judged the importance of the indicators based on their understanding and experience on flood risk. Finally, the experts' feedback is aggregated using an arithmetic averaging method.
The main process of the AHP method is as follows. 1) Constructing the hierarchical model. The decision objectives are decomposed into several levels according to the different attributes of each factor. The uppermost level is the objective level, the middle level is the criterion level and subcriterion level, and the bottom level is the indicator level, as shown in Figure 6.
2) Constructing a judgment matrix, starting from the second level of the structural model, and comparing the factors in the same level in pairs according to the experts' judgment.
3) Calculating the weight vector. The eigenvectors and the maximum eigenvalue λ max of the judgment matrix are calculated. The consistency ratio (CR) is calculated using Eq. 5 to validate the AHP results, where: CI is the consistency index, n is the matrix dimension, and RI is the random index.
When CR < 0.1, the judgment matrix has satisfactory consistency; otherwise, the judgment matrix should be reconstructed until it is satisfactory.
In this study, the CRs of all pairwise comparisons are less than 0.1 (see Table 5), which shows that the judgment matrix is consistent. The weights of each indicator calculated by AHP are shown in Table 6.
Weight Based on RF
The Random Forest algorithm (RF) is a machine learning algorithm proposed by Leo Breiman that combines the bagging idea and the random selection of features (Breiman, 2001). RF can provide estimates regarding the importance of variables, which in our case can help decision-makers to assess the contribution of each indicator in representing the damage process. There are two variable importance metrics: mean We collected data from a sample of 117 buildings affected by the "8.20″ flood of 2019 and "8.17″ flood of 2020 in Wenchuan County through post-disaster field research. Each building is classified according to the classification of mountain torrent damage to the building ( Table 7) to determine the damage class (Zhen et al., 2022). Since flood hazard indicators (flood intensity and debris factor) could not be collected from the field survey, we only analyzed the contribution of building vulnerability indicators to the building damage class. Each building vulnerability indicator is assigned a standard score according to the indicator scaling system (Figure 7), which is taken as a predictor variable. Taking the damage class as the dependent variable, the importance of each indicator is obtained using random forest classification algorithm.
The random forest classification model is constructed using the R-packages randomForest. The main model parameters that need to be set include the number of trees (ntree) and the number of candidate variables randomly selected for each split in each tree (mtry). In this paper, based on the bootstrap method, the optimal parameters are obtained by comparing the Out-of-Bag (OOB) errors of random forest models with different parameter settings. The final value of the model parameters ntree is 1900, and mtry is 2. The importance (MSERi) and weights of each vulnerability indicator calculated by the RF model are shown in Table 8.
Weight Based on GT
Game theory (GT) is the mathematical modeling of strategic interactions between intelligent rational decision-makers, specifically addressing conflicts between two or more participants (Zou et al., 2020). The weights obtained by AHP and the weights obtained by RF are considered as the two participants of the game, while the combined weights are the outcome of the game. The GT approach aims at Nash equilibrium to achieve agreement and compromise between the two weights, so that the respective deviations between the combined weights and the AHP and RF weights are minimized. The steps for calculating the GT-based combination weights are as follows. Step 1 Assuming that n weights are calculated using L methods, a basic weight vector set denoted as w n {w k1 , w k2 , . . . , w km }, (k 1, 2, . . . , L). A possible combination weight vector w is achieved by w n with the arbitrary linear combination coefficient α {α 1 , α 2 , . . . , α L }: Step 2 The most satisfactory weights can be determined by finding the coordination and compromise between various weighting methods. A most satisfactory linear combination coefficient α k is sought to minimize the deviation between w and w k , to achieve a compromise among various weights. The optimization function is to minimize the deviation between w and w k by using the following equation: According to the differential properties of the matrix, the optimal first-order derivative condition equivalent to Eq. 9 is which corresponds to the following system of linear equations.
Step 4 Calculate the game theory portfolio weight vector.
Based on Eqs 8-12, the combination coefficients of AHP weights and RF weights are calculated as α * 1 0.3504 and α * 2 0.6496. The combination coefficients are substituted into Eq. 13 to obtain the combination weights. The combination weights of vulnerability indicators based on game theory are shown in Table 9.
Indicator Aggregation
The weighted linear combination (WLC) method is used to further integrate the indicator scale values and indicator weights to create a single composite index. Due to data limitations, the calculation of building vulnerability index (VI) is based on the GT combination weights, and the flood hazard index (FHI) and flood risk index (FRI) is calculated based on the AHP weights. Figure 7 illustrates the computation framework of the flood risk index.
Combining flood intensity and debris factor indicators, the flood hazard index (FHI) is calculated as follows.
FHI 0.67 × FI + 0.33 × DF The building vulnerability index (VI) is computed by weighting building indicators.
The flood risk index (FRI) is determined by combining the flood hazard index (FHI) and the building vulnerability index (VI).
To allow for comparison of buildings in different locations under different flood scenarios, buildings are classified into five risk classes based on FRI using the equal interval method: very low (10-28), low (28-46), medium (46-64), high (64-82), and very high (82-100). The flood risk class is mapped using GIS.
Flood Extent and Exposed Buildings
By hydraulic model calculation, flood inundation results (including flood extent, water depth, and flow velocity) are derived for the flood scenarios with the 50-and 100-year return periods (R T ) in the study area (See Figure 8). The analysis of these flood characteristic indicators allows us to make a preliminary judgment on the magnitude of flood events and their impact on the buildings.
Based on the flood extent, the buildings exposed to flood water are identified and the potential exposure of the buildings (the part of the building in contact with flood water) is estimated to determine how the buildings are affected by floodwater. Table 10 shows the statistics of inundation indicators of buildings in the two flood scenarios. In the 50-year flood scenario, 39 buildings in the study area are affected by flooding, of which 7 (18%) buildings are potentially exposed to more than 50%, and only 2 buildings are fully exposed. In the 100-years flood scenario, 72 buildings in the study area are affected by flooding, of which 26 (36%) buildings are potentially exposed to more than 50% and 7 are fully exposed. And this indicates that buildings will be generally severely affected by flooding in the 100-years flood scenario. In the 50year and 100-year flood scenarios, the average maximum inundation depth of flood-affected buildings is 0.38 and 0.62 m, and the average maximum flood flow velocity on the building surface is 0.86 m/s and 1.88 m/s respectively. The changes in the number of inundated buildings and building exposure as well as flood action (water depth, flow velocity) on exposed buildings indicated that compared to the 50-year flood scenario, the adverse effects of flooding on buildings in the study area would increase dramatically in the 100-year flood scenario.
Flood Hazard
The water depth and flow velocity simulation results are processed using a processing program developed by the authors to generate a maximum flood intensity (FI) layer (see Figures 9A,B). Based on the flood intensity layer, the maximum flood intensity (FI) near the building is extracted to represent the intensity of flood action on the building. The maximum FI on the building surface ranges from 0.2 to 6.5 m 2 /s in the flood scenario with 50-years R T and from 0.3 to 9.2 m 2 /s in the flood scenario with 100-years R T . According to the calculation method described in Section 3.3.1, the value of debris factor (DF) is calculated by Eq. 4. Applying the debris hazard matrix, the debris factor hazard class is determined based on the DF value. In the two flood scenarios, the DF of the buildings on both sides of the Xi River is 0.49, which belongs to the high debris factor hazard, while the DF of the buildings on both sides of the Zhong River is 0.35, which belongs to the medium debris factor hazard (see Figures 9C,D).
The calculation results of FI and DF indicators are assigned standard scores, and the flood hazard index (FHI) values are calculated according to Eq. 14. Figures 9E,F shows the distribution of FHI of the building. Based on the location of the buildings to the river, the flooded building areas can be roughly divided into four zones (see Figure 9): the left bank of the XI River (Zone I), the right bank of the XI River (Zone II), the right bank of the Zhong River (Zone III), and the left bank of the Zhong River (Zone IV). It can be seen that the distribution of FHI is basically consistent with the distribution of FI, which is also influenced by DF. The high debris factor hazard in Zone I and Zone II increases the FHI value of buildings in the area to Figure 9F) in Zone II of the Xi River for the 100-years flood scenario increase by 5, 13, and 20, respectively, compared to the FI standard score. This indicates that considering the influence of the debris factor (DF) indicator will increase the flood hazard of buildings to different degrees, which is important for a comprehensive understanding of flash flood hazard in mountainous areas.
The FHI of the building is between 20 and 75 for both flood scenarios. With reference to the flood risk classification, the FHI is also divided into the same five intervals, corresponding to five hazard classes. No buildings belong to very high hazard class, and the statistics of the number of buildings with the remaining flood hazard classes are shown in Table 11. In the flood scenarios with 50-and 100-year R T , the hazard of buildings to flash floods ranges mainly from very low to low, which account for 92% and 76% of the exposed buildings respectively. The distribution of flood hazard classes indicates that the overall flood hazard of the flooded buildings is low. Compared with the buildings in the 50-years R T flood scenario, the number and proportion of the buildings of Medium-and high-hazard class increased significantly in the 100-years R T flood scenario, with the number of Medium-hazard buildings in flash floods increasing from 2 (5%) to 9 (13%) and the number of high-hazard buildings increasing from 1 (3%) to 8 (11%). The overall flood hazard of buildings increased significantly. According to the distribution of FHI, the overall flood hazard of buildings in Zones I & II is higher than that of buildings in Zones III & IV (Figure 9 and Figure 13), indicating that buildings on both sides of the Xi River are more vulnerable to flood damage, which is also generally consistent with the building damage observed in the post-flood survey of "8.20" flood.
Building Vulnerability
Before carrying out a more detailed analysis of the vulnerability results, the statistics of the vulnerability indicators can help to better understand the building vulnerability results. With this aim in view, Figure 10 shows the frequency distribution of vulnerability indicators for exposed buildings in the 50-year and 100-year flood scenarios. In both scenarios, the material and structural types of flooded buildings are mainly masonry walls with concrete and reinforced concrete structure, and the buildings are mainly constructed after 2000, with most buildings of 3 floors and above. This is due to the fact that most of the buildings in the area were rebuilt after the Great Wenchuan Earthquake (Wang, 2008), and the housing safety was fully considered during the construction process. Therefore, the distribution of the structure type, building age and number of floors of the flooded buildings tends to be concentrated, and the buildings generally show a strong disaster resistance. However, due to the canyon topography of the area and the commercial function of the buildings (stores, restaurants, etc.), the flooded buildings are mostly large openings located in the front row of the river, which implies a potentially higher vulnerability. Also, most of the buildings in the area are open-sided, with no fence protection to reduce building vulnerability, while individual buildings having fully or semi-enclosed fence. It can be seen that building vulnerability is influenced by multiple indicators, and some indicators have even opposite effects on building vulnerability. Therefore, a comprehensive vulnerability index is FIGURE 10 | Frequency distribution of vulnerability indicators of exposed buildings (A) material structure type (B) age, (C) number of floors (D) openings, (E) rows towards river, and (F)fence type.
Frontiers in Environmental Science | www.frontiersin.org June 2022 | Volume 10 | Article 931029 established by integrating building vulnerability indicators, which can help to better reflect the comprehensive performance of building resistance to flood damage. Based on the combined weights, the vulnerability index (VI) of buildings is calculated by weighting the building attribute indicators. Figure 11 shows the distribution of the VI of the exposed buildings in two scenarios. The vulnerability of nonflooded buildings is equal to zero because flooding does not pose a threat to these buildings. The VI ranges from 44 to 74 in both scenarios. Referring to the flood risk classification, the VI is divided into the same five intervals corresponding to the five vulnerability classes. There are no buildings in the extreme classes very low or very high, and the statistics of the number of buildings in the remaining vulnerability classes are shown in Table 12. The interval distribution of VI is consistent for the 50-year and 100year flood scenarios, with medium-and-high vulnerability buildings dominating, of which medium-vulnerability buildings accounted for 56% and 50% respectively, and highvulnerability buildings accounted for 36% and 47% respectively, showing an overall high building vulnerability.
It is noteworthy that Figure 11 show that the vulnerability of buildings partially exposed to flooding in both scenarios change, which is mainly caused by the different building openings exposed to different flood levels. In the inundation level of flood with 50-years R T , buildings have no exposed openings, while in the inundation level of flood with 100-years R T , buildings have exposed large window and door openings or small openings, which proves the importance of the location of the building openings. Comparing the distribution of FHI and VI (Figure 9 and Figure 11), it can be found that some buildings with high vulnerability are also buildings with high flood hazard (e.g., 3-1 and 3-2 in Zone III, 1-1 building in Zone I and 4-1 in Zone IV), which implies extremely high potential damage, and these buildings should be the focus of attention.
Flood Risk Analysis
The flood risk index (FRI) of each building is calculated by combining the FHI and VI. The FRI of the exposed buildings is between 33 and 71 for both flood scenarios. According to the flood risk classification criteria described in 3.3.4, the flood risk of buildings in the study area in both scenarios is concentrated in three risk classes: low, medium, and high ( Figure 12). There are no buildings in the very low and very high classes, and the statistics of the number of buildings in the remaining flood risk classes are shown in Table 13. The two flood scenarios are dominated by low and medium-risk buildings, which account for more than 92% of the buildings, and the overall risk level is low. Compared to the 50-years R T flood scenario, the number and percentage of medium and high-risk buildings increased significantly in the 100-years flood scenario, with the medium risk buildings increasing significantly from 11 (28%) to 34 (47%) and the high-risk buildings increasing from 1 (3%) to 6 (8%). Thus, there is a significant increase in the overall building risk.
The overall flood hazard, vulnerability and flood risk of buildings is analyzed at zone level, based on average values of FHI, VI, and FRI in each zone ( Figure 13). For the 50-years flood scenario, the average value of FRI in each zone from high to low is Zone I > Zone II > Zone III > Zone IV. For the 100years flood scenario, the average FRI value of each zone from high to low is Zone I > Zone III > Zone II > Zone IV. The ranking of the mean FRI values for each zone in the two scenarios is consistent with the FHI values and differs significantly from the ranking of the VI values. It can be stated that the high flood risk of buildings in the study area is mainly driven by high flood hazard. And high VI values of buildings also increase the overall flood risk of buildings in the zone to some extent (e.g., Zone III and IV). The area with the highest flood risk of buildings is Zone I. According to the field research, it can be seen that there are relatively low gap sections of the dike of the river in Zone I, which has not formed a closed dike protection. When the floods overflow the dike, a large amount of floodwater enters the building area from the gap section of the dike, which will cause serious flooding and impact damage to the buildings in the area. Therefore, when implementing regional flood prevention and mitigation measures, priority should be given to improving the construction of flood prevention measures in Zone I to control the flood risk in this subzone. High risk buildings may be of greater concern in building risk mitigation efforts. As shown in Figure 12 and Table 13, there is only one high-risk building in the 50-years R T scenario, which is building 1-1 in Zone I of the Xi River. And there are six high risk buildings in the 100-years R T scenario, which are building 1-1 and 1-2 in Zone I, building 2-1 in Zone II, building 3-1 and 3-2 in Zone III, and building 4-1 in Zone IV, respectively. The building with the largest FRI value is building 1-1. When planning building mitigation measures, it is clear that priority should be given to these high-risk buildings, especially building 1-1.
DISCUSSION
The FRI is derived quantitatively using an indicator-based method, which requires awareness of its applicability, limitations, and challenges. Some key steps in the methodological process are discussed further below, including 1) selection of indicators, 2) aggregation of indicators, and 3) data acquisition.
The selection of indicators is the first step in creating a flood risk index, which depends on the characteristics of flood damage to buildings and the building features of the affected area (Kappes et al., 2012). In this study, flood intensity (FI) indicator consisting of inundation depth and flow velocity and debris factor (DF) indicator consisting of the quantity of debris and probability of occurrence are established as flood hazard indicators for mountainous areas. The determination of flood hazard indicators took into account the theoretical mechanisms of building damage caused by floodwater flow and the actual characteristics of combined disaster caused by flood and debris flow in this area. Compared with the previous studies, which directly used the inundation depth as flood risk indicators, the introduction of flood intensity (FI) and debris factor (DF) indicators is an innovation of this paper, which can better reflect the disaster-causing characteristics of flash floods on buildings in mountainous areas. As to the selection of the building vulnerability indicators, we took into account the disaster resistance characteristics of the buildings and the surrounding environment of the buildings in the study area as well as the accessibility of the indicators. After expert evaluation, six building vulnerability indicators are finally determined, including the material structure type, age, number of floors, openings, rows toward river, and fence type, which are generally accepted by many studies (Godfrey et al., 2015;Miranda and Ferreira, 2019;Chao et al., 2021;Leal et al., 2021). However, due to the lack of historical disaster data, flash flood scenarios are difficult to accurately recreate and simulate, and the validity of the indicators still needs further validation.
The weight assignment is the most critical part of the indicator aggregation and the most sensitive step in the construction of the risk index. In this paper, the weights based on analysis hierarchical process (AHP) and random forest algorithm (RF) (Figure 14) are analyzed and compared. The AHP method considers building material structure type and openings as the two relatively most important indicators with the greatest impact on flood risk, while the least important ones are age and number of floors. However, influenced by experts' experience, the AHP method has strong subjectivity and does not depend on the objective data law, which may lead to unreasonable results. According to the RF method, building material structure type (MS) and enclosure type (FT) are relatively most important, while building age (AG) and number of building stories (FL) are relatively least important. Compared with AHP method, the RF method presents polarized weight values, i.e., the important indicators have larger weight values, such as MS and FT, while the unimportant ones have smaller weight values, such as FL. The analysis results of the RF method are easily affected by the quality of collected samples. The small number of samples used in this study as well as the concentration of building types may affect the results of variable importance, which may lead to a deviation between the calculated importance of the indicators and the actual importance. The shortage of sample size is a limitation of this study, and more sample data of the affected buildings need to be collected in the future to improve the results. In addition, the field survey data set does not contain data of flood intensity and debris factor indicator, and only the importance analysis of building vulnerability indicators has been carried out. Further disaster data should be collected. The flood inundation characteristics can be obtained by accurate restoration simulation of flood scenarios, and relevant indicators data in disaster data sets can be supplemented to carry out a more comprehensive analysis.
The two methods, AHP and RF, have less divergence in judging the importance ranking of indicators, and the main differences are reflected in the specific weight values. The combination weight result obtained by the game theory method (GT) is more similar to that of RF, because the ratio of AHP to RF is determined by the weight combination coefficient reaching Nash equilibrium, and the combination coefficient corresponding to RF weight is greater than that of AHP. GT method makes some abnormal values more reasonable by increasing the weight of the MS and FT indicator of AHP, as well as increasing the weight of the AG and FL indicators of RF. Thus, the combined weights based on the GT method overcome the problem of the one-sidedness of single weights and the results are more reasonable.
In addition, the way of acquiring and processing indicator data will affect the accuracy of risk calculation results and the efficiency of the whole evaluation process. In the flood hazard modeling of this study, hydrological and hydrodynamic methods are used to obtain parameters such as inundation extent, water depth, and flow velocity in the assumed flood scenario. The flood simulation adopted high-precision terrain and generalizes the water blocking effect of individual buildings by building hole method, which can more accurately simulate the effect of flood on buildings on the microscopic scale. The modeling approach provides more accurate values for flood hazard indicators, which in turn improves the accuracy and rationality of the risk calculation results. In the process of building vulnerability dataset creation, the data of building vulnerability indicators are mainly collected through the combination of virtual survey based on unmanned aerial survey and remote sensing technology as well as building census data. Although the 3D real scene model may have the problem of the low resolution of local images and needs to be verified with the aid of some field research photos, the building information collection time is greatly reduced and the efficiency is significantly improved compared with manual field collection, which proves the feasibility of this data collection method, and is conducive to the promotion of risk assessment methods in this study.
The analysis of the flood risk results shows that the risk of flash flood to buildings can be reduced by controlling the flood hazard and reducing the vulnerability of buildings. For the first approach, flood control engineering measures such as reinforcing some substandard dike in the Xi River and Zhong River are required to reduce the extent and frequency of flood inundation of building areas. For the second method, measures to reduce the vulnerability of buildings include relocation, optimization, and improvement of building structures and construction of protective facilities attached to buildings such as fences (Wei et al., 2021) to increase the resilience of buildings to flooding.
CONCLUSION
This study proposed an indicator-based methodological framework to assess the flood risk of rural buildings in mountainous areas. The main findings are as follows.
In the flood risk indicators, the flood hazard indicators which combines flood intensity (FI) and debris factor (DF) performs better than the traditional indicator of flood hazard such as water depth used in the previous studies. The FI and DF indicators can better reflect the disaster-causing characteristics of floods to buildings in mountainous areas of southwest China. Another contribution of this paper is to combine analysis hierarchical process weights with random forest algorithm weights through game theory, which solves the subjectivity problem of previous indicators and makes the weights of indicators more reasonable.
The quantitative calculation method proposed in this study is applied to assess the flash flood risk of buildings in Jiecun Village, a typical mountainous rural settlement in southwest China. The results showed that in the flood scenarios with the 50-and 100year return period, 97% and 92% of the flooded buildings in the study area are at low and medium risk, signifying an overall low risk level. The inundation zone with the highest overall flood risk of buildings is Zone I on the right bank of the Xi River. The results highlight this method can not only achieve quantitative assessment of the overall risk of buildings in the inundation area, but also manage to identify buildings with different risks, providing reference for implementing different disaster response measures.
Despite some limitations and uncertainties in the risk assessment process, the methodological framework still has good application value for flood mitigation decisions to buildings. The effective management of flood risk of buildings will be achieved through the implementation of both hazard risk mitigation and vulnerability reduction measures, such as raising the standard of the dike, renovating building structures, and constructing building accessory protection facilities, to ensure that the FRI of buildings is below a certain threshold value.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material, further inquiries can be directed to the corresponding authors. | 10,960.6 | 2022-06-15T00:00:00.000 | [
"Environmental Science",
"Engineering"
] |
Towards Fully Jitterless Applications: Periodic Scheduling in Multiprocessor MCSs Using a Table-Driven Approach
In mixed criticality systems (MCSs), the time-triggered scheduling approach focuses on a special case of safety-critical embedded applications which run in a time-triggered environment. Sometimes, for these types of MCSs, perfectly periodical (i.e., jitterless) scheduling for certain critical tasks is needed. In this paper, we propose FENP_MC (Fixed Execution Non-Preemptive Mixed Criticality), a real-time, table-driven, non-preemptive scheduling method specifically adapted to mixed criticality systems which guarantees jitterless execution in a mixed criticality time-triggered environment. We also provide a multiprocessor version, namely, P_FENP_MC (Partitioned Fixed Execution Non-Preemptive Mixed Criticality), using a partitioning heuristic. Feasibility tests are proposed for both uniprocessor and homogenous multiprocessor systems. An analysis of the algorithm performance is presented in terms of success ratio and scheduling jitter by comparing it against a time-triggered and an event-driven method in a non-preemptive context.
Introduction
Safety-critical systems are ubiquitous in our everyday life, from medical equipment and smart vehicles to military applications. These types of systems usually imply, on the one hand, a real-time response due to direct interaction with the environment and, on the other hand, the inclusion of several critical functionalities. Providing a real-time response while carefully managing resources and providing temporal and spatial isolation for the critical applications imposes the need for carefully tailored real-time scheduling approaches.
In a special category of safety-critical applications which run in a time-triggered environment, perfectly periodical (i.e., jitterless) scheduling for certain critical tasks is needed. This need can appear from message synchronization problems [1], from signal processing applications [2][3][4][5], or simply from conditions imposed by different types of certifications [6]. Moreover, jitterless execution is desired for certain tasks in any embedded control system, as jitter only introduces difficulties in control loops [6]. As stated in [7], computer-controlled systems are designed assuming periodical sampling and zero or negligible jitter. In practice, the only jitter that can be relatively easily eliminated is sampling jitter, by using dedicated hardware. Input-output jitter is influenced by the scheduling policy. Guaranteeing the performance and stability of the controller in target systems also implies, besides a bounded response time, a guarantee that the input-output jitter is bounded within a so-called jitter margin [7].
Dealing with task execution in a time-triggered environment for classical real-time systems is done using time-triggered (clock-driven) scheduling techniques, among which the static table-driven approach stands out.
The table-driven approach is based on static schedulability analysis, generating a scheduling table that is used at run time to decide the moment when each task instance (also called a job) must begin its execution [8]. A special case of real-time systems is represented by the mixed criticality systems (MCSs), where tasks with different criticalities, categorized based on a finite set of criticality levels, share the same hardware [9]. The system is considered to run in a number of criticality modes, each mode giving a certain degree of execution time assurance [9].
While the classical real-time approach implies the construction of a single scheduling table, in MCSs, things become more complex due to the number of criticality modes. The change from one criticality mode to another corresponds to a transition from one precomputed scheduling table to another. Thus, in MCSs, there is one scheduling table per criticality level [10].
MCSs are a suitable variant that can be used with respect to providing a real-time response, while isolating the critical functionalities. If we analyze safety-critical systems in the case of MCSs, there are certain advantages of such table-driven approaches over event-driven scheduling: easier certification, given by the fact that table-driven schedulers are completely deterministic [10]; easier synchronization between tasks [1]; easier power management, as each power-mode corresponds to a criticality level, and each level uses its own table; and easier adaptation of real-time applications from different fields like automotive, avionics, etc., which already use table-driven approaches [11].
In this paper we propose an adaptation of a real-time, table-driven, non-preemptive scheduling method for MCSs which guarantees jitterless execution in a mixed criticality time-triggered environment for both uniprocessor and homogenous multiprocessor systems. We also provide a partitioning heuristic for this scheduling method for multiprocessor systems.
The main contributions of this paper are as follows: • A mixed criticality scheduling algorithm, FENP_MC (Fixed Execution Non-Preemptive Mixed Criticality), is proposed for jitterless task execution in a time-triggered environment; • An adaptation of the FENP_MC for homogenous multiprocessor systems, P_FENP_MC, is provided; • Feasibility tests are proposed for both uniprocessor and homogenous multiprocessor systems; • The algorithm performance is analyzed using the success ratio against the utilization of the task sets; • The proposed algorithm performance is compared against a time-triggered and an event-driven scheduling method in a non-preemptive context: Time-Triggered Merge (TT-Merge)/Energyefficient Time-Triggered Merge (Energy-efficient TT-Merge) [12] and Earliest Deadline First with Virtual Deadlines (EDF-VD) [13], a non-preemptive variant.
The rest of this paper is structured as follows: In Section 2 we briefly present the state of the art regarding scheduling in a mixed criticality time-triggered environment. In Section 3, we describe our proposed scheduling algorithm for uniprocessor mixed criticality systems, while in Section 4, we propose an adaptation for homogenous multiprocessor MCSs. In Section 5, we analyze the performance of the proposed algorithm in terms of success ratio and compare it against a popular one, namely, EDF-VD NP (EDF-VD in its non-preemptive form). We conclude our paper in Section 6, where we also propose some future research and development directions.
Related Work
Since Vestal's first mixed criticality model formalization [14], MCSs have attracted particular attention that has materialized in a set of scheduling algorithms that can be classified based on their Appl. Sci. 2020, 10, 6702 3 of 21 scheduling points (i.e., the moments in time when scheduling decisions are made) into three categories: event-driven, time-triggered, and hierarchical scheduling approaches.
An extensive survey on scheduling in MCSs [9] shows that, until recently, the scheduling problem was mainly focused on event-driven scheduling algorithms, despite the fact that there are also important endeavors regarding time-triggered and hybrid approaches. In event-driven schedulers, the scheduling points are defined by task completion and task arrival events. Examples of event-driven schedulers were introduced in [15][16][17][18][19]. A popular event-driven scheduling algorithm in MCSs is Earliest Deadline First with Virtual Deadlines (EDF-VD) for two criticality levels (Hi-high criticality and Lo-low criticality) [13]. The algorithm computes a virtual deadline for every Hi-criticality task if the system is in Lo mode. In Hi mode, Hi-criticality tasks are scheduled according to their real deadlines. This is done in order to balance the schedulability on different criticality levels, which results in better schedulability and run-time performance.
Due to their predictability, time-triggered approaches have become increasingly popular in the last couple of years, but the relevant works are still limited and much more could be expected in the future. Time-triggered schedulers make their scheduling decisions at predetermined points in time. Few papers tackling MC scheduling in time-triggered environments appeared only in the last decade [6,10,12,20,21]. In [20], a heuristic for constructing scheduling tables in a time-triggered environment was presented. The algorithm relies on backtracking to guide the search in a tree-based structure, and it consists of two heuristics: one for constructing the scheduling tables and the other for backtracking. Another method for constructing scheduling tables based on priority ordering is described in [10]. The technique incorporates "mode-change", which increases flexibility and system performance. In [21], a time-triggered scheduling algorithm for both independent and dependent MC jobs on an identical multiprocessor platform is proposed. Two separate scheduling tables are constructed for each processor to schedule dual-criticality tasks. Furthermore, the schedule is global, which means that jobs can be preempted in one processor and resume their execution in another processor. While the algorithms mentioned before are focused on predictability, the main goal of the algorithm proposed in [6] is to provide a low-jitter periodic schedule for mixed criticality messages in a time-triggered non-preemptive environment. Additionally, the algorithm introduced in [12] is meant to reduce the energy consumption. However, none of the algorithms mentioned above are focused on guaranteeing jitterless execution in a mixed criticality system, for all the active tasks, regardless of the system criticality mode. This is obviously a requirement for many safety-critical systems and represents the gap we aim to fill in this paper.
Hierarchical approaches combine both scheduling tables and event-driven scheduling methods, but the research on such systems is still in its preliminary stage. A hierarchical algorithm was introduced in [22] for scheduling MC real-time tasks on multiprocessor platforms. The method provides temporal isolation among tasks of different criticalities while allowing slack to be redistributed across different criticality levels. The same algorithm was implemented and tested on a standard real-time operating system (RTOS) in [23]. The experimental results showed that RTOS-related overheads are maintained at acceptable levels and the system is robust with respect to breaches of optimistic execution time assumptions.
FENP_MC: Fixed Execution Non-Preemptive Mixed Criticality
In this section we propose a scheduling algorithm for MCSs running in a time-triggered non-preemptive environment in response to the demand for jitterless task execution, with applicability to tasks used in signal processing, different types of synchronizations, control loops, etc. [1,6,7,24].
Perfectly Periodical Task Model
In real-time mixed criticality systems, periodical task execution models are based on the model originally proposed by Liu and Layland in 1973 [25]. This model imposes periodical behavior only Appl. Sci. 2020, 10, 6702 4 of 21 regarding the release time. In most of the systems based on this periodical task model, the actual execution starting time is pseudo periodical [26]. Another type of model, not very different from the one proposed by Liu and Layland, but focused on this special case of periodical real-time tasks, was firstly proposed in [24]. In this paper, the tasks are called FModXs (fixed execution executable modules). Starting from this model, we propose a simplified version of a perfectly periodical task model for real-time systems: where T i represents the period of periodical task i, D i is the time by which any job execution needs to complete, relative to its release time, C i represents the computation time, and S i gives the execution start time, relative to its release time. Following Vestal's approach to extend Liu and Layland's model to mixed criticality systems [14], we propose the following perfectly periodical task model for MCSs: where M i is a mixed criticality fixed execution task (MC-FModX), l represents the number of criticality levels, T i is the period for periodical tasks, D i is the time by which any job execution needs to complete, relative to its release time, L i represents the criticality level (1 being the lowest level), C i,L j is the computation time, and S i,L j is a vector of values-one per criticality level, for levels lower than or equal to the criticality level L i . C expresses the worst-case execution time (WCET) for each criticality level and S the execution start time, relative to its release time, for each level of criticality lower than or equal to the task criticality level L i . A task consists of a series of jobs, with each job inheriting the set of parameters of the task, (T i , D i , L i ), to which it adds its own parameters [27]. Thus, the kth job of task τ i is characterized as where a i,k represents the arrival time (a i,k+1 − a i,k ≥ T i ), d i,k is the absolute deadline (d i,k+1 = a i,k + D i ), C represents the execution time allocated by the system, which is dependent on the criticality mode of the system (for L j , C i,k = C i,L j ), s i,k gives the absolute execution start time of job k of task i which is also dependent on the criticality mode of the system, and T i , D i , L i have the same meaning as in the task model.
Perfectly Periodical Task Execution Model
Definition 1. We say that the execution of task i is perfectly periodical if for each job k of task i, J i, k , the difference between the absolute start times of jobs k and k − 1 is constant: In order to exemplify a perfectly periodical execution algorithm, let us consider the task set presented in Table 1 that needs to be scheduled on a single-processor system, with two criticality levels (Low-Lo and High-Hi). In Table 1, T i is the period of task i, D i represents the deadline, L i is the criticality level, C i,L Lo expresses the computation time for the Lo-criticality mode and C i,L Hi is the computation time for the Hi-criticality mode C i,L Hi . The start times of the tasks for the Lo-criticality case are depicted in Figure 1.
Appl. Sci. 2020, 10, x FOR PEER REVIEW 5 of 22 In an MCS, Equation (4) must be true for all the criticality modes of the system (i.e., for all criticality levels), as shown in Figure 2, where 0 represents low criticality and 0 represents the high criticality level. In MCSs, different tasks with different criticality requirements share the same hardware; thus, in these systems, missing a deadline varies in severity from task to task [14]. In order to protect critical tasks from the interference of less critical ones, different levels of criticality are assigned to each task and different levels of assurance are provided for tasks running in different running scenarios, called criticality modes.
As explained in [9], the classical model implies that the MCS starts in the lowest criticality mode. If all jobs behave according to the level of assurance imposed by this mode, then the system stays in that mode. On the other hand, if they attempt to execute for a longer time, then a criticality mode change occurs to a higher level of assurance.
Next, we present an example where this mode change must occur and how FENP_MC treats the situation. Let us consider the task set presented in Table 2. In an MCS, Equation (4) must be true for all the criticality modes of the system (i.e., for all criticality levels), as shown in Figure 2, where P0 Lo represents low criticality and P0 Hi represents the high criticality level. In an MCS, Equation (4) must be true for all the criticality modes of the system (i.e., for all criticality levels), as shown in Figure 2, where 0 represents low criticality and 0 represents the high criticality level. In MCSs, different tasks with different criticality requirements share the same hardware; thus, in these systems, missing a deadline varies in severity from task to task [14]. In order to protect critical tasks from the interference of less critical ones, different levels of criticality are assigned to each task and different levels of assurance are provided for tasks running in different running scenarios, called criticality modes.
As explained in [9], the classical model implies that the MCS starts in the lowest criticality mode. If all jobs behave according to the level of assurance imposed by this mode, then the system stays in that mode. On the other hand, if they attempt to execute for a longer time, then a criticality mode change occurs to a higher level of assurance.
Next, we present an example where this mode change must occur and how FENP_MC treats the situation. Let us consider the task set presented in Table 2. In MCSs, different tasks with different criticality requirements share the same hardware; thus, in these systems, missing a deadline varies in severity from task to task [14]. In order to protect critical tasks from the interference of less critical ones, different levels of criticality are assigned to each task and different levels of assurance are provided for tasks running in different running scenarios, called criticality modes.
As explained in [9], the classical model implies that the MCS starts in the lowest criticality mode. If all jobs behave according to the level of assurance imposed by this mode, then the system stays in that mode. On the other hand, if they attempt to execute for a longer time, then a criticality mode change occurs to a higher level of assurance.
Next, we present an example where this mode change must occur and how FENP_MC treats the situation. Let us consider the task set presented in Table 2. In Figure 3a, two scheduling tables are provided for two criticality modes (P0 Lo for the Lo-criticality mode and P0 Hi for the Hi-criticality mode, where Lo < Hi). The system starts in the Lo-criticality mode, using the P0 Lo scheduling table, but at Moment 4, task M 2 exceeds its time budget allocated for the Lo-criticality mode, and that causes a criticality mode switch (see Figure 3b). The system continues to run according to the P0 Hi scheduling table, starting with the zeroth time instance. In this Hi mode, all Lo-criticality tasks are dropped, and only Hi-criticality tasks are scheduled according to their Hi level of assurance computation time.
Appl. Sci. 2020, 10, x FOR PEER REVIEW 6 of 22 instance. In this Hi mode, all Lo-criticality tasks are dropped, and only Hi-criticality tasks are scheduled according to their Hi level of assurance computation time.
Theoretical Aspects
Next, we present an exact feasibility test for perfectly periodical execution in a non-preemptive context. We call this type of execution Fixed Execution Non-Preemptive (FENP). The test is analogous with that provided in [24].
Theoretical Aspects
Next, we present an exact feasibility test for perfectly periodical execution in a non-preemptive context. We call this type of execution Fixed Execution Non-Preemptive (FENP). The test is analogous with that provided in [24].
Let M = {M 1 , M 2 , . . . , M n } be a set of n independent MC fixed execution tasks (MC-FModXs), sorted in nondecreasing order of their periods. The MC tasks are characterized by the same parameters as those in Equation (2); thus, (5) Definition 2. The task set M is FENP schedulable in a mixed criticality system if, and only if, the task set M is FENP schedulable for each criticality level L j , where j ∈ 1, . . . , l. Definition 3. The task set M is FENP schedulable in a mixed criticality system for criticality level L j if all the tasks in the set M with criticality equal to or higher than L j are FENP schedulable using the next feasibility test. Only the parameters for level L j (C k,L j and S k,L j ) are considered in this case.
The feasibility tests are based on an execution mapping function, which is defined next.
Definition 4.
A fixed-execution mapping of task M k over the period of task M i is a function of the form: where τ represents a discrete time function with values between 0 and T i , GCD(T i , T k ) computes the greatest common divisor of the periods of tasks M i and M k , and M k (τ + x·T i ) represents the execution function of M k : where mod is the modulo operator and σ is the unity step function: For a certain criticality level L j , Equation (7) becomes Feasibility test: For a given criticality level L j , a subset M L j of tasks with criticality level L k ≥ L j are schedulable if, and only if, where t q is a discrete time instant between 0 and the latest possible start time of task M k and ∆ M i /M k (τ) is defined by Equation (6).
Execution Examples
For a better understanding, the execution mapping function for the task set example in Table 1 is depicted in Figure 4 for a system with two criticality modes. Table 1 in (a) Lo-criticality mode and (b) Hi-criticality mode.
Implementation Guidelines
Algorithm 1 represents the pseudocode form of the feasibility test, which is an adaptation of [24] for MCSs. Table 1 in (a) Lo-criticality mode and (b) Hi-criticality mode.
Implementation Guidelines
Algorithm 1 represents the pseudocode form of the feasibility test, which is an adaptation of [24] for MCSs.
Algorithm 1 Feasibility_test
Input: Γ q (the scheduling table o f processor q), critLevel (the system criticality mode) Output: FAILURE f or a negative f easibility test, SUCCESS otherwise 1 sort Γ q according to T i in a non-decreasing order 2
Theoretical Aspects
While event-driven scheduling approaches can only guarantee pseudo periodical execution, a carefully designed time-triggered approach can offer a solution for perfectly periodical tasks if we ignore the small jitter introduced by the criticality mode switch.
Next, we propose an adaptation to MCSs of the real-time table-driven scheduling algorithm FENP [24] for single processors and its partitioned P_FENP [28] variation for multicore systems.
The Fixed Execution Non-Preemptive (FENP) algorithm has been designed to provide maximum predictability for the execution of perfectly periodical tasks (FModXs) in a non-preemptive context.
Because the FENP algorithm follows Equation (4), each start time of job J i,k of task i can be determined knowing the start time of the previous job J i,k−1 : Moreover, s k can be statically determined in a direct manner: By designing a static scheduler based on Equations (11) and (12), we obtain jitterless task execution. The FENP_MC scheduler creates, in an offline phase, a dispatch table for each criticality level of the system based on Equation (11) and on the feasibility tests firstly proposed in [24] for real-time operation and further developed and presented in the next section for mixed criticality systems.
The dispatch table is represented by an array of structures: where Γ q is sorted in nondecreasing order of start time for each job in the system for a scheduling period. Tables 3 and 4 illustrate the Lo-criticality mode dispatch table and the Hi-criticality mode dispatch table, respectively, for the task set presented in Table 1: Table 3. Lo-criticality mode dispatch table for the task set example in Table 1.
Implementation Guidelines
The execution mapping function, presented in Algorithm 2 is used by the function for computing start times in Algorithm 3. Both algorithms are adaptations of [24] for MCSs.
Theoretical Aspects
For mixed criticality multicore systems, we propose an adaptation of the P_FENP which we call P_FENP_MC. The mapping algorithm is similar to that proposed in [28].
P_FENP_MC consists of two phases, namely, an offline phase and an online phase. The task partitioning to processors is carried out offline. A feasibility test is then conducted on each processor, followed by creating the table for that processor. Tasks are scheduled according to the dispatch tables in the online phase. The system starts in Lo-criticality mode; therefore, tasks will be scheduled according to the Lo-criticality dispatch table. Once a job executes beyond its Lo-criticality WCET, the system switches to Hi-criticality mode and tasks will be scheduled in compliance with the Hi-criticality dispatch table. For each processor dispatch table, tasks are sorted in nondecreasing order of their start times. Next, the task with the lowest start time M i is extracted from the dispatch table and its first instance J i,0 is executed. After job J i,0 finishes executing, the start time of task M i is recalculated. M i is then added to the corresponding dispatch table based on Equation (11), and the task with the lowest start time is again extracted from the sorted list of tasks and executed.
The partitioning algorithm proceeds as follows: Each processor has a scheduling table associated to it. Tasks from the task set are selected one by one and added in each scheduling table. If the scheduling table was initially not empty, two conditions are verified: The current processor utilization, which is the sum of utilizations of all the tasks from the scheduling table associated with the corresponding processor and must not exceed 1 [29]: where q = 1, . . . , m. II. The schedulability test performed for the task subset on the processor must be positive.
If the two conditions are met, the task will remain in the scheduling table, the processor utilization is updated, and the next task is removed from the ready queue and tested. If the scheduling table was initially empty, the task is added without verifying the two conditions and the processor utilization is updated.
If one of the two conditions returns FAILURE, the task is removed from the scheduling table and added in the next processor scheduling list, where the same test is performed.
Execution Examples
In order to illustrate the task partitioning method described in Section 4.1 we provide an example of six mixed criticality tasks scheduled on a dual-criticality system with two processors. Table 5 contains the timing parameters of the tasks and the processor utilization for each criticality level. In this case, P_FENP_MC provides the following results: tasks M 1 , M 4 , and M 6 are assigned to the first processor (P 0 ) with a Lo-criticality total utilization of 0.5 and a Hi-criticality total utilization of 0.5, while tasks M 2 , M 3 , and M 5 are partitioned to P 1 with a Lo-criticality total utilization of 0.445 and a Hi-criticality total utilization of 0.347. Scheduling for both the Hi-and Lo-criticality modes is illustrated in Figure 5. and a Hi-criticality total utilization of 0.347. Scheduling for both the Hi-and Lo-criticality modes is illustrated in Figure 5. It must be noted that for Condition I and for calculating the total utilization on each processor, we use Hi-criticality total utilization for the Hi-criticality WCET and Lo-criticality total utilization for the Lo-criticality WCET. Therefore, Condition I must be verified for both the Hi-criticality total utilization and the Lo-criticality total utilization. For Condition II, both the Lo-criticality WCET and the Hi-criticality WCET are considered.
Implementation Guidelines
Next, the two phases of the algorithm are described using diagrams. In the offline phase, the It must be noted that for Condition I and for calculating the total utilization on each processor, we use Hi-criticality total utilization for the Hi-criticality WCET and Lo-criticality total utilization for the Lo-criticality WCET. Therefore, Condition I must be verified for both the Hi-criticality total utilization and the Lo-criticality total utilization. For Condition II, both the Lo-criticality WCET and the Hi-criticality WCET are considered.
Implementation Guidelines
Next, the two phases of the algorithm are described using diagrams. In the offline phase, the dispatch tables for each processor are created using the mapping function and the feasibility test. A diagram of the P_FENP_MC offline phase is presented in Figure 6. The online phase uses the dispatch tables created in the previous phase and consists of the actual scheduling algorithm. On each processor, its dispatch table is used and updated dynamically. In this table, jobs are sorted in nondecreasing order of their start times and then, one by one, extracted from the set in order to be executed. Once a task instance is executed, the start time of the next instance is calculated using Equation (11) and inserted in the dispatch table so that the table remains sorted by start times. Figure 7 depicts the online phase of the P_FENP_MC algorithm. The online phase uses the dispatch tables created in the previous phase and consists of the actual scheduling algorithm. On each processor, its dispatch table is used and updated dynamically. In this table, jobs are sorted in nondecreasing order of their start times and then, one by one, extracted from the set in order to be executed. Once a task instance is executed, the start time of the next instance is calculated using Equation (11) and inserted in the dispatch table so that the table remains sorted by start times. Figure 7 depicts the online phase of the P_FENP_MC algorithm.
The online phase uses the dispatch tables created in the previous phase and consists of the actual scheduling algorithm. On each processor, its dispatch table is used and updated dynamically. In this table, jobs are sorted in nondecreasing order of their start times and then, one by one, extracted from the set in order to be executed. Once a task instance is executed, the start time of the next instance is calculated using Equation (11) and inserted in the dispatch table so that the table remains sorted by start times. Figure 7 depicts the online phase of the P_FENP_MC algorithm. The feasibility test conducted on each processor is shown in Algorithm 4, while Algorithm 5 computes the processor dispatch tables.
end for 28 return SUCCESS
Random Task Set Generation
Our experiments were conducted upon randomly-generated task sets in a dual-criticality system (Lo, Hi). A slight modification of the workload-generation algorithm introduced by Guan et al. [30] was used for the random task set generation process [31]. The parameters for each new task M i are generated as follows:
•
Criticality level: L i = Hi with probability P Hi ; otherwise, L i = Lo. • Period: T i is drawn using a uniform distribution over [10,50]. U i,L Hi (17) where π is the task set and Hi(π) is a subset of π that contains only the Hi-criticality tasks.
[U L , U U ]: The range of task utilization, with 0 ≤ U L ≤ U U ≤ 1.
[Z L , Z U ]: The range of the ratio between the Hi-criticality utilization of a task and its Lo-criticality utilization, with 0 ≤ Z L ≤ Z U .
• WCET for criticality level Lo: WCET for criticality level Hi:
Success Ratio
In this section we undertake an experimental evaluation of our algorithm P_FENP_MC by comparing it to another known scheduling method, P-EDF-VD. For the latter, task partitioning is done with regard to Condition (14) under the First-Fit Decreasing (FFD) [32] heuristic with sorting as the period. A non-preemptive version of the EDF-VD method is used. For P_FENP_MC, task mapping is done according to the heuristic described in Section 4.
The parameters used in generating the task sets are provided in the graph caption. Each datapoint was determined by randomly generating 100 task sets. In Figure 8, the task set utilization bound (x-axis) ranges from 0.2 to 0.8 times the number of processors divided by 2, in steps of 0.1, while in Figure 9, the number of processors (x-axis) ranges from 2 to 10 in steps of 2. The number of tasks in a task set will vary according to the task set utilization bound, being at least 3 times and at most 9 times the utilization bound. Thus, a higher value on the x-axis increases the number of tasks in a task set, while a lower value decreases it.
where π is the task set and Hi(π) is a subset of π that contains only the Hi-criticality tasks.
Success Ratio
In this section we undertake an experimental evaluation of our algorithm P_FENP_MC by comparing it to another known scheduling method, P-EDF-VD. For the latter, task partitioning is done with regard to Condition (14) under the First-Fit Decreasing (FFD) [32] heuristic with sorting as the period. A non-preemptive version of the EDF-VD method is used. For P_FENP_MC, task mapping is done according to the heuristic described in Section 4.
The parameters used in generating the task sets are provided in the graph caption. Each datapoint was determined by randomly generating 100 task sets. In Figure 8, the task set utilization bound (x-axis) ranges from 0.2 to 0.8 times the number of processors divided by 2, in steps of 0.1, while in Figure 9, the number of processors (x-axis) ranges from 2 to 10 in steps of 2. The number of tasks in a task set will vary according to the task set utilization bound, being at least 3 times and at most 9 times the utilization bound. Thus, a higher value on the x-axis increases the number of tasks in a task set, while a lower value decreases it. As the number of processors increases (see Figure 9), tasks are better scheduled in terms of success ratio when using our proposed algorithm. With more available resources there is a higher chance each task is partitioned on a suitable processor with regard to Conditions I and II (see Section 4.1). The FFD does not run a schedulability test when mapping each task; therefore, if a high number of tasks are partitioned on a single processor, the local scheduling algorithm may return a negative schedulability test.
Jitterless Execution-Test Case
In order to illustrate the jitterless execution of a task set scheduled with P_FENP_MC and to compare the task execution with that under other scheduling algorithms, we provide an example of three mixed criticality tasks scheduled on a dual-criticality system with one processor. Table 6 contains the timing parameters of the tasks. Scheduling for both the Hi-and Lo-criticality modes is illustrated in Figure 10. As the number of processors increases (see Figure 9), tasks are better scheduled in terms of success ratio when using our proposed algorithm. With more available resources there is a higher chance each task is partitioned on a suitable processor with regard to Conditions I and II (see Section 4.1). The FFD does not run a schedulability test when mapping each task; therefore, if a high number of tasks are partitioned on a single processor, the local scheduling algorithm may return a negative schedulability test.
Jitterless Execution-Test Case
In order to illustrate the jitterless execution of a task set scheduled with P_FENP_MC and to compare the task execution with that under other scheduling algorithms, we provide an example of three mixed criticality tasks scheduled on a dual-criticality system with one processor. Table 6 contains the timing parameters of the tasks. Scheduling for both the Hi-and Lo-criticality modes is illustrated in Figure 10. The jitter of a task is calculated as the difference between the maximum and minimum separation between two consecutive jobs of the same task M i [33] and is given by (18): where J i,k is the kth job of task M i . The jitter of a task is calculated as the difference between the maximum and minimum separation between two consecutive jobs of the same task [33] and is given by (18): where , is the kth job of task . Table 7 contains the jitter values for the task set example scheduled by P_FENP_MC, P-EDF-VD [13] (non-preemptive variant), TT-Merge, and Energy-efficient TT-Merge [12]. Table 7. Jitter values of the task set example scheduled using four algorithms: P_FENP_MC, P-EDF-VD (non-preemptive variant), TT-Merge, and Energy-efficient TT-Merge. As can be seen from the table above, three of the algorithms (P_FENP_MC, P-EDF-VD, and Energy-efficient TT-Merge) provided jitterless execution for the first task ( ), but only P_FENP_MC delivered a scheduling table for jitterless execution of all the tasks in the system. Table 7 contains the jitter values for the task set example scheduled by P_FENP_MC, P-EDF-VD [13] (non-preemptive variant), TT-Merge, and Energy-efficient TT-Merge [12]. Table 7. Jitter values of the task set example scheduled using four algorithms: P_FENP_MC, P-EDF-VD (non-preemptive variant), TT-Merge, and Energy-efficient TT-Merge. As can be seen from the table above, three of the algorithms (P_FENP_MC, P-EDF-VD, and Energy-efficient TT-Merge) provided jitterless execution for the first task (M 1 ), but only P_FENP_MC delivered a scheduling table for jitterless execution of all the tasks in the system.
Discussion
The jitterless task execution achieved by designing perfectly periodical scheduling for all the criticality levels in an MCS brings advantages in applications regarding message synchronization, signal processing, control applications, or simply different types of certifications. Having jitterless task execution and respecting the time constraints imposed by MCSs, we have full determinism and predictability regarding task execution.
Still, the tradeoff for jitterless scheduling on a uniprocessor is a lower success ratio value compared to using an event-driven method. An algorithm such as EDF-VD can reach up to 80% success ratio for a total utilization factor of 1 for the lowest criticality mode [13]. However, comparative results are harder to obtain with a time-triggered algorithm without using any resource enhancements, such as frequency scaling, for instance [10,12].
For a multiprocessor platform, the success ratio is not only influenced by the scheduling algorithm but also by the function used to map tasks to processors (see Figure 8). If we use a proper partitioned mapping function, we have comparative results between time-triggered and event-driven schedulers in terms of success ratio. As illustrated in Figure 8, our proposed scheduling algorithm obtained better schedulabitily results by using a well-tailored partitioned function in comparison to P-EDF-VD (with an FFD partitioned mapping function).
Conclusions and Future Work
As the number and complexity of safety-critical real-time applications increase, special attention needs to be paid to developing suitable and reliable scheduling techniques, especially for safety-critical systems running in time-triggered environments. In this paper we proposed a scheduling method for jitterless execution of hard real-time tasks in mixed criticality systems. Our approach is based on the real-time FENP scheduling algorithm and specifically tailored to MCS requirements. Additionally, feasibility tests were proposed for both uniprocessor and homogenous multiprocessor systems, and the algorithm performance was compared against an event-driven scheduling algorithm in a non-preemptive context, P-EDF-VD.
As future work, we intend to further investigate implementations of this scheduling methodology in RTOSs and to analyze the performance improvements of jitter-sensitive applications scheduled with P_FENP_MC in domains such as system control, robotic systems, and real-time communications. | 8,722 | 2020-09-25T00:00:00.000 | [
"Business",
"Computer Science"
] |
Gas and crystal structures of CCl 2 FSCN *
was structurally studied in the solid and in the gas phase by means of single-crystal X-ray (XRD) and gas electron diffraction (GED), respectively. In the gas phase the title molecule adopts two stable conformers, described by the FC e SC dihedral angle. The gauche - conformer (FC bond with respect to the SC bond) is more stable than the anti-conformer. In this work we present the fi rst experimental evidence for the existence of the anti -CF 2 ClSCN form. In the solid state only the most stable gauche -conformer was found. Intermolecular interactions were detected in the crystal structure and analyzed. A structural comparison of the results with those of related species as CCl 2 FSCN, CCl 3 SCN and CH 2 ClSCN is presented.
Introduction
The family of thiocyanate compounds is under permanent investigation.Some experimental IR and Raman spectroscopy data as well as theoretical investigations for this species have been reported twenty years ago by our group [1].In this work, gauche-CCl 2 FSCN was experimentally detected by means of Raman polarization measurements supplemented by computational chemistry calculations.On the other hand, the anti-conformer could only be computed by quantum-chemical calculations.Experimental support for the less stable anti-conformer is now finally provided by the gas phase electron diffraction study presented in this work.Complementarily, X-ray diffraction measurements allow determining the crystal structure of the less symmetric and more abundant gauche-conformer.Some intermolecular interactions in terms of geometrical parameters involving halogen and chalcogen atoms have been determined.Finally, a comparison of the gas and crystal-phase structures between CCl 2 FSCN, CCl 3 SCN [2] and CH 2 ClSCN [3] is presented.
Synthesis
CCl 2 FSCN was prepared by the reaction of CCl 2 FSCl with KCN in ether [4].Purification was performed by several trap-to-trap distillations [5].The identity and purity of dichlorofluoromethyl thiocyanate were verified using infrared spectroscopy.
Quantum-chemical calculations
The Gaussian 03 suite of programs [6] was used to compute DFT [7] and MP2 [8] calculations.The existence of minima on the potential hyper-surface was proved computing the corresponding harmonic frequencies after each geometry optimization.For the GED structural analyses analytical harmonic and numeric cubic force fields were calculated at the B3LYP/6-31G(d) and O3LYP/cc-pVTZ levels of the theory.These results were then used to calculate mean-square interatomic vibrational amplitudes and vibrational corrections to the equilibrium structure by the SHRINK program [9e11].Coupled-cluster CCSD and CCSD(T) [12] analytical gradient-powered geometry optimizations were performed using the Cfour program package [13].
In order to search the nature of the halogen or chalcogen intermolecular interactions, NBO computational calculations were performed by means of the NBO package contained in the Gaussian 03 program [14].The second-order perturbation stabilization energies (E) associated to the charge transfer between electron donor and acceptor orbitals corresponding to adjacent molecules were calculated at the NBO B3LYP/6-311 þ G(d) level of approximation.
Gas electron diffraction
Gas electron diffraction patterns were measured using the improved Balzers Eldigraph KD-G2 gas-phase electron diffractometer [15] at Bielefeld University.The experimental details are depicted in Table S1.Diffraction images were measured on the Fuji BAS-IP MP2 2025 imaging plates, which subsequently were scanned using the calibrated Fuji BAS-1800II scanner.The intensity curves (see Fig. S1 e S2) were retrieved from the scanned diffraction images by applying the method described earlier [16].Sector functions and electron wavelengths were calibrated as usually using benzene diffraction patterns, recorded along with the substance under investigation [17].Experimental amplitudes were refined in groups (see Tables S1 e S2).For this purpose scale factors (one per group) were used as independent parameters.The ratios between different amplitudes in one group were fixed at the theoretical values.
X-ray diffraction
Crystallography measurements were carried out at Essen-Duisburg University.A four circle Nicolet R3m/V diffractometer, with a Mo-Ka source (l ¼ 0.71073 Å) was used [18].Crystal structures were solved by the Patterson method and refined with SHELXTL-Plus Version SGI IRIS indigo Software [19].The sample was placed in a 0.2e0.3mm diameter glass capillary, which was closed at both ends.Using a coupled microscope integrated to the diffractometer, the formation of microcrystals (polycrystalline) was observed while decreasing the temperature of the sample.The sample was cooled at about 15 C below the melting point, and, with a melting zone by zone procedure and subsequent recrystallization caused by heating with an infrared laser focused to a very small area of the sample, a single crystal suitable for an X-ray diffraction experiment grew.A detailed description of this technique is reported in the literature [20].Table 1 lists the parameters of the XRD experiments.
CCDC 1021027 contains the supplementary crystallographic data for this paper.These data can be obtained free of charge from The Cambridge Crystallographic Data Centre via www.ccdc.cam.ac.uk/data_request/cif.
Computational chemistry
The structure of dichlorofluoromethyl thiocyanate (Fig. 1) was quantum-chemically computed using B3LYP/cc-pVTZ and MP2/cc-pVTZ levels of approximations.The potential-energy functions against the internal rotation around the 4(FeCeSeC) dihedral angle is shown in Fig. 2. As can be seen from Fig. 2, the gaucheconformation with a 4(FCeCS) dihedral angle around 60 is more stable than the anti-form, with a 4(FeCeCeS) dihedral angle near 180 .Maxima in the potential energy curves are observed for structures with eclipsed orientation between the halogens of CCl 2 F and the CN group of thiocyanate SCN.
Despite the qualitative agreement within both methods, the B3LYP level, predicts lower energy barriers than the MP2 method [21].
The rotational barrier computed for the gauche/anti conversion in CCl 2 FSCN with the mentioned approximation levels agrees with the computed with HF/3-21G*, HF/6-31G* and MP2/6-31G* [1].Moreover, it is very similar to that obtained in CCl 3 SCN [2] and more than twice than the corresponding barrier in CH 2 ClSCN [3].
The structures of the two minima were then fully optimized and their frequencies were computed at the same levels of theory.Table 2 lists the relative energies (DE) and Gibbs free energies (DG 0 ) along with the conformational population, which was calculated using the Boltzmann distribution taking into account the degeneration of the gauche-conformer.The concentration of the anticonformation in CCl 2 FSCN (ca.7%) is about a half than the concentration of an anti-rotamer in CH 2 ClSCN (between 15 and 18%) [3].
Furthermore, we also applied the most accurate CCSD(T) method to the CCl 2 FSCN molecule in order to compute theoretical structures for comparison with the experimental data (see Table 3).
Molecular structures
The most relevant structural parameters of CCl 2 FSCN are given in Table 3.
Molecular structure in the gas phase
An experimental determination of the gas-phase structure of CCl 2 FSCN was determined by means of gas electron diffraction (GED).Relevant geometric parameters obtained by GED, solid state X-ray diffraction (XRD) and computational calculations are listed together in Table 3 for comparison.Different theoretical models (100% gauche, 100% anti and a mixture of both conformers) were used in the refining process of CCl 2 FSCN.The radial distribution function as well as the corresponding structural R-factors obtained applying the different models are given in Fig. 3.
Even though the gauche-conformer is by far the most abundant, molecules with anti-conformation were also detected in the gas phase at the 300 and 315 K (detailed in Table S1).A straightforward estimation of the conformational composition from the radial distribution function is rather difficult in the present case.The N4eF5 contribution of the anti-conformer at r ¼ 4.8 Å is not as conclusive as in the refining of the equivalent CleN contribution in the CH 2 ClSCN molecule [3].The present contribution is difficult to observe not only due to the neighboring broad N4eCl7 contribution of the gauche-conformer but also due to experimental noise (see Fig. 3).On the other hand, all other interatomic distances are similar for the gauche-and anti-rotamers.Consequently, the value of the least-squares functional in the structural analysis is not sensitive (see Fig. S3) to the changes in conformational composition of CCl 2 FSCN.Additional difficulties in the refinement of the conformations ratio arise due to significant correlations as described in the experimental section.The finally refined ratio between gauche and anti-conformers of CCl 2 FSCN in the gas phase at 305 K was 79(11):21(11)%, respectively.The above described experimental results are generally in agreement with the computed values assuming the limited predicting power of the used approximations.Relevant geometric parameters obtained with XRD, GED and theoretical calculations (CCSD(T)/cc-pVTZ) are compared in Table 3.
Table 4 lists a comparison of the geometrical parameters of CH 2 ClSCN, CCl 3 SCN and CCl 2 FSCN.According to our previous work, simple thiocyanates CH 2 ClSCN, CCl 3 SCN and CCl 2 FSCN present similar C^N bond lengths in the gas phase [2,3].Moreover, this parameter is rather independent on the conformation.The differences between the CeCl bond lengths in this series of compounds are not significant.The longest C1eS2 bonds were observed in CCl 3 SCN (r g ¼ 1.839(2) Å) and anti-CCl 2 FSCN (r g ¼ 1.839(13) Å); this can be explained by steric repulsions of the CCl 3 or CCl 2 F with the SCN group.The gauche-conformer of CH 2 ClSCN presents the shortest C1eS2 bond.This parameter was found to be dependent on the orientation of the CeCl bond with respect to the S2eC3 bond; acquiring a higher value for the anti-conformation.This is also the case for the CCl 3 SCN molecule (see Table 4).
3.4.Molecular structure in the solid state and intermolecular contacts 3.4.1.Crystal structure of CCl 2 FSCN CCl 2 FSCN crystallizes in the orthorhombic space group P2 1 2 1 2 1 , and contains four molecules in the unit cell.The molecules in the crystal adopt the gauche-conformation with a dihedral angle d(FeCeSeC) of 61.5(4) (Fig. 4).The computed dihedral angle (62.0 ) also reproduces the crystallographic value.
The most important intermolecular interactions observed in the crystal phase are represented in Fig. 5 and listed in Table 5.Two non-bonding interaction types, N,,,S (Fig. 5, part b) and N,,,Cl (part c), determine the arrangement of the title species in the crystal.The N,,,S contact of 3.19 Å is considerably shorter than the sum of the van der Waals radii of the involved atoms (3.35 Å).The experimental value of the N$$$SeC angle of 159 together with the results of the NBO calculations computing an interaction between the nitrogen lone pair (as an electron donor) and the s*(SeC) molecular orbital (as an electron acceptor) allowed us classifying this contact as a chalcogen intermolecular interaction being the s*(SeC) the so-called "s-hole" [22].On the other side, the intermolecular contact between nitrogen and chlorine atoms is responsible for the formation of zigzag head-to-tail CCl 2 FSCN chains.Adjacent chains are alternately joined through two equivalent N,,,S chalcogen-bond type interactions.The slight a Energies are relative to those of gauche conformers.Gibbs free energies were calculated using standard "uncoupled harmonic rotator e rigid oscillator" approximation.The abundance was calculated from a free Gibbs energy at 305 K.
shortening of this bond with respect to the sum of the van der Waals radii of the involved atoms does not enable us to classify this interaction.Nevertheless, its value of 167.5 for the N$$$CleC angle and the results from the NBO calculations computing an electron donoreacceptor interaction between the nitrogen lone pair and the s*(CleC) orbital, respectively, suggest that this interaction is perhaps a weak halogen contact.As a consequence of the chalcogen intermolecular interaction previously described, the XRD refinement shows a "T" shaped coordination at the sulfur atom for this compound (Fig. 6).A similar coordination environment for the sulfur atom was reported for CH 2 (SCN) 2 [23].
Chalcogen N,,,S interactions are also present in dichloromethyl and trichloromethylthiocyanate molecules [2,3].N,,,Cl contacts are interesting to be compared as well.Evidence for this interaction is found in both, CCl 2 FSCN and CCl 3 SCN, but not in CH 2 ClSCN.
Even though crystalline CCl 2 FSCN does not show Cl,,,S interactions, these intermolecular contacts were detected in the other two thiocyanate species.In CH 2 ClSCN this interaction presents a chalcogen-bond-type behavior, while CCl 3 SCN evidences a Cl,,,S halogen contact.
Conclusion
The crystal structure of CCl 2 FSCN contains solely the gaucheconformer while in the gas phase both gauche-and anti-conformations are present in equilibrium at room temperature, with the e r e equilibrium distance between the positions of atomic nuclei corresponding to the minimum of the potential energy.f r g average internuclear distance at the temperature of the experiment.g r a distance between vibrationally averaged positions of atoms, or better: centers of electron densities.gauche-conformer being the most stable.The parameters of the gauche-structure in both, solid and gas phase, are in good agreement.The C^N bond lengths remain almost constant in CH 2 ClSCN, CCl 3 SCN and CCl 2 FSCN in the gas phase (r e ¼ 1.160(5), 1.158(9), 1.160(5) Å), but they differ markedly from the solid-state values (1.145(5), 1.144(6), 1.144(9) Å, all 3 e.s.d.s), which themselves as a group are similar.This seeming systematic solid/gas difference finds its explanation in the large anisotropy of valence electron density of the terminal triply-bonded nitrogen atoms and the consequent inability of the electron-density based X-ray diffraction method to represent the nuclear position of nitrogen.The lengths of both CeS bonds are also similar in the gas and crystalline phases.
The CeCl bond lengths are also very similar within error limits in both gas and solid state, and the same applies for the CeF bond lengths.Similarly, the angles found in gas and in solid states resembles.The qualitative relationships between different parameters are the same both in crystal and in gas, despite the fact that data were acquired with a different technique in each phase and the parameters are specifically defined for each case.In the crystal structure several intermolecular interactions, in particular halogenand chalcogen-type interactions, were observed and analyzed by means of the respective structural parameters and by quantumchemical calculations.
Fig. 1 .
Fig. 1.Molecular structure and atom numbering scheme of the gauche-and anticonformers of CCl 2 FSCN.
Fig. 3 .
Fig. 3. Experimental (circles) and model (line) radial distribution functions of CCl 2 FSCN.The difference curves for the tested models are also given.Subscript letters a and g indicate terms related only to anti and gauche conformers, correspondingly.
Table 1
Details of the X-ray diffraction experiments for CCl 2 FSCN.
Table 2
Relative total DE and free Gibbs DG energies (kcal mol À1 ) and abundancies c (%) of anti-conformer of CCl 2 FSCN.
Table 3
Experimental and theoretical structural parameters of CCl 2 FSCN.a The parameters are given in Å and deg.Threefold standard deviations are in parentheses.Superscript numbers 1, 2, …, 9 indicate groups, in which parameters were refined with fixed differences.The CCSD(T) calculation was performed with cc-pVTZ basis set.
a b Fixed parameter, see text for details.c Dependent parameter.d Conformational composition.Theoretical values were calculated at 305 K using the Boltzmann distribution and total energies. | 3,395 | 2017-03-15T00:00:00.000 | [
"Chemistry"
] |
Resolution and implementation of the nonstationary vorticity velocity pressure formulation of the Navier–Stokes equations
This paper deals with the iterative algorithm and the implementation of the spectral discretization of time-dependent Navier–Stokes equations in dimensions two and three. We present a variational formulation, which includes three independent unknowns: the vorticity, velocity, and pressure. In dimension two, we establish an optimal error estimate for the three unknowns. The discretization is deduced from the implicit Euler scheme in time and spectral methods in space. We present a matrix linear system and some numerical tests, which are in perfect concordance with the analysis.
Introduction
The nonlinear Navier-Stokes equations model the flow of a viscous and incompressible fluid such as water, air, and oil in stationary or nonstationary states. Those equations were and are the subject of a large number of research papers. The modification of any of the parameters associated with these equations (the domain on which the equations are posed, boundary conditions, nature of the data, variational formulation, time dependence, choice of the approximation method, etc.) leads to new research problems. In the initiating paper [1] the authors handle the Stokes and Navier-Stokes equations with nonstandard boundary conditions on the velocity and the pressure for a convex or regular domain. Our interest concerns the nonstationary Navier-Stokes equations with boundary conditions on the normal component of the velocity and the tangential components of the potential vector vorticity. Such a problem allows us to model, for instance, two fluids separated by a membrane or the flow in a network of pipes. The equivalent variational formulation of the Navier-Stokes equations provided with these boundary conditions admits three unknowns: the vorticity, velocity, and pressure [2][3][4][5]. This formulation has been studied in several works for the finite element discretization of the Stokes and Navier-Stokes problems in the stationary case (see [3,6]). We cite in the same context the works of Bernardi et al. [7,8], which present a posteriori error analysis of time-dependent Stokes and Navier-Stokes problems. The extension to spectral discretization has been handled in [9,10] for the stationary Stokes and Navier-Stokes problems and in [11,12] for the nonstationary case.
In this paper, we propose a discretization of such a formulation by the implicit Euler scheme in time and the spectral method in space in the square ] -1, 1[ 2 for dimension two and in the cube ] -1, 1[ 3 for dimension three. The spectral method can be easily extended to more complex geometries thanks to the arguments in [13,14]. In dimension two, we prove an optimal error estimate for the vorticity and the velocity and a quasioptimal error for the pressure, using the theorem of Brezzi, Rappaz, and Raviart [15]. However, the extension to dimension three remains a difficult problem.
We describe a numerical algorithm used to solve the discrete nonlinear problem. We also present clearly the matrices and the linear system derived from the discrete problem. This linear system is solved using the GMRES method since the global matrix is not symmetric [16].
Finally, we present some numerical experiments, which confirm a good convergence of our algorithm and the benefit of this formulation. These numerical results are coherent with the theoretical ones. This paper is organized as follows: • In Sect. 2, we present a continuous problem and some regularity results.
• Sect. 3 is devoted to a description of time and full discrete problems.
• An error estimate is derived in Sect. 4.
• In Sect. 5, we describe an iterative algorithm used to solve a nonlinear discrete problem and a linear matrix system. We conclude by presenting some numerical experiments.
A continuous problem and some regularity results
We consider an open bounded simply connected domain of R d (d = 2, d = 3) with Lipschitz and connected boundary . Let x = (x, y) for d = 2 or x = (x, y, z) for d = 3 be the Cartesian coordinates. In this paper, we mainly focus on the following nonstationary Navier-Stokes system: where v and P are the unknowns velocity and pressure, f represents the density of the body forces, ν > 0 is the viscosity, and n is the unit outward normal vector on the boundary . We define the boundary operator ς such that ς(curl v) is the boundary of curl v in dimension d = 2 or the boundary of the tangential components of the curl v in dimension We introduce the unknown vorticity θ = curl v (see [2,3]), and since v.∇v is equal to θ × v + 1 2 grad |v| 2 , system (1) is equivalent to the system The dynamical pressure p is defined as We assume that the following condition is satisfied by the initial velocity v 0 and the initial vorticity θ 0 = θ (x, 0): We define the space and its subspace H 0 (div, ) = u ∈ H(div, ); u.n = 0 on .
Let B a separable Banach space. We need to define the following space to handle the nonstationary Navier-Stokes system: which is a Banach space with the norm We also introduce the Banach space L(B) of the continuous linear functions from B to R with the norm If the data f belongs to the space L 2 (0, T; H 0 (div, ) ), where (H 0 (div, ) is the dual space of H 0 (div, ) (see [17] for more detail ), then problem (2) is equivalent to the following variational formulation: where ≺ ·, · is the duality product between H 0 (div, ) and H 0 (div, ). The bilinear forms l(·, · ; ·), b(·, ·) and t(·, · ; ·) are defined as follows: In another way, we define the nonlinear term Z(·, · ; ·) by For proving the existence of solution for problem (2), we need to define the following two kernels: the kernel of the bilinear form b(·, ·), which coincides with the space of divergence-free functions in H 0 (div, ), and W = (ϑ, ϕ) ∈ H 0 (curl, ) × K; ∀ψ ∈ H 0 (curl, ), t(ϑ, ϕ; ψ) = 0 the kernel of the bilinear form t(·, · ; ·). From the continuity of the bilinear forms b(·, ·) and t(·, · ; ·) we deduce that K and W are Hilbert spaces. If (θ , v, p) is a solution of problem (4), then (θ , v) is solution of the following reduced problem: Find (θ , v) ∈ L 2 (0, T; W) such that In dimension two, it is simple to show that problem (5) has a solution. However, in dimension three, giving a sense to the nonlinear term Z(·, · ; ·) is related to the following Assumption 1. In that case, the spaces H 0 (div, ) ∩ H(curl, ) and H(div, ) ∩ H 0 (curl, ) are compactly embedded in H 1 ( ) see ( [18], Thm 2.17).
Assumption 1
In dimension three, we suppose that the boundary is C 1,1 or convex.
We recall the uniform inf-sup condition on bilinear form b(·, ·): There exists a constant γ > 0 such that see [19] or [20, Chap. I, Cor. 2.4] for its proof. When Assumption 1 and the inf-sup condition (6) are satisfied, then problems (5) Finally, we establish some regularity properties of the solution of problem (4). These regularity results can be easily derived from [18,Chap. 2], [23], and [24] by using a bootstrap argument.
if Assumption 1 holds and there exists a constant such that where c is a positive constant independent of i.
Hereinafter, for the spectral discretization, we assume that is a square or cube. Using the same idea of Nédélec's finite elements (see [27,Sect. 2]), we introduce our discrete spaces.
Let N ≥ 2 be an integer. The velocity discrete space V N is defined as The vorticity discrete space T N is defined as Finally, the pressure discrete spaces M N are defined as We have the following inequality [28]: For continuous functions ϕ and ψ on¯ , we define the discrete scalar product Hereinafter, we suppose that f is continuous on × [0, T]. The full discrete problem is constructed from problem (7)-(8) by using the Galerkin method combined with numerical integration. If The bilinear formsl N (·, · ; ·), b N (·, ·), and t N (·, ·; ·) are defined as follows: From (11) combined with the Cauchy-Schwarz inequality it follows that the bilinear formsl N (·, ·; ·), b N (·, ·). and t N (·, ·; ·) are continuous respectively on ( N is linear and continuous on V N . As a consequence of the exactness property (10), the bilinear forms b(·, ·) and b N (·, ·) coincide on V N × M N . The discrete nonlinear term Z N (·, ·; ·) is defined as We introduce the kernel of the discrete bilinear form b N (·, ·) which is equal to the space of divergence-free polynomials in D N . We also define the discrete kernel of the bilinear form t N (·, · ; ·) We remark that the discrete kernel W N is not included in the continuous kernel W; see [10,Cor 3.2],. So the full discrete problem (12) is reduced as follows: We consider the inf-sup condition proved in [10,Lemma 3.9]. There exists a positive constant β independent of N such that the discrete bilinear form b N (·, ·) satisfies The arguments used to prove the existence of a solution of problems (13) and (12) are exactly the same as those for the continuous problems (9) and (8) where c is a positive constant independent of N and i..
Remark 1 Note that the previous existence result still holds when Z N (·, · ; ·) is replaced by Z(·, · ; ·) in problem (12). In practice, this means that a more precise quadrature formula, exact on P 3N-1 ( ), is used to evaluate the integrals that appear in the treatment of the nonlinear term. The corresponding discrete problem reads: In the same way the discrete reduced problem (13) is written as:
Error estimates
This section is devoted to the proof of the error estimate between the solution of problems (7)- (8) and (15) in dimension two since the proof is difficult in dimension three. This proof is based on the Brezzi-Rappaz-Raviart theorem [15].
2.4])
, where SL is the solution (θ i , v i ) of the following reduced problem: Let We also define the mapping G from the space X = H 0 (curl, ) × K into the dual space of H 0 (div, ) by So we conclude that problem (9) is equivalent to the problem We proceed in the same way for the discrete case. Let X N = W N × K N . Consider the discrete Stokes operator S N such that S N L is the solution (θ i N , v i N ) of the following problem: We also remind from [10] the following properties of the discrete operator S N : • The stability property • If (θ i , v i ) belongs to H s+1 ( ) × H s ( ) 2 for any 1 ≤ i ≤ I and s ≥ 1, then we have the following error: We also define the discrete mapping G N from X N into the dual space of V N by Then we conclude that the problem (13) is equivalent to the problem Let D be a differential operator. We make the following assumption.
We start by proving the following continuity property using the discrete implicit function defined in the theorem of Brezzi, Rappaz, and Raviart [15].
Lemma 1 For any
where c is a positive constant.
Proof Using the Hölder inequality for all r > 2 and s > 2 such that 1 r + 1 s = 1 2 , we obtain Then by the inverse inequality (see [29]) and the fact that the embedding of H(curl, ) into L s ( ) is continuous with a norm bounded by s 1 2 (see [30]), we get inequality (21) when s = ln(N).
Let L the space of linear operators from X into X. Proof Writing we prove the desired result in three steps.
1) The last term in the right-hand side of equality (22) 2) The second term in the right-hand side of equality (22) tends to zero as N tends to infinity. For any (ϑ, ω), we have Then from the property [9, (2.37)] we obtain We conclude that the term SDG(θ , v) · (ϑ, ω) is in H 2 ( ) × H 1 ( ) 2
and satisfies
Finally, from (20) we have (3) Using Assumption 2, take η = (Id + SDG(θ , v)) -1 L for N large enough. Then the quantity in (23) is bounded by 1 2η , which gives the desired result with the norm of the inverse bounded by 2η. Now we prove the following Lipschitz property of the operator S N .
Lemma 3
There exists a constant k > 0 such that for any (θ ,v) ∈ X, Proof For any w N ∈ K N , we have This leads to the desired Lipschitz property using the same idea as in the proof of Lemma 1 and (19).
Proof From equality (17) we derive that Based on (20), we bound the first term in the right-hand side of inequality (25). To bound the second term, we write, thanks to the definition of G and G N , The approximation properties of the operators N-1 and I N (see [28]) give the bound for this last term, which concludes the proof of (24).
Consequently, in the following theorem, we state an error estimate.
Theorem 2
Assume that the data function f belongs to the space L 2 (0, T; H μ ( ) 2 ), μ > 3 2 , and that the solution 2 × H s ( ), s > 1, and satisfies Assumption 2. Then, there exist an integer N * and a positive real number h * such that for all N ≥ N * and |h| ≤ h * , problem (15) has a unique solution. Moreover, this solution satisfies for all 1 ≤ i ≤ I the following error estimate: Proof Using Lemmas 2-4 together with the Brezzi-Rappaz-Raviart theorem (see [15]), for N large enough, we obtain that for all 1 ≤ i ≤ I, problem (16) has a unique solution (θ i N , v i N ), which satisfies Besides, using the discrete inf-sup condition (14), for all 1 ≤ i ≤ I, there exists a unique pressure p i N in M N such that Furthermore, for any q N ∈ M N , having we deduce the estimate for p ip i N L 2 ( ) from (14) and the triangle inequality. (26) is fully optimal for the vorticity and velocity, whereas it is quasioptimal for the pressure.
Resolution algorithm
Considering the result of the error estimate, we will establish numerical tests just in the two-dimensional case on the square =] -1, 1[ 2 . We elaborate the following iterative algorithm to solve problem (12). For simplicity, we omit the indices N .
Step 1. We start by solving the linear Stokes problem: Step 2. We suppose that the (k -1)th iteration (θ i k-1 , v i k-1 , p i k-1 ) is known. Then we solve the problem: We do the iterations until the following condition is satisfied: In the following, we start by presenting the linear system deduced from the discrete problem (27). We build a basis of the discrete spaces T N , V N , and M N .
We consider the Lagrange polynomial ψ p in P N (-1, 1) associated with the nodes p , 0 ≤ p ≤ N , such that where δ pq is the Kronecker symbol. We fix the integer p * between 0 and N equal to N 2 or to (N+1) 2 . We denote the set P * = {0, . . . , N} \ {p * } and consider the polynomial ψ * p ∈ P N-1 ([-1, 1]) such that Then the discrete unknowns θ i N , v i N and a pseudopressurep i N are written as The functionp i does not belong to L 2 0 ( ). However, the real pressure p i N is obtained from the formula The components of the unknowns θ i N , v i Nx , v i Ny , and p i N allow us to form the unknowns vectors denoted by i , V i x , V i y , and P i . Their dimensions are equal to (N -1) 2 , N(N -1), N(N -1), and N 2 -1, respectively. We consider V 0 = (V 0 1 , V 0 2 ), where the components of the vectors V 0 1 and V 0 2 are respectively v 0 Nx ( p , q ) and v 0 Ny ( p , q ) such that v 0 = (v 0 Nx , v 0 Ny ). Consequently, we formulate the discrete problem (27) as the following equivalent linear system: where B T is the transpose of a matrix B. The matrices , and C ω are the same as for the Stokes problem (see [10,Sect. 6]).
The matrices D 1 = (D 11 , D 12 ), D 2 = (D 21 , D 22 ), and N 1 = (N 1 , N 2 ) are made respectively from the terms Z N (θ i Since the global matrix of the linear system (28) is not symmetric, the GMRES method [16] is used for the resolution.
Numerical results
In this section, we start by studying the time convergence. We consider a given solution obtained from the formulas v = curl ϕ and θ = curl v, where ϕ is the stream function. We handle the following two cases.
Case (1): Assume that the steam function ϕ and the pressure p are C ∞ are related to the time and space so that (θ , v; p) is a solution of problem (4): ϕ(t, x, y) = e t sin(πx) sin(πy), p(t, x, y) = e -t (x + y).
Case (2): Assume that the steam function ϕ and the pressure p are less regularly related to the time and space so that (θ , v; p) is a solution of problem (4): Figures 1(a) and 1(b) represent the convergence in time for the continuous solutions defined in (29) and (30), respectively. We remark that in the two cases (regular solution or less regular solution) the time convergence order is almost equal to 1, which confirms the result of Theorem 2.
In Fig. 2(a), for the solution issued from (29), we present the spectral convergence curves on the vorticity θ in norm H(curl, ), the velocity v in norm H(div, ), and the pressure in norm L 2 ( ). These error curves are provided in semilogarithmic scales, as functions of log(N), for N varying from 5 to 30. As can be expected from Theorem 2, the convergence is exponential for the solution, and the slope for the error curve on the pressure is the same as that for the vorticity and velocity. H(div, ), and the pressure in norm L 2 ( ) in semilogarithmic scales, as functions of log(N), for the solution issued from (30). We note that the error is much larger for the singular solution (30) than that for the regular solution (29), which confirms the results of Theorem 2. Figure 3 corresponds from top to bottom and left to right to the discrete vorticity, the two components of the discrete velocity, and the discrete pressure for the data f = (f x , f y ) = txy 2 , 0 , v 0 = (0, 0), homogeneous boundary conditions v · n = g = 0 on , and N = 35. Now we handle the influence of the viscosity ν on the number of iterations. We take ξ = 10 -12 and the regular solution issued from (29). Figure 4 presents the number of iterations processed by the algorithm as a function of N as it varies from 5 to 25. This number of iterations is greater when the viscosity ν decreases.
Conclusion
This paper deals with the resolution and implementation of the implicit Euler scheme in time and spectral discretization in space of the nonstationary vorticity velocity pressure formulation of the Navier-Stokes problem with nonstandard boundary conditions. We prove using the Brezzi-Rappaz-Raviart theorem that the new discrete formulation has a unique local solution. In dimension two, we show an optimal error estimate for the vorticity and velocity and a nearly optimal for the pressure,. | 4,912.6 | 2020-10-23T00:00:00.000 | [
"Engineering",
"Physics",
"Mathematics"
] |
Invariant quadratic operator associated with Linear Canonical Transformations
: The main purpose of this work is to identify the general quadratic operator which is invariant under the action of Linear Canonical Transformations (LCTs). LCTs are known in signal processing and optics as the transformations which generalize certain useful integral transforms. In quantum theory, they can be identified as the linear transformations which keep invariant the canonical commutation relations characterizing the coordinates and momenta operators. In this paper, LCTs corresponding to a general pseudo-Euclidian space are considered. Explicit calculations are performed for the monodimensional case to identify the corresponding LCT invariant operator then multidimensional generalizations of the obtained results are deduced. It was noticed that the introduction of a variance-covariance matrix, of coordinate and momenta operators, and a pseudo-orthogonal representation of LCTs facilitate the identification of the invariant quadratic operator. According to the calculations carried out, the LCT invariant operator is a second order polynomial of the coordinates and momenta operators. The coefficients of this polynomial depend on the mean values and the statistical variances-covariances of these coordinates and momenta operators themselves. The eigenstates of the LCT invariant operator are also identified with it and some of the main possible applications of the obtained results are discussed.
1-Introduction
Linear Canonical Transformations (LCTs) are studied and used in several areas like signal processing, optics, and quantum physics [1][2][3][4][5][6][7][8][9]. In the fields of signal processing and optics, they are known to be the generalization of some useful integrals transforms such as Fourier and Fractional Fourier Transforms. In quantum theory, they can be identified as the linear transformations which keep invariant the canonical commutation relations characterizing the coordinates and momenta operators: their set can be considered as a symmetry group of these canonical commutation relations [10][11]. Some operators related to LCTs and their representations were already considered by various authors [5][6]9]. However no study has been done before regarding the identification of the invariant quadratic operator associated with LCTs, with a view to an application in relativistic quantum physics, as it is envisaged in the present work. It can be remarked that from a physical point of view, the introduction of relativistic canonical commutations relations which is needed in order to define the associated LCTs rises the problem of the existence of a time operator. However, this problem was already tackled by various authors as can be seen in the refs. [12][13][14][15][16]. The results established in the present work also show that the introduction of this operator could have interesting consequences. In this work, the general case of LCTs corresponding to an N-dimensional pseudo-euclidian space is considered. Most of the LCTs currently used in the fields of signal processing and optics can be viewed as LCTs associated to an euclidian space with a dimension equal to 1 (monodimensional case). This monodimensional case is particularly studied in the section 2. An example of a well-known quadratic operator that can be considered as invariant under the action of some particular LCTs, in the framework of non-relativistic quantum theory, is the Hamiltonian operator of a harmonic oscillator. In fact, the Hamiltonian operator of a harmonic oscillator of mass and angular frequency is It can be noticed through the next sections that the general invariant quadratic operator identified through this work has a similarity with this Hamiltonian operator. Because of this similarity, the formalism that is considered in the identification of the eigenstates of this general invariant quadratic operator is analog to the well-known formalism associated with the theory of harmonic oscillator. These eigenstates share themselves some similarities with what are called coherent states, generalized coherent states and squeezed states in the literature [17][18][19][20]. It follows that the formalism considered in the present work can also be considered as an extension of the theory of quantum harmonic oscillator in a more general relativistic framework with an establishment of a link between it and a general theory of linear canonical transformations.
The LCT invariant operator identified through this work can also be considered as a generalization of a linear combination of the reduced dispersion-codispersion operators, denoted ℶ + , introduced in our previous work [9]. The results that are established here generalize and bring also more clarifications to the formulation developed and used in the references [9][10][11].
In order to simplify the presentation of the calculations and results associated with the identification of the LCT invariant operator in the next sections, we recall in this introduction section the definition and some of the main properties of the reduced dispersion operator ℶ + established in [9]. The notations that are used are not exactly the same as those utilized in [9] (especially for the mean values) but this change in notations is more suited to the current purpose. Some natural extensions which can be very easily deduced from the results obtained in [9] are also considered. Let us consider a multidimensional theory corresponding to a pseudo-Euclidian space with a dimension equal to ( , = 0, 1, … − 1). It was established in [9] that some equivalent expressions of a reduced dispersioncodispersion operator ℶ + are (natural unit system, with the reduced Planck's constant ℏ = 1 and speed of light = 1, is used) in which: being the Kronecker symbols and are the covariant components of the metric tensor of the considered space. is characterized by its signature ( + , − ) with + + − = .
〈 〉 and 〈 〉 are mean values of the operators and and ℬ , are statistical dispersioncodispersion (statistical variance-covariance). Both of these mean values and statistical variance-covariances are defined as corresponding to the eigenstates, denoted |{〈 〉, 〈 〉 , , ℬ }⟩, of an operator ℶ + . The parameters and are the multidimensional generalization of the statistical standard deviations. Explicitly, we have the following relations [9] (without a summation on the index in the first relation) The wavefunction corresponding to a state |{〈 〉, 〈 〉 , , ℬ }⟩ in coordinate representation is and are the reduced momenta and coordinates operators. Their expressions are The operators and † are given by the relation The commutation relations in (7) shows that the operators and † have the properties of ladder operators. It can be verified that the state |{〈 〉, 〈 〉 , , ℬ }⟩ corresponding to the wavefuntion in (5) is also an eigenstate of the operator defined by = ( + 2 ℬ ) (8) the corresponding eigenvalue equation is |{〈 〉, 〈 〉 , , ℬ }⟩ = 〈 〉|{〈 〉, 〈 〉 , , ℬ }⟩ (9) with 〈 〉 = 〈 〉 + 2 ℬ 〈 〉. Because of the relation (9) and for the sake of simplicity, the notation |{〈 〉}⟩ may be used instead of |{〈 〉, 〈 〉 , , ℬ }⟩ for this eigenstate in (9).
The operators and † acts as the ladder operators on the states |{ , 〈 〉}⟩ : they permits the increasing and decreasing of the values of . For the case of a monodimesional space ( = 1) with the metric 00 = 1. Index takes the unique value 0 and we have the relations The wavefunctions, in coordinate representation, corresponding to the states | 0 , 〈 0 〉⟩ are the harmonic Hermite-Gaussian functions that were introduced and studied in our previous works [9,21].
It can be verified that the operator ℶ 00 + in (13) is invariant under the action of a set of particular LCTs which include the fractional Fourier transforms. In the present work, these particular LCTs are explicitly defined by the relation (36) in the section 2 below. All the relations associated to this operator are then covariant under the action of these particular LCTs. Through the next sections, the generalization of this operator which is invariant under the action of any LCT will be identified. The main relations, analog of (14), (15), (16) and (17), which are associated with this general invariant operator and are covariant under the action of any LCT will also be established. The multidimensional generalization of the obtained results will also be considered. Some main possible applications of these results are discussed in the conclusion section. Through this work, boldface type is used for the quantum operators.
Definition of the LCTs in quantum theory
In quantum theory, Linear Canonical Transformations can be defined as the linear transformations of coordinates and momenta operators which keep invariants the canonical commutations in the relation (2). In the case of a general pseudo-Euclidian space as considered in the previous section, they can be defined through the following relations. If we denote respectively , , , and the × matrices corresponding respectively to the parameters , , , and , the relation (20a) is equivalent to the matrix relation The relation (20b) means that the 2 × 2 matrix ( ) belongs to the pseudo-symplectic group (2 + 2 − ).
For the case of a monodimensional space ( = 1) with the metric 00 = 1 (which means in particular that we have 0 = 0 and 0 = 0 ) the relations (19) and (20a) are reduced to In view of the multidimensional generalization, the index 0 is kept through all calculation that are performed for this monodimensional case.
Laws of transformation of wavefunctions and relation with integral transforms
Let | ⟩ be a random general state and let us denote ⟨ 0 | ⟩ and ⟨ ′ 0 | ⟩ the wavefunctions respectively in the 0representation and ′ 0 -representation. It can be established from the relation (21) that these two wavefunctions are linked by the following integral transform The relations (22) show explicitly the equivalence between the operator transformation (21) and the integral transforms which are currently considered in the fields of signal processing and optics ( being a constant) [1][2][3][4][5][6][7][8].
In our framework, i.e. quantum theory, these integral transforms can be identified as the laws of transformation of wavefunctions. In fact, the relations (22) show that under the action of an LCT, the state | ⟩ itself can be considered as an invariant but it is the wavefunction which changes. Explicitly, we may write The relation (22) and (23) show explicitly that the LCT corresponds to a basis change between the basis {| 0 ⟩} and | ′ 0 ⟩.The wavefunctions related by the relation (22) being the components of the state vector | ⟩ respectively in each of these basis. This basis change can be defined explicitly by the relation
Laws of transformation of mean values and statistical variance-covariance
Let be respectively 〈 0 〉 = ⟨ | | ⟩ and 〈 〉 = ⟨ | | ⟩ the mean values of the coordinate operator and momentum operator corresponding to a general state | ⟩. Their laws of transformations can be deduced easily from the relation (21) We have then also the relation It can be remarked that the relation (26) remains the same even if instead of the Linear Transformation (21) an Affine Transformation is considered i.e. a Linear Transformation combined with translations: (27) in which and are constant parameters defining the translations. Let us define and consider the following operators The mean values of 00 , 00 , 00 ⋉ , 00 ⋊ , and 00 × in the state | ⟩ corresponds respectively to the momentum statistical dispersion, the coordinate statistical dispersion and the momentum-coordinate codispersion (statistical variance-covariances). They will be respectively denoted 〈 00 〉 , 〈 00 〉 , 〈 00 ⋉ 〉 , 〈 00 ⋊ 〉 and 〈 00 The law of transformations of the operators in (28) can be deduced from the relation (26), we obtain The laws of transformation of the statistical variances-covariances considered in (29) can be deduced easily from (30)
LCT invariant quadratic operator and invariant scalar
Taking into account the relation 0 0 0 (26), it can be deduced from the relations (30) and (31) that we have, for any state | ⟩, the relations The relations (32) and (33) mean that the quadratic operator is an LCT invariant operator and the quantity ⟨ | 00 | ⟩⟨ | 00 | ⟩ − [⟨ | 00 × | ⟩] is an LCT invariant scalar. The notation ℶ 00 + is chosen for the invariant operator in (34) because, as it will be seen, it can be considered as a generalization of the reduced dispersion operator in the relation (13). In fact, if we consider for instance the particular case | ⟩ = |〈 0 〉⟩ (|〈 0 〉⟩ being the state in the relation (17) The operator in (34) is equal to the operator in (13) in this particular case. It means as said that the LCT invariant operator in (34) is a generalization of the operator in (13). This generalization can be also highlighted regarding the following fact: if the transformations (31) and the relation (35) are considered, it can be remarked that the operator in (13) is explicitly invariant if and only if the following relations hold: The conditions and parameterizations in (36) corresponds to a fractional Fourier-like transforms. The operator in (13) is invariant under the action of particular LCTs corresponding to (36) while its generalization (34) is invariant under the action of any LCT. The main difference in the expressions of the operators in (13) and in (34) is the presence of the term ⟨ | 00 × | ⟩ 00 × in (34). The relation (59) in the paragraph 2.7 below show also, explicitly, that the operator in (34) is the LCT invariant generalization of the operator in (13).
The expression of the LCT invariant operator (34) then becomes (for | ⟩ = | ⟩) in the 0 coordinate representation, we have for the operators 00 , 00 and 00 So the coordinate representation ב 00 + of ℶ 0 0+ = ℶ 00 + is a second order differential operator. Its form suggest us to suppose that the wavefunction, in coordinate representation, corresponding to | ⟩ is of the form: The expression of the parameters , , and are to be determined. If we denote the eigenvalue of ℶ 0 0+ = ℶ 00 + corresponding to | ⟩, we have the eigenvalues equations is a real number (it doesn't depend on the variable 0 but may depend on the mean values 〈 0 〉 and 〈 0 〉). It follows that the most explicit general expression of the coordinate wavefunction ⟨ 0 | ⟩ associated to a state | ⟩ is It can be shown that the corresponding momentum wavefunction ⟨ 0 | ⟩ which is the Fourier transform of ⟨ 0 | ⟩ is Remark: It may be noted that the codispersion (i.e. statistical covariance) 〈 00 ⋊ 〉 and 〈 00 ⋉ 〉 are complex numbers while 〈 00 × 〉 is a real number. Their explicit expressions and the relations between them are The wavefunctions ⟨ 0 | ⟩ and ⟨ 0 | ⟩ can be putted in the form But unlike the case in the relations (14), (15) and (18), the parameters 00 and ℬ 00 in (48) are not real numbers but complex numbers. However, their product is equal to a real number. We have the following explicit relations: in the limit 00 × = 0, 00 and ℬ 00 become real numbers and the particular case corresponding to the relations (14) is obtained.
Ladder operators and general eigenstates of the LCT invariant operator
As the LCT invariant operator ℶ 0 0+ = ℶ 00 + in (37) is a generalization of the operator in (13), the state | ⟩ can be seen as a generalization of the state |〈 0 〉⟩ in (17). To make it more explicit and to obtain the eigenstates of the LCT invariant operator ℶ 0 0+ corresponding to higher eigenvalues, the generalization of the operator 0 in (15) and the ladder operators 0 , † in (16) are expected to be introduced.
Let us consider the expression of ⟨ 0 | ⟩ in (48), it can be deduced from this expression that we have the relation The ℬ 00 in this expression is given in the relation (49) such that we have more explicitly: It can then be shown that the following commutation relation holds The eigenvalue equation in (51) suggest the use of the notation |〈 0 〉⟩ for the eigenstate | ⟩. This state | ⟩ = |〈 0 〉⟩ is a common eigenstate of the operators and ℶ 00 + : according to (51), it is an eigenstate of with the eigenvalue equal to 〈 0 〉 and according to (41) and (42) it is an eigenstate of ℶ 00 + with its lowest eigenvalue equal to The relations (55) and (56) which are analogs to the relations characterizing a linear harmonic oscillator suggest us to denote | 0 , 〈 0 〉⟩ a general eigenstates of ℶ 0 0+ = ℶ 00 + with 0 a positive integer. Explicitly, we have the relations { 0 |〈 0 〉⟩ = 〈 0 〉|〈 0 〉⟩ ℶ 00 The relations in (57) are exactly the generalization of the relations in (17). The particular case (17) corresponds to the limit 00 × = 0. On one hand, the reduced dispersion operator in (17) is invariant under the action of the particular LCTs corresponding to relation (36) then the relation (17) is covariant under the action of these particular LCTs. On the other hand, the operator ℶ 0 0+ = ℶ 00 + in (56) and (57) is invariant under the action of any LCT: it follows that the relation (57) which generalizes (17) is also covariant under the action of any LCT. In other words, the LCT group is a symmetry group for the relations (56) and (57).
In our previous work [9], the particular reduced dispersion operator corresponding to the relations (17) and its eigenstates were implicitly used to define the phase space representation. Following the above results, it can be remarked that a more general phase space representation which is explicitly LCT-covariant can be obtained if this particular reduced dispersion operator and its eigenstates used in these formulations are replaced by their generalizations which correspond to the relations (56) and (57).
Laws of transformations of the reduced and ladder operators
The adequate generalization of the reduced operators and can be identified from the relation (54) taking into account (49) Let us consider the LCT (26) in which the state | ⟩ is taken to be the common eigenstate |〈 0 〉⟩ of 0 and ℶ 00 as defined by the relation (57) :| ⟩ = |〈 0 〉⟩. We have then: The laws of transformation of the statistical dispersion-codispersion can be deduced from the relations (31) We may remark that the relation (64) describes a transformation which has a similarity with Bogolioubov transformations [22][23]. It can be deduced from (64) that we have for the transformation of the operator 0 and its mean value (and eigenvalue) 〈 0 〉
3-Matrix formalism
Let us introduce three parameters , and defined from the statistical variance-covariance of coordinate and momentum through the following relations Using the relation (71), the relations between the momentum and coordinate operators , and their reduced forms and can be written in the matrix form The matrix √2 ( 0 ) is the inverse of the matrix √2 ( 0 − ). Explicitly, we have the relation From the relation (71) and (43), it can be deduced that we have the relation The relations in (75) can be written in the matrix form the matrix ( 〈 00 〉 〈 00 × 〉 〈 00 × 〉 〈 00 〉 ) is a momentum-coordinate statistical variance-covariance matrix.
The law of transformation in (61) can be written in the matrix form From the relation (77), the invariance of the LCT scalar 〈 00 〉〈 00 〉 − 〈 00 × 〉 2 can be interpreted as the invariance of the determinant of the statistical variance-covariance matrix ( 〈 00 〉 〈 00 × 〉 〈 00 × 〉 〈 00 〉 ). The relation (77) show that this invariance is a direct consequence of the relation 0 0 0 0 − 0 0 0 0 = 1. The matrix formalism considered here can also be used to provide a proof for the relation (62). Let us write the transformation of the reduced operator in the form On one hand, taking into account the relations (72), (73) and (74), it can be deduced that we have the relation On the other hand, it can be deduced from the relations (76) and (77) that we have
4-Multidimensional generalization 4.1 Momenta-coordinates statistical variance-covariances and matrix formalism
Let us now consider the multidimensional case. Let |{〈 〉}⟩ be the state corresponding to the following wavefunction (in coordinate representation) The relation (83) is a multidimensional generalization of (45). is a real number which may depend on the mean values 〈 〉 and 〈 〉. The following relations which are associated to this wavefunction can be verified we have also the following eigenvalue equations and relations which justify the notation |{〈 〉}⟩ for these state itself The matrix form of the relation (90) is It can be deduced, using the relation (89) that the relation (91) is equivalent to the following relation
Reduced operators
Taking into account of the relations (84), (87) The law of transformation of the reduced operators defined in (94) can be written in the form In which Π, Ξ, Θ and Λ are the × matrices corresponding to the parameters Π , Θ , Ξ and Λ . It can be deduced from the relation (92), (95), (97) and (98) that we have the relation The relation (99) is the multidimensional generalization of the relations (62) it can be deduced from the relation (100) that we can have the parameterization The law of transformations of the ladder operators † and can be deduced from the relations (94), (98) and (100). We obtain The transformations in (102), which are the multidimensional generalizations of (64) share also some similarities with the Bogolioubov transformations [22][23].
Invariant quadratic operator
It can be deduced from the relations (98) and (100)
Invariant quadratic operator and spinorial representation of LCTs
A spinorial representation of LCTs can be established from the pseudo-orthogonal representation defined through the relations (98), (100) and (101). An explicit study of this spinorial representation and its applications in particle physics is considered in [11]. The establishment of this spinorial representation is based on the introduction of the operator = + (106) in which the and are the generators of the Clifford algebra ℓ(2 + , 2 − ). They verify the following anticommutation relations From the relations (103), (106) and (107) and the commutation relations[ , ] = , it can be established that we have between the invariant quadratic operator ℶ + and the operator the relation
5-Discussions and conclusions
The expression of the general invariant quadratic operator associated with any LCT that we are looking for is given by the relations (34) or (59) for the monodimensional case and by the relations (103) or (104) for the multidimensional one. Their eigenstates and corresponding eigenvalues are respectively given by the relations (57) and (105). These LCT invariant operators are second order polynomials of the momenta and coordinates operators. The coefficients of the polynomials depend on the mean values and statistical variances-covariances of these momenta and coordinates operators.
It can be remarked that these LCT invariant operators are generalizations of the operator ℶ + = ℶ + associated respectively with the reduced dispersion operators in the relations (13) for the monodimensional case and in (1) for the multidimensional one.
The main difference between the reduced dispersion operators (1) and their generalization in (103) and (104) can be seen if one compares the expressions of the reduced and ladders operators (93) and (94) with (6) and (7): there is in (93) The operator ℶ + = ℶ + associated with the particular reduced dispersion operators defined in (1) is invariant under the action of a set of particular LCTs which include Lorentz transformations and fractional Fourier transforms while its generalization given through the relations (103) and (104) is invariant under the action of any LCT.
Through our previous works (in [9] for instance), the eigenstates of the operator associated with the particular reduced dispersion operators in (1) were implicitly used to construct the called phase space representation of quantum theory. Taking into account the results established through this paper, it is clear that these states are to be replaced by their generalization in (105) which are the eigenstates of the general LCT -invariant operator (103). In this case, we obtain a fully and explicitly LCT covariant phase space representation. This LCT covariant phase space representation may be used to formulate an LCT covariant relativistic quantum thermodynamic. In fact, it is known, from the kinetic theory of gases for instance, that thermodynamic variables can be linked with the statistical variances of particles speeds and then with the statistical variances of their momenta. And it can be remarked, in the relations (86), (88) and (94) for instance, that the statistical variances-covariances of momenta and coordinates is expected to be at the core of the formulation of this LCT covariant phase space representation. It is also known, as considered in [24] for instance, that the obtention of phase space distribution may be exploited to establish a link between quantum theory and thermodynamics. This phase space representation can also possibly be used in the study of the relation between quantum and classical theories. The possibility of considering an LCT group as a symmetry group in relativistic quantum physics is considered in [11]. The results established in the present work have an important implications in this framework given the well-known importance of symmetry and invariance in physics [25][26][27]. An LCT-covariant wave equation corresponding to a scalar wavefunction or an equation for a scalar field can be for instance obtained using the LCT invariant operator ℶ + itself. An LCT covariant spinorial wave equation or an equation for a spinor field may also by established using the spinorial representation of LCTs that can be deduced from the pseudo-orthogonal representation defined through the relations (98), (100) and (101). This spinorial representation can also lead to a natural classification of the elementary fermions of the Standard Model of particle physics as it is shown in [11].
The results established through this work can also be used and applied in all areas concerned by the LCTs. | 6,211.8 | 2021-02-01T00:00:00.000 | [
"Physics",
"Mathematics"
] |
Understanding the Hydrothermal Formation of NaNbO3: Its Full Reaction Scheme and Kinetics
Sodium niobate (NaNbO3) attracts attention for its great potential in a variety of applications, for instance, due to its unique optical properties. Still, optimization of its synthetic procedures is hard due to the lack of understanding of the formation mechanism under hydrothermal conditions. Through in situ X-ray diffraction, hydrothermal synthesis of NaNbO3 was observed in real time, enabling the investigation of the reaction kinetics and mechanisms with respect to temperature and NaOH concentration and the resulting effect on the product crystallite size and structure. Several intermediate phases were observed, and the relationship between them, depending on temperature, time, and NaOH concentration, was established. The reaction mechanism involved a gradual change of the local structure of the solid Nb2O5 precursor upon suspending it in NaOH solutions. Heating gave a full transformation of the precursor to HNa7Nb6O19·15H2O, which destabilized before new polyoxoniobates appeared, whose structure depended on the NaOH concentration. Following these polyoxoniobates, Na2Nb2O6·H2O formed, which dehydrated at temperatures ≥285 °C, before converting to the final phase, NaNbO3. The total reaction rate increased with decreasing NaOH concentration and increasing temperature. Two distinctly different growth regimes for NaNbO3 were observed, depending on the observed phase evolution, for temperatures below and above ≈285 °C. Below this temperature, the growth of NaNbO3 was independent of the reaction temperature and the NaOH concentration, while for temperatures ≥285 °C, the temperature-dependent crystallite size showed the characteristics of a typical dissolution–precipitation mechanism.
■ INTRODUCTION
Hydrothermal synthesis is a low-temperature environmentally friendly route to a variety of functional oxides reducing challenges with evaporation, agglomeration, and coarsening, which often takes place at higher temperatures. 1−6 Still, the development of the method has been mostly achieved through a trial-and-error approach as the conventional autoclave design, not easily penetrable by X-rays, makes it inherently challenging to study the synthesis in real time. Thus, the nature of the reactions taking place inside the reaction vessel is not completely understood. NaNbO 3 has gained attention due to its many potential applications in high-density optical storage, enhancing nonlinear optical properties, as hologram recording materials, etc., 7,8 It is also an end-member of the K x Na 1−x NbO 3 solid solution, a promising lead-free replacement for lead zirconate titanate (PZT). 9,10 Moreover, NaNbO 3 nanowires formed by hydrothermal synthesis and subsequent calcination have proven useful in lead-free piezoelectric nanogenerator applications. 11 Ex situ studies of the hydrothermal synthesis of NaNbO 3 12 − 18 The sodium hexaniobate then transforms into Na 2 Nb 2 O 6 ·H 2 O, which in turn transforms into perovskite NaNbO 3 , displaying a wide range of morphologies including cubes 19−21 and various agglomerated structures. 17,22,23 The crystal structures of these phases are significantly different from each other, as seen in Figure S1 in the Supporting Information, and it is not clear how the structures evolve from one phase to the next or how they affect the growth mechanism of NaNbO 3 . Some attempts have been made to understand these growth mechanisms by ex situ studies, including the effect of the precursor 19 and some intermediate structures, 20 but as recent in situ studies have shown the presence of several more intermediate phases than previously reported, 24,25 the proposed growth mechanisms may not give a full depiction of the resulting effects on the NaNbO 3 growth. More work is therefore needed to understand how these reaction schemes depend on temperature and mineralizer concentration and how the product is consequently affected.
Here, we present an in situ X-ray diffraction (XRD) study of hydrothermal synthesis of NaNbO 3 , shedding light on the entire reaction scheme for a wide range of synthesis temperatures and NaOH concentrations, commonly seen in the literature. 14,[20][21][22]26 We determine how the reaction scheme is affected by reaction temperature and NaOH concentration. Knowledge about the kinetics during formation of NaNbO 3 , which is affected by the reaction mechanism, is obtained and useful for the optimization of reaction rate and resulting crystallite size. Further, as most literature on hydrothermal synthesis of NaNbO 3 presents data at temperatures below 250°C, we investigate the reaction at higher temperatures. In combination, the acquired knowledge provides the ability to speed up the reaction while still being able to achieve the desired reaction product, valuable for production at industrial scales.
■ EXPERIMENTAL SECTION
Orthorhombic T-Nb 2 O 5 powder 27 was synthesized by precipitation from (NH 4 )NbO(C 2 O 4 ) 2 ·5H 2 O (Sigma-Aldrich, 99.99%) dissolved in water by adding aqueous ammonia solution (25 wt %, Emsure) before drying and then calcining at 600°C for 12 h, as described by Mokkelbost et al. 25,28 Highly concentrated suspensions were made by mixing T-Nb 2 O 5 powder with 9 or 12 M NaOH aqueous solutions, giving a Na/Nb ratio of 9.5 or 13.2. The suspensions were stored in PET bottles and injected with a plastic syringe into a custom-made in situ cell, making sure to fill the entire volume of the cell. The cell, which has been previously described, 25 consisted of a sapphire capillary with inner and outer diameters of 0.8 and 1.15 mm, respectively, which was fixed to an adjustable aluminum frame by graphite ferrules and Swagelok fittings. A High Pressure Liquid Chromatography (HPLC) pump connected to the dead-ended cell provided a stable pressure. The mid 1/3 of the capillary's length was heated by a hot-air blower, and the temperature was calibrated by refining the unit cell expansion of boron nitride. 29 The blower was ramped up to reaction temperature while being directed away from the capillary and was remotely swung into position only after the desired pressure was achieved and data acquisition had been initiated, providing quasi-instant heating (see temperature profiles in Figure S2 in the Supporting Information). Temperatures in the range of 160− 420°C were studied, and the pressure was set to 250 bar.
In situ powder X-ray diffraction (PXRD) data were collected at the Swiss-Norwegian Beamlines (BM01) at the European Synchrotron Radiation Facility (ESRF) using a monochromatic beam with a wavelength of 0.6776 Å. The diffraction signal was detected by a Pilatus 2M detector 30 with acquisition times of 0.1 or 5 s depending on the experiment. The as-recorded data were treated with the Pilatus@SNBL platform, 30 and the refinements were performed using TOPAS (version 5) in launch mode using JEdit with macros for TOPAS. 31 Batch refinements were made possible by launching TOPAS with Jupyter Lab/Notebook. 32 The diffraction patterns were compared to structure files from the Inorganic Crystal Structure Database (ICSD), Crystallographic Open Database (COD), and International Centre for Diffraction Data (ICDD). Phases with adequate signal/noise ratio, which could not be fitted successfully to any known structures, were indexed by a grid search using McMaille 33 on the 20 most intense diffraction lines. The choice of a proper unit cell was based on a high figure of merit, provided that all of the reflections were identified. The least symmetric space group was then chosen to avoid extinction of any reflections.
The instrumental resolution function, wavelength, and detector distance were found and calibrated by refining an NIST 660a LaB 6 standard. The diffraction patterns of the product phases were summed (25−60 s total acquisition time) to enhance statistics, and the default approach was a Rietveld refinement, refining the unit cell parameters, the Gaussian and Lorentzian isotropic size parameters, isotropic temperature factors, scale factor, and Chebychev background parameters. The Lorentzian and Gaussian isotropic size parameters were used to extract the integral breadth to give volume-weighted mean crystallite size. The intermediate phases HNa 7 Nb 6 O 19 ·15H 2 O and Na 2 Nb 2 O 6 ·H 2 O were refined with the space groups Pmnn 34 and C2/c, 35 and the NaNbO 3 product was refined with the space groups Pbcm 36 or Pnma. 37 Atomic positions in all of the structures were fixed to literature values. For the time-resolved Rietveld refinements, the same approach as described above was used, but with additionally fixing the isotropic temperature factors. The phase fraction evolution of NaNbO 3 was assumed to correspond to the normalized timeresolved scale factor. Structures were visualized with VESTA. 38 Information about the kinetics of the NaNbO 3 growth mechanism was extracted by fitting the refined phase fraction over time to the Johnson−Mehl−Avrami (JMA) equation. 39,40 Simultaneous in situ small-angle X-ray scattering (SAXS) and PXRD were performed at beamline 7.3.3 at the Advanced Light Source in Berkeley, California, using monochromatic X-rays of 1.2398 Å. The PXRD and SAXS signal were acquired with a Pilatus 300K-W detector and a Pilatus 2M detector, respectively. The instrument geometry configuration was calibrated using a silver behenate (CH 3 (CH 2 ) 20 COOAg) standard. The two-dimensional data were reduced using the Nika Igor Pro analysis software package. 41 The data were plotted and analyzed in Jupyter Lab 32 with the Scipy tool packages Numpy, Matplotlib, and Pandas. 42 The suspensions made for this purpose had 50 wt % of T-Nb 2 O 5 compared to the in situ PXRD experiments performed at the ESRF (as described above) to enhance the X-ray transmission. The same in situ cell as described above was used, but with a splash protection cage of aluminum bars and Kapton films around it.
Total scattering measurements on unheated suspensions of T-Nb 2 O 5 powder in NaOH solutions at ambient pressure were performed at beamline BL08W at SPring-8/JASRI in Hyogo, Japan, using an a-Si flat panel area detector. 43 The wavelength (0.1077 Å) and instrumental parameters were calibrated with an NIST 660a CeO 2 standard. Similar suspensions to those for the in situ PXRD experiments, with 9 and 12 M NaOH solutions, fresh and aged for 1, 10, and 24 h were injected into 0.5 mm Kapton capillaries. The transmission signal was detected with 1 s acquisition time, collecting a total of 10 images per sample. The data were background-subtracted and converted to reduced structure functions, F(Q), and then Fouriertransformed to pair-distribution functions (PDF), G(r), 44 using xPDFsuite 45 and analyzed using the Diffpy-CMI software using a Q max of 16.5 Å −1 and a Q min of 1.2 Å −1 . 46
■ RESULTS AND DISCUSSION
All of the experiments were performed using the same T-Nb 2 O 5 solid precursor suspended in 9 and 12 M NaOH aqueous solutions. All of the suspensions were hydrothermally treated at 250 bar in the temperature range 160−285°C. An additional reaction with 9 M NaOH was monitored under supercritical conditions (250 bar, 420°C). The effects of NaOH concentration and reaction temperature on the phase evolution, crystallite size, unit cell volume, and reaction kinetics are discerned in the following sections. Note that the datasets for 9 and 12 M NaOH at 215°C and 9 M NaOH at 420°C have been published previously. 25 Effects of Reaction Conditions on the Phase Evolution. Figure 1 shows the diffraction patterns of all of the phases observed during the performed experiments, numbered 1−10, for different heating time and/or temperature, leading to the formation of NaNbO 3 at 160−420°C. Figure S1 in the Supporting Information. The patterns in the gray areas are pH variants at equivalent times in the reaction. The diffraction lines of T-Nb 2 O 5 (no. 1 in Figure 1) are seen in the diffraction pattern of the unheated precursor suspension with the addition of three diffraction lines at very low Q (0.65, 0.71, 0.76 Å −1 ). The presence of these lines seemed to be dependent on the time since the suspensions were made, which will be investigated, along with their origin, in later paragraphs. All of the diffraction patterns except for the product, NaNbO 3 (nos. 9 and 10 in Figure 1), had diffraction lines at similarly low Q-values, demonstrating the large unit cells of phases present.
In agreement with our previously reported paper, 25 the T-Nb 2 O 5 precursor (no. 1 in Figure 1) is transformed into Figure 1), before several intermediate phases form (nos. 3−5 in Figure 1), ending with the formation of Na 2 Nb 2 O 6 ·H 2 O (no. 6 in Figure 1) and NaNbO 3 (nos. 9 and 10 in Figure 1). In this work, the temperature dependency of this phase evolution has been identified and is presented in Figure 2, where the data for an expanded temperature region (160−420°C) for suspensions with 9 M NaOH are shown (the equivalent data for 12 M NaOH at 160−285°C are presented in Figure S3 in the Supporting Information). Contour plots for the end temperatures are shown at each side. The colored region in the middle section of the figure shows bar plots representing the recorded phase evolution at certain temperatures, with logarithmic interpolations between them. The reaction scheme appearing in the 9 and 12 M NaOH solutions are quite similar at similar temperatures, and the reaction rate increases in a comparable manner for increasing temperature for both concentrations. All of the reactions finished with a full conversion to NaNbO 3 , except in 12 M NaOH at 160°C, where the experiment was prematurely stopped after an almost complete conversion to NaNbO 3 (contour plots of both experiments at 160°C are also shown with a linear y-axis in Figure S4 in the Supporting Information). A full conversion to NaNbO 3 would probably have occurred given enough time, as others have succeeded in producing phase-pure NaNbO 3 under similar conditions after a longer time. 26 Despite many similarities between the reaction schemes in the two NaOH concentrations, a few differences are still observed; first, the stability of HNa 7 Nb 6 O 19 ·15H 2 O is higher in the 9 M solution at all temperatures, which is consistent with the literature, 14 as the transformation from T-Nb 2 O 5 takes place earlier and the lifetime of the phase increases. Second, the opposite trend Figure 1. X-ray diffraction patterns for all appearing phases, with the main structural element for the previously known phases 27,34−37 shown on the top. Gray areas indicate phases appearing at the same step in the reaction, but for different NaOH concentrations. Each diffraction pattern was taken from the slowest proceeding reaction (i.e., lowest temperature) and lowest possible NaOH concentration, where the phase was present, to optimize statistics. Inorganic Chemistry pubs.acs.org/IC Article seems apparent for the following phases forming, resulting in a later onset of NaNbO 3 formation in 12 M NaOH, especially at lower temperatures. In all of the experiments, regardless of temperature and NaOH concentration, a transient phase (no. 3 in Figure 1) appears directly after the formation of HNa 7 Nb 6 O 19 ·15H 2 O. This phase could not be matched with any structure file in the Inorganic Crystal Structure Database (ICSD), Crystallography Open Database (COD) or International Centre for Diffraction Data (ICDD), as specified in Table S1 in the Supporting Information. The first reflection (0.71 Å −1 ) of the HNa 7 Nb 6 O 19 ·15H 2 O phase has the index (011) and seems to remain in this transient phase, while the two next major reflections (0.79 and 0.81 Å −1 ), indexed (101) and (110) Figure 1) for NaOH concentrations of 9 and 12 M, respectively. These two phases could not be well matched with any structure in the ICSD, COD, or ICDD, as specified in Table S1 in the Supporting Information. As they both form from the same starting point and also evolve into the same structure in the next transformation (Na 2 Nb 2 O 6 ·H 2 O), they most probably consist of the same building blocks. The diffraction patterns of the two phases have several similarities in the higher Q-range, but show a distinct difference at lower Q-values. The presence of the low-Q diffraction lines shows that these phases have large unit cells, which are typical for polyoxoniobate clusters. For the polyoxoniobate forming in 12 M NaOH, a line at a very low Q appears (0. 51 The indexing of these two polyoxoniobate phases both resulted in monoclinic crystal systems, as seen in Table S2 in the Supporting Information. For temperatures below 285°C for both NaOH concentrations, both of the polyoxoniobates forming in 9 and 12 M transform into Na 2 Nb 2 O 6 ·H 2 O, previously observed under similar conditions. 13,16,20,22 This phase can be described as staircase-like chains where each step is a [Nb 4 O 16 ] 12− unit, and the chains are separated by water and Na + . For temperatures ≥285°C, Na 2 Nb 2 O 6 ·H 2 O is replaced by far less crystalline phases (nos. 7 and 8 in Figure 1). The diffraction patterns of these phases for the two NaOH concentrations are very similar and have similarities with both Na 2 Nb 2 O 6 ·H 2 O and NaNbO 3 . Indexing of the version of the phase present in 9 M NaOH gave a triclinic crystal system, as seen from Table S2 in the Supporting Information. The broad reflection around 0.9 Å −1 (indexed in Na 2 Nb 2 O 6 · H 2 O, representing the plane, which cuts through rows of neighboring chains) could originate from a partial collapse of the Na 2 Nb 2 O 6 ·H 2 O unit cell, where the distance between chains becomes more disordered. This fits well with certain chains moving closer together as a result of water leaving the structure, which can be described as the formation of Na 2 Nb 2 O 6 ·xH 2 O (x < 1). Diffraction patterns from the literature for a dehydrated version of Na 2 Nb 2 O 6 ·H 2 O resembles the ones shown here. 12 Additionally, it has been reported that Na 2 Nb 2 O 6 ·H 2 O dehydrates and forms microporous Na 2 Nb 2 O 6 as an intermediate phase before the formation of NaNbO 3 , at 282−290°C, 12,35 possibly explaining why this phase is observed in the temperature region ≥285°C . 12 To shed more light on the origin of the unassigned low-Q diffraction lines in the T-Nb 2 O 5 precursor diffraction pattern in Figure 1 and illuminate their dependence on the time since the suspensions were prepared, ex situ total scattering data were obtained for unheated suspensions of T-Nb 2 O 5 in 9 and 12 M NaOH aqueous solutions, aged for various times. Figure 3 presents the PDFs at a local range obtained from the total scattering data of fresh and aged (1, 10, 24 h) suspensions with 12 M NaOH. Longer r-range PDFs for suspensions with both 9 and 12 M NaOH are given in Figure S5a in the Supporting Information, along with the reduced structure functions, F(Q), in Figure S5b. From the bond lengths and trends included in Figure 3, we observe that the peak at approximately 2.0 Å, which represents the bond length between Nb and O in equatorial positions of octahedra or pentagonal bipyramids, is narrowing upon aging time. The peak at approximately 2.5 Å, originating from the long Nb−O bond in a tetragonally distorted octahedron, is increasing in intensity. Combined, these two observations indicate that the coordination of O around the Nb is becoming more defined with aging, likely forming more octahedra at the expense of pentagonal bipyramids. This is supported by the significant growth of peaks at 3.35 and 4.75 Å, coming from Nb−Nb distances of edge-and corner-sharing Nb−O octahedra. The two more subtle features at 3.8 and 4.2 Å, which correspond well with corner-sharing Nb−Nb distances involving at least one Inorganic Chemistry pubs.acs.org/IC Article pentagonal bipyramid, seem to shift to higher r-values pointing to stretching of the bonds, before shrinking, further supporting the breaking up of pentagonal bipyramids. Such a breakup of the structure would require more oxygen entering the structure for the coordination around the Nb atoms to be maintained. This negative charge is likely to be neutralized by Na + entering the structure, possibly explaining the increase of a peak at 2.85 Å, as this is the hydrogen-bond length expected between Na− O octahedra, presented by a gray line separating two Na−O octahedra in the inset (b) in Figure 3. When looking at the reduced structure functions, F(Q) in Figure S5b, the longrange order of the fresh suspensions matches well with the T-Nb 2 O 5 structure, and there is a significant contribution from this structure even after 10 h of aging for both NaOH concentrations. Figure 1.
To get a clearer view on how the different intermediate phases nucleate and grow into particles, simultaneous in situ SAXS/WAXS measurements were obtained. Figure 4 presents the in situ SAXS data for the hydrothermal synthesis of NaNbO 3 at 220°C in a 9 M NaOH solution. The wide-angle X-ray scattering (WAXS) data (presented in Figure S7 in the Supporting Information) were used to determine the phases present during the reaction, and these phases are specified in the right panel of Figure 4. The plot in the inset shows the slope of the curves in the low Q-range (0.0045−0.0055 Å −1 ), extracted by a fitting straight line to the double-logarithmic measured data in this low Q-range, for 4−16 min of heating. The SAXS data from the unheated precursor contains a broad distinct feature, which is assumed to originate from the T-Nb 2 O 5 precursor particles or an amorphous phase. This feature disappears quickly upon heating and directly after its disappearance, a weak sign of another feature at approximately 0.01−0.03 Å −1 appears. This second feature is interpreted as the transient presence of a new set of particles, but the crystal structure(s) cannot be unequivocally identified from the WAXS data in Figure S7 due to the low time-resolution and limited Q-range. Even so, several reflections are appearing at similar Q-values (1.68, 2.05, 2.35, 2.40, 2.70, 2.80 Å −1 ) as for the two intermediate polyoxoniobates in Figure 1. The next phase identified with WAXS is Na 2 Nb 2 O 6 ·H 2 O, and it can be seen that the SAXS slope at low Q increases steadily during the growth of this phase, before stabilizing at the same time (13 min) as the transformation from Na 2 Nb 2 O 6 ·H 2 O to NaNbO 3 completes.
To summarize, several intermediate phases are observed during the hydrothermal synthesis of NaNbO 3 . To understand the structural evolution from one phase to the next, one approach is to visualize the NbO x units of each phase, as seen in the proposed reaction scheme in Figure 5. The precursor T-Nb 2 O 5 consists mostly of octahedra [NbO 6 ] 7− and pentagonal bipyramids [NbO 7 ] 9− with occasional tetrahedra [NbO 4 ] 3− . As the PDFs of the unheated T-Nb 2 O 5 suspensions in Figure 3 shows, [Nb 6 O 19 ] 8− units form at local scales rather quickly upon submerging the solid T-Nb 2 O 5 in concentrated NaOH solutions, resulting in more [NbO 6 ] 7− at the expense of [NbO 7 ] 9− units, as soon as Na + (along with charge-balancing oxygen atoms) and water enter the structure. Na + and water could, for instance, enter the cavities of the T-Nb 2 O 5 structure, resulting in the stretching and breaking of bonds between corner-sharing pentagonal bipyramids. This gradual change in the local environment is the foundation for forming HNa 7 Nb 6 O 19 ·15H 2 O, with a Na/Nb ratio of 7 / 6 . The Na/ Nb ratio in Na 2 Nb 2 O 6 ·H 2 O is 1, and thus the intermediate phases appearing between these two phases should have a ratio between 1 and 7 / 6 , as a gradual expulsion could be expected. The water/Nb ratio should decrease successively from 15 The transformation of Na 2 Nb 2 O 6 ·xH 2 O (x < 1) to NaNbO 3 expels the final water in the structure, causing the octahedra to become corner-sharing instead of edge-sharing for the charge to be distributed more evenly through the structure when water is not screening the charges any longer.
Effects of Reaction Conditions on Crystallite Size and Unit Cell Volume. Figure 6a shows the refined crystallite size of the final NaNbO 3 product at various temperatures in 9 and 12 NaOH aqueous solutions. The refinements showed that the space group Pbcm gave a good fit for NaNbO 3 formed below 340°C, while for 340°C and above, Pnma gave a better fit. . The phases specified in the right panel were determined by simultaneously recorded WAXS data as presented in Figure S7 in the Supporting Information.
Inorganic Chemistry pubs.acs.org/IC Article This temperature is slightly lower than previously published bulk values for this phase transition, which predicts a transition upon heating around 370−400°C. 37,52 This suppression of the phase-transition temperature can be explained as a finite-size effect, often observed for ferroelectric oxides. 53 Two different regimes are apparent for temperatures above and below ≈285°C . Below this temperature, the crystallite size appears fairly temperature-independent with values of 35−50 nm, being slightly smaller for the experiments in 12 M NaOH solutions. Above ≈285°C, the crystallite size is larger and seems to decrease with increasing temperature. The refined crystallite size of Na 2 Nb 2 O 6 ·H 2 O in Figure 6b shows an increasing trend with increasing temperature and NaOH concentration, and the pseudo-cubic unit cell volume in Figure 6c increases with temperature and is affected by the NaOH concentration. The increase in the unit cell volume for higher temperatures is probably due to the elevated temperatures at which the measurements were performed. It is not likely to originate from a finite-size effect, as such an effect has previously shown to give an opposite trend for this materials class (i.e., smaller crystallites gives a larger unit cell). 53 The difference in temperature effect on the crystallite size of NaNbO 3 for reaction temperatures below and above ≈285°C in Figure 6 shows that there is a difference in the growth mechanism for the two regimes. The features in the in situ SAXS signal in Figure 4 suggests that only one set of particles forms during the hydrothermal synthesis of NaNbO 3 at 220°C , as there is only one new feature to appear in the higher Qrange. This suggests that the particles formed in the beginning of the synthesis are converted directly into the next phases and not through a dissolution−precipitation mechanism at this temperature. The consistently larger crystallite size of Na 2 Nb 2 O 6 ·H 2 O compared to NaNbO 3 could thus be explained through a gradual conversion of the Na 2 Nb 2 O 6 · H 2 O particles to NaNbO 3 , by the expulsion of water from the structure.
Effects of Reaction Conditions on the NaNbO 3 Reaction Kinetics. The effects on the kinetics involved in the final step in the reaction scheme as well as the nucleation and growth of NaNbO 3 are presented in Figure 7. The kinetics were quantified using the Johnson−Mehl−Avrami (JMA) equation α = 1 − e −(Kt) n , where α is the phase fraction, K is a rate constant, and n depends on the transformation mechanism. 39,40 The JMA slope n and intercept K were calculated by transforming the phase fraction with the Sharp− Hancock method. 54 The JMA slopes (n) in Figure 7a are in the range of 2−4, with a clear difference between the reactions at temperatures above and below ≈285°C. Below ≈285°C, the n values are strongly dependent on the NaOH concentration, having larger values for 12 M solutions. Such a difference in the n value could imply that there is a more restricted nucleation and/or growth of NaNbO3 in 9 M compared to 12 M suspensions. 54 Above ≈285°C, the difference between the two NaOH concentrations appears to decrease, although this should be confirmed with more experiments. The JMA intercepts (ln K) are presented in an Arrhenius plot in Figure 7b and seem to be independent of NaOH concentration. Again, a clear difference is seen between the two regimes below and above ≈285°C, with significantly different Arrhenius slopes, from which the activation energy can be calculated. A significantly lower activation energy can be seen above ≈285°C (23.4 kJ/mol), compared to below ≈285°C (146.7 kJ/mol). This could be related to the less crystalline preexisting phase in the hightemperature region, giving a larger surface area and less rigid species for nucleation and growth.
By comparing the phase evolution in Figure 2 with the crystallite size in Figure 6, it is interesting that the three reactions resulting in the largest crystallite size were the reactions going through the poorly crystalline dehydrated Na 2 Nb 2 O 6 ·xH 2 O, x < 1, phase on their way to NaNbO 3 . These reactions also have a lower activation energy and JMA slope compared to the reactions where the highly crystalline Na 2 Nb 2 O 6 ·H 2 O phase is present. The low crystallinity of the dehydrated phase opens for a dissolution−precipitation-based transformation to NaNbO 3 , resulting in a decreasing crystallite size with decreasing temperature, which is what is observed.
The complex temperature-dependent reaction scheme and its effect on the kinetics and product crystallite size presented here underline the importance of in situ studies during the hydrothermal synthesis of oxides. The revealing of two distinctly different growth regimes may offer important insight when untangling the growth mechanisms, leading to the various sizes and morphologies resulting from the hydrothermal synthesis of NaNbO 3 .
■ CONCLUSIONS
The entire reaction scheme, including several known and unknown intermediate phases, was observed during the hydrothermal synthesis of NaNbO 3 , and through observations over a large temperature range for two different NaOH concentrations, the relationship between them has been established. Ex situ PDF indicated that the T-Nb 2 These fragments are the most likely building blocks for the subsequent formation of polyoxoniobates, whose structure depended on the NaOH concentration. Following these polyoxoniobates was Na 2 Nb 2 O 6 ·H 2 O, which appeared in a dehydrated form at temperatures ≥285°C, before converting into the final phase, NaNbO 3 . The total reaction rate increased with decreasing NaOH concentration and increasing temperature, due to the increased stability of the intermediate polyoxoniobate phases. The final NaNbO 3 particles had an orthorhombic structure with the Pbcm space group <340°C, and Pnma ≥ 340°C, showing suppression of the phasetransition temperature due to finite-size effects. Thermal expansion of the unit cell was observed, probably due to the elevated temperatures at which the measurements were performed.
Two distinctly different growth regimes for NaNbO 3 were observed, based on the observed phase evolution and the resulting growth kinetics of NaNbO 3 , for temperatures below and above ≈285°C. Below this temperature, the resulting crystallite size of NaNbO 3 was independent of the reaction temperature and the NaOH concentration due to NaNbO 3 growing at the expense of a highly crystalline intermediate phase, Na 2 Nb 2 O 6 ·H 2 O. A high activation energy of 146.7 kJ/ mol and pH-and temperature-dependent n-values were observed. When NaNbO 3 grew at the expense of a less crystalline dehydrated intermediate phase for temperatures ≥285°C, the resulting crystallite size was larger and showed a temperature-dependent trend typical for the dissolution− precipitation mechanism. The activation energy was significantly lower in this regime (23.4 kJ/mol) with n-values of ≈2.0−2.5. Inorganic Chemistry pubs.acs.org/IC Article the two reactions at 160°C with linear y-axis; pairdistribution functions and reduced structure functions from the in situ total scattering experiments; fits of niobate and hexaniobate structures to selected pairdistribution functions; and WAXS data recorded simultaneously with our SAXS data (PDF) NaNbO 3 _285C_12M (MP4) | 7,401.8 | 2021-03-23T00:00:00.000 | [
"Materials Science"
] |
Importance of N-Glycosylation on CD147 for Its Biological Functions
Glycosylation of glycoproteins is one of many molecular changes that accompany malignant transformation. Post-translational modifications of proteins are closely associated with the adhesion, invasion, and metastasis of tumor cells. CD147, a tumor-associated antigen that is highly expressed on the cell surface of various tumors, is a potential target for cancer diagnosis and therapy. A significant biochemical property of CD147 is its high level of glycosylation. Studies on the structure and function of CD147 glycosylation provide valuable clues to the development of targeted therapies for cancer. Here, we review current understanding of the glycosylation characteristics of CD147 and the glycosyltransferases involved in the biosynthesis of CD147 N-glycans. Finally, we discuss proteins regulating CD147 glycosylation and the biological functions of CD147 glycosylation.
Alteration in glycans of glycoproteins and glycolipids is a significant characteristic of tumor malignant transformation, and is closely associated with the adhesion, invasion and metastasis of tumor cells [28]. CD147 is post-translationally modified through N-glycosylation. Investigations into CD147 glycosylation have clarified its role in numerous physiological and pathological events. This review focuses on recent progress on the structural and biological characteristics of CD147 glycosylation and recapitulates glycosyltransferases involved in the biosynthesis of CD147 asparagine-linked oligosaccharides (N-glycans) to seek future therapeutic strategies for CD147-associated diseases.
Structure of CD147
CD147 consists of a 21 amino acid (aa) signal peptide, a 185 aa extracellular domain (ECD), a 24 aa transmembrane domain, and a 39 aa cytoplasmic domain [5]. Four cysteines (41, 87, 126, and 185) located in the extracellular region form two typical IgSF domains [6], which share deep homology to the IgVκ domain and the β-chain of MHCII antigens [29]. The N-terminal domain is responsible for counter receptor activity and protein oligomerization [30,31]. The C-terminal domain is responsible for association with caveolin-1 [32], integrins (α3β1 and α6β1) [24,33], and annexin II [34]. The transmembrane domain possesses a series of conserved hydrophobic amino acids except a rarely occurring charged glutamic acid (218) in transmembrane proteins [5]. The transmembrane domain exhibits affinity toward other proteins, such as cyp60 [35], CD43 [36], and syndecan [37], thus, eliminating the high-energy charge. Moreover, it possesses a typical leucine zipper motif containing three leucines (206, 213 and 220) and a phenylalanine (227) appearing at every seventh residue, which facilitates membrane-protein associations and diverse cellular signal pathways [9,13]. The highly conserved intracellular domain of CD147 plays a pivotal role in association with MCTs (MCT1, MCT3 and MCT4) [38], although it has not been well explored.
Alternative splicing and alternative promoters result in four isoforms of CD147. Among them, basigin-1 is a retina specific CD147 containing an additional unglycosylated domain [39,40]; basigin-3 or basigin-4, less expressed in normal and tumor human tissues, contains a single extracellular domain (IgI), and basigin-3 serves as an endogenous inhibitor of basigin-2 via hetero-oligomerization. Both basigin-3 and basigin-4 have HG (highly glycosylated) and LG (lowly glycosylated) forms as observed in basigin-2 [41]. However, the knowledge of glycosylation of the above-mentioned scarce isoforms is limited. Given that the ubiquitously expressed basigin-2 mediating matrix metalloproteinases (MMPs) production is profoundly explored, we will concentrate on the glycosylation of basigin-2 in the following discussion.
The crystal structure of the ECD of CD147 (Figure 1) was revealed by X-ray analysis [42]. CD147 crystallizes in space group with four monomers in the asymmetric unit. Each monomer consists of a typical N-terminal IgC2 set immunoglobulin domain 1 and a typical C-terminal IgI set immunoglobulin domain 2 (107-205), which are connected by a 5 aa flexible linker responsible for diverse inter-domain angles within the four monomers. This unique C2-I domain arrangement distinguishes CD147 from all other IgSF proteins with known structures. With edge-by-edge packing and association of β-sheets, monomer interaction leads to two types of dimerization: C2-C2 dimerization (BC, AC and DD' dimers) symbolizing a trans-cellular homophilic interaction between two CD147 molecules on neighboring cell membranes and C2-I dimerization (AD dimer) representing a heterophilic interaction between CD147 and other IgSF proteins. These dimers further adhere to each other by sharing some conserved β-strands at either edge of the β-barrels [42]. A further structure analysis of domain 1 illustrated that it formed a dimer through the exchange of its β-strand (strand G) [43]. Oligomerization contributes to CD147's functions, including counter-receptor binding, association with other proteins, and MMPs induction [44,45]. The N-terminal domain is a typical C2 set immunoglobulin domain consisting of a β-barrel formed by the sheets EBA and GFCC' and a conserved disulfide bond between strands B and F. The C-terminal domain is a typical I set immunoglobulin domain formed by the β-sheets DEBA and A'GFCC' and a disulfide bond between Cys126 and Cys185 connecting strands B and F together. One N-linked glycosylation site, Asn44, lies at the end of strand B, i.e., the outermost position of the EBA sheet. The other two sites, Asn152 and Asn186, locate at the middle of C'D loop and strand F, respectively, with their lateral chains protruding oppositely from A'GFCC' and DEBA sheets [42]. The figure is generated using the GlyProt software program [46,47] and we select oligomannoses on behalf of the potential diverse glycan structures to create the 3D protein structure of CD147.
The Glycosylation Characteristic of CD147
The overwhelming majority of studies showed that CD147 is a N-linked glycosylated protein except one study by Fadool et al., which demonstrated that chicken 5A11/HT7 antigen of neural retina and epithelial tissues contains both N-linked and O-linked oligosaccharides [13]. Members of CD147 family, EMMPRIN, basigin, 5A11/HT7 for instance, from different species, tissues or cells appear as diverse glycosylated forms, with large variation in molecular weight [32,[48][49][50]. In this review we focus on the N-glycosylation of CD147. The unglycosylated CD147 has a molecular weight of 27 kDa, whereas the glycosylated form has a molecular weight between 43 and 66 kDa [7,15,[48][49][50]. Treatment with different endoglycosidases indicates that N-glycans contribute to almost half the size of the mature molecule [7,51].
Combining with site-specific mutagenesis study, the sequence alignment demonstrated that there were three conserved Asn glycosylation sites across species in the ECD of CD147 [5,32,42]. Mutation of three N-glycosylation sites (N44Q, N152Q, and N186Q) caused an approximately equal decrease in the molecular weight of HG-CD147 and LG-CD147, suggesting that they make comparable contributions to CD147 glycosylation [32]. The study unraveling the crystal structure of CD147 provided the proof of the spatial position of three glycosylation sites: Asn44 at the end of strand B, Asn152 and Asn186 at the middle of C'D loop and strand F (Figure 1), respectively [42].
HG-CD147 and LG-CD147
A distinct feature of CD147 from various cells and tissues is that based on the degree of glycosylation, two bands were shown on the result of Western blotting, suggesting that CD147 exists in two forms: HG-CD147 (~40-60 kDa) and LG-CD147 (core-glycosylated CD147, ~32 kDa). HG-CD147 contains complex-type carbohydrate that is sensitive to Peptide N Glycosidase F (PNGase F), whereas LG-CD147 contains high-mannose carbohydrate that is sensitive to Endoglycosidase H (endo H) [32]. In terms of the general process of protein glycosylation [52] and the characteristic of CD147 glycosylation [32,53], we may conclude that nascent peptide of CD147 receives preliminary glycosylation in the ER (Figure 2), forming an immature high mannose form (LG-CD147), and in the Golgi complex CD147 is further modified to form more complicated branching carbohydrate chains and specific terminal structures by a series of glycosyltransferases. Subsequently, the fully glycosylated mature CD147 (HG-CD147) is translocated to plasma membrane. In this context, LG-CD147 is the precursor of HG-CD147 in the ER, which requires an additional modification in the Golgi prior to express on the cell surface [32]. Different cell types have different HG/LG ratio and it is reported that both HG-CD147 and LG-CD147 could be detected on the plasma membrane [32], but there are also studies revealing that only fully glycosylated CD147 could be found on plasma membrane in hepatoma tumor cells [53] and COS-7 cells [30]. As a transmembrane protein, HG-CD147 on the plasma membrane is considered to be the biological functional form. Comparatively, whether LG-CD147 stably existing within the hepatoma tumor cells participates in other cellular physiological functions remains to be further investigated.
Figure 2.
Intracellular biosynthesis and trafficking of glycosylated CD147. Immature high-mannose form of CD147 is modified in the ER, during which 1 glycans on the Asn152 are essential for quality control. Misfolded proteins without Asn152 glycosylation are degraded through ERAD pathway [53]; 2 A part of LG-CD147 then enter the Golgi while 3 the majority of newly produced LG-CD147 are degraded by the proteasome via the OS-9/SEL1L/Hrd1 pathway [54]. In the Golgi complex, LG-CD147 is further modified by many glycosyltransferases including GnT-III, GnT-IV, GnT-V and FuT-8 to form more complicated branching carbohydrate chains [53,55]. Subsequently, terminal modifications such as sialic acids are added to CD147 [56]; 4 Caveolin-1 binds to LG-CD147 in the Golgi, inhibits its maturation and escorts it into the cell membrane.
LG-CD147 on the membrane fails to self-associate and induce MMPs [32]. However, it is also reported that caveolin-1 facilitates CD147 maturation [57]; 5 Then, HG-CD147 translocates to plasma membrane during which cyp60 in the Golgi is one of chaperones facilitating the translocation of CD147 [35]. Mature CD147 on the cell membrane form oligomers and a small fraction of transmembrane CD147 are shed and released into the extracellular matrix to act on neighbouring cells. Both forms of mature CD147 are capable of inducing MMPs; 6 MCT is one of ancillary proteins that accompany CD147 during its maturation in the ER and they form CD147-MCT complex on the membrane bearing the double roles of MMPs induction and lactic acid exportion [58,59].
The Structure of the Oligosaccharides of CD147
CD147 is a transmembrane glycoprotein expressed on various tumor cells. Disclosing the structure of the oligosaccharides of CD147 from tumor tissues will provide valuable clues for the development of novel therapeutic modalities against tumor. However, due to the difficulties in purifying enough native transmembrane proteins from tumor tissues, determining the N-glycan profiles of CD147 by mass spectrometry analysis is a challenge. In a recent study, native CD147 was purified from lung carcinoma tissue specimen from a patient by immunoaffinity chromatography using mAbHAb18, and the structures of N-glycans of CD147 have been characterized by means of Nanospray Ionization-Linear Ion Trap (NSI-MS) [53]. The results showed that purified CD147 exhibited both high-mannose type and bi-antennary complex-type oligosaccharides, which was in accordance with the glycosidases treatment results of Yu et al. [51]. Moreover, the presence of β1,6-branched oligosaccharides on CD147 was confirmed by lectin blotting carried out with Phaseolus vulgaris Leukoagglutinin (L-PHA) [32,53]. Fan et al. found that Phaseolus vulgaris Erythroagglutinin (E-PHA) also bound to CD147 immunoprecipitated from mouse hepatocarcinoma cells, indicative of bisecting structures in N-glycans of CD147 [55]. In addition, these glycans can be fucosylated and sialylated. It is noteworthy that native CD147 from human lung cancer tissue contained a high percentage of core fucosylated structures (28.8%) [53]. Miyauchi et al. discovered that Lotus tetragonolobus agglutinin (LTA) bound to CD147 from embryonal carcinoma cells and Kato et al. found that CD147 served as a ligand for E-selectin which recognizes sialylated glycans, such as Lewis X (sLex), both implying the sialyl Lewis X structure, namely, Galβ1, 4Fucα1,3GlcNAc, in N-glycans of CD147 [27,29]. In addition, a further study by Yang et al., who identified sialoglycoproteins in the cell surface of prostate cancer cell ML-2 by mass spectrometry analysis, also revealed that CD147 was one of the metastasis-related sialylated proteins [56]. Thus far, the existence of β1,2-branching structures in CD147 glycosylation has not been reported. In-depth mass spectrometry analysis, such as to characterize glycans on all the three N-glycosylation sites on CD147 and to disclose the differences of N-glycosylation between the CD147 from normal tissues versus tumor tissues will improve our understanding of the biological role of aberrant N-glycans on CD147 during cancer progression.
Glycosyltransferases Involved in the Modulation of CD147 N-Glycans
Branched N-glycans are biosynthesized by glycosyltransferases, such as GnTs (N-acetylglucosaminyltransferases), Futs (fucosyltransferases), GalTs (galactosyltransferases) and STs (sialytransferases) in the ER and the Golgi apparatus. Based on the N-glycan profiles of CD147 described above, many glycosyltransferases have been considered to play important roles in the biological functions of CD147 ( Figure 3).
The absence or redundancy of glycosyltransferases may produce abnormal carbohydrate chains. The modulation of the N-glycans of cancer-associated proteins by these enzymes alters cell behaviors, such as cell signaling and cell adhesion, which are implicated in tumor invasion and metastasis [60]. In terms of CD147's functions during tumor progression, the colorectal carcinoma progression is owing to the up-regulation of CD147 without the alteration of its glycosylation [61]; however, in other conditions, researchers observed the anomalous glycosylation or the combination of the two changes [57,[62][63][64]. Considering this, both the quantity and the quality of CD147 should be taken into consideration, and the aberrant glycosylation of CD147 by corresponding enzymes may deserve more attention during tumor metastasis. Figure 3. Potential glycan structures of CD147 and corresponding enzymes. In the medial-Golgi compartment, GnT-IV catalyzes β1-4 branch on complex N-Glycan structures, while GnT-III and GnT-V catalyze the formation of bisecting structure and β1-6 branch, respectively. The core fucose structure is catalyzed by FUT8. Then CD147 enters the trans-Golgi apparatus and receives sialic acid modification by sialyl transferase [53,55,56].
GnTs
As key glycosyltransferases regulating the formation of periphery multi-antennary structures, members of GnT family facilitate the formation of the N-linked oligosaccharides from the high mannose type to the complex type through the hybrids type by adding N-acetylglucosamine (GlcNAc) antennaes in the medial-Golgi apparatus [65]. The roles of GnT-III, GnT-IV, and GnT-V in CD147 glycosylation will be discussed in the following part, but it has not been reported yet whether GnT-I and GnT-II both catalyzing β1,2GlcNAc branch formation, GnT-VI catalyzing β1,4GlcNAc branch formation, and GnT-IX catalyzing β1,6GlcNAc branch formation [52] participate in CD147 glycosylation.
GnT-V, located in the medial/trans Golgi, catalyzes the formation of β1,6GlcNAc branch on the trimannosyl terminus of N-glycans, the product of which can be further extended with poly-N-acetylgalactosamine (GalNAc) chains and then terminally modified with sialylated structures [66]. Overexpression of GnT-V in tumor cells leads to aberrant β1,6-branching, which contributes to tumor progression [67]. To be specific, increased β1,6-branching of N-linked glycans is highly associated with various biological functions of some molecules, thereby affecting cancer metastasis. E-cadherin, integrins, matriptase and TIMP-1 (tissue inhibitor of matrixmetalloproteinase-1) are representative molecules glycosylated by GnT-V [68][69][70][71]. In a recent study, it was evidenced that GnT-V is crucial for the function of CD147 in SMMC-7721 cells. Functional studies in GnT-V over-expressing cells showed a significant increasing in MMP-2 activity. Moreover, the results also indicated that CD147 is a target protein of GnT-V through which GnT-V promotes tumor metastasis [53].
GnT-III catalyzes the addition of bisecting GlcNAc structures to N-glycans via β1,4-linkage, the product of which suppresses the action of GnT-V, thus preventing the metastatic capability [72].
A previous study suggested the existence of bisecting structures in N-glycans of CD147 in mouse hepatoma cells, indicating that GnT-III may be involved in the glycosylation of CD147 [55], so its role in the biological functions of the protein merits further exploration.
GnT-IV transfers the β1,4GlcNAc branch on the core structure of N-glycans, the product of which is a substrate for GnT-III and GnT-V [73,74]. Both hepatoma and choriocarcinoma tissue represented an up-regulated GnT-IV activity, and human chorionic gonadotropin (hCG) from choriocarcinoma exhibited aberrant β1,4GlcNAc branch, suggestive of the role of GnT-IV during tumorigenesis [75][76][77]. Fan et al. found that up-regulated expression of GnT-IVa (an isoenzyme of GnT-IV) in Hepa1-6 cells increased the antennary branches and reduced bisecting branches of the N-glycans of many proteins, thus enhancing tumor migration. Overexpression of GnT-IVa also increased the HG/LG ratio of CD147 and changed the antennary oligosaccharide structures on CD147 in mouse hepatoma cell lines, suggesting that CD147 may be a target protein through which GnT-IVa modulates tumor metastasis [55].
FUT8
Core fucosylation (α1,6-fucosylation) is catalyzed by fucosyltransferase 8 (FUT8) which adds a fucose residue to the reducing terminal GlcNAc of the core structure on N-glycans via α-1,6 linkage. Core fucosylated proteins play an essential role in tumorigenesis, tumor invasion and angiogenesis [78,79]. The α1,6-fucosylation is essential for integrin α3β1 and E-cadherin mediated cell migration and enhances epidermal growth factor receptor (EGFR) mediated cell invasion by promoting its dimerization and phosphorylation [80][81][82]. The aberrant α1,6-fucosylation of molecules, such as CK8, annexin I, and annexin II, is involved in the metastasis of hepatocellular carcinoma [83]. The results from NSI-MS analysis of the N-glycans of CD147 revealed a high percentage of core fucose structure in human non-small cell lung cancer (NSCLC) tissue [53], suggesting a plausible role of fucosylated CD147 in tumor invasion, which could be a potential indicator for the prognosis of NSCLC.
Sialyltransferase
Sialic acid is a kind of acidic monosaccharide typically found at the terminus of N-glycans, catalyzed by sialyltransferase in the trans-Golgi apparatus. Sialyltransferase catalyzes structures of numerous antigens, such as Tn (sTn), polysialic acid (PSA), and sLex, which have been adopted as an effective indicator in the clinical diagnosis of tumor [84][85][86]. As the ligand of selectin, sialyl acid modified antigens mediated the adhesion between tumor cell and other cell types, such as platelet, leukocyte, and vascular endothelial cell [87]. As mentioned above, N-glycans of CD147 contain sialyl Lewis X structure [27,29,56]. Functional importance of CD147 sialylation and fucosylation in cancer progression should be further explored.
Proteins Regulating the Glycosylation of CD147
The unique structure characteristic of CD147 facilitates its interactions with various proteins such as cyclophilins, MCTs, presenilins, and caveolin-1. Some proteins have been well accepted as regulators in the process of CD147 maturation and translocation to cell surface ( Figure 2). As a scaffolding protein, caveolin containing cholesterol and glycosphingolipid components within the plasma membranes mediates processes such as caveolae biogenesis, transmembrane transport, signal transduction, and tumorgenesis [88]. Intriguingly, its role in regulating the conversion of LG-CD147 to HG-CD147 is inconsistent. By binding to the IgI domain of CD147, caveolin-1 associates with LG-CD147 during glycosylation process in the Golgi apparatus and escorts it to the plasma membrane, thus inhibiting the conversion of LG-CD147 to HG-CD147 and CD147 oligomerization at the cell membrane [32,89]. Furthermore, caveolin-1 associates with GnT-III and regulates its localization within the Golgi complex, which enhances GnT-III's activity and, hence, prevents the action of GnT-V [90]. However, there is no direct evidence indicative of functional interaction between caveolin-1 and CD147 in normal and bleomycin-induced rat fibrotic alveolar cells [91]. In addition, Jia et al. has demonstrated that caveolin-1 enhances the HG/LG ratio and invasive ability of mouse hepatoma cells [57], suggesting the dual character of caveolin-1 in tumor migration. Apart from enhancing β1,6-branching in complex and hybrid N-glycans [57], caveolin-1 also up-regulates α-2,6-sialyltransferase I (ST6Gal-I) expression and then promotes the α2,6-sialylation of integrin, thus increasing tumor cell adhesion to extracellular matrix (ECM) [88,92].
Apart from caveolin-1, MCTs are also regarded as regulators of the glycosylation and trafficking of CD147. Tumor cells exhibit a high rate of glycolysis under both oxygen deficit and enriched circumstances to guarantee continuous energy supply and immoderate tumor growth, respectively. The metabolic byproducts of glycolysis, for example, lactic acid, accumulate in the cytoplasm and trigger apoptosis. MCTs mediate proton-coupled transportation of monocarboxylic acids and glycolytic byproducts out of the cells. The secreted lactates contribute to an acid microenvironment, which promotes invasion, metastasis and drug resistance of tumor cells [93]. As a chaperone, CD147 tightly binds to MCTs (MCT1, MCT3, and MCT4) in the ER during their trafficking to the cell surface, and forms a functional complex with them on the membrane by which MCTs mediate the transportation of monocarboxylic acids [58,[94][95][96]. On the other hand, CD147 maturation is also dependent on its association with MCTs. Knocking down MCT4 in breast cancer cells and MCT1 in intestinal epithelial cells both led to the reduction in the expression of fully glycosylated CD147 and the accumulation of core-glycosylated CD147 in the ER, implicating that MCTs (MCT1, MCT4) regulate the maturation and trafficking of CD147 [58,59]. Above all, MCTs and CD147 cooperate with each other to enhance tumor progression through creating acid microenvironment and the degradation of ECM.
In addition, it is reported that cyp60, a member of the cyclophilin family serving as receptors for the immunosuppressive drug cyclosporin A (CsA) and regulating protein trafficking [26], is a chaperone during the transportation of CD147 from the lumen of Golgi to the plasma membrane by binding to the Pro211 at the interface between the transmembrane and extracellular domains of CD147 [35,97].
Amyloid β-peptide (Aβ) sedimentation is significantly implicated in the progression of Alzheimer's disease (AD). It is produced from amyloid precursor protein (APP) after sequential proteolytic processes by β-and γ-secretase. γ-Secretase is a multimeric aspartyl protease consisting of at least four subunits, among which presenilin-1 or -2 (PS1 or PS2) provides the catalytic aspartyl residue [98]. Recent studies showed that as a γ-secretase associated protein, CD147 was up-regulated in several brain tissues of AD patients. Moreover, intracellular trafficking of CD147 was affected by PS2 [99,100]. The results of immunofluorescence staining suggested that in PS2-deficient cells, CD147 located around the nucleus instead of expressing on the cell surface, which was involved in the mechanisms of AD [100]. On one hand, the inhibition of CD147 maturation may reduce the production of MMPs and subsequent clearance of Aβ by proteolysis; on the other hand, since CD147 is a regulating subunit of γ-secretase, immature CD147 may attenuate γ-secretase activity and lead to Aβ sedimentation [99,100]. Detailed mechanisms underlying CD147's association with γ-secretase in AD remain to be investigated.
The Implication of HG/LG Ratio in Physiological and Pathological Processes
As an inducer of MMPs, CD147 participates in numerous physiological processes, and the glycosylation level of CD147 is regulated by the rhythm of hormones secretion. The HG/LG ratio significantly increases in chorio-decidua and amnion during term labor compared with nonlabor stage, with the total amount of CD147 remaining unchanged. CD147 together with subsequent MMPs production facilitates the placenta and fetal membranes to separate from the maternal uterus [101]. During the menstrual cycle, the expression and the glycosylation of CD147 in human endometrium exhibit a cyclical fluctuation and are enhanced by progesterone to degrade endometrial ECM in the secretion phase, which is an essential mechanism of menstrual endometrium remodeling [102].
Researchers have been concerned about its role in non-tumor diseases. The glycosylation of CD147 mediates IL-13 induced MMPs expression in epithelial airway cells through interaction with caveolin-1, triggering the development of asthma [103]. Different glycosylated forms of CD147 produce different types of MMPs, thus, determining the stability degree of atherosclerotic plaque, and HG-CD147 is associated with unstable plaque phenotype [104]. HG-CD147, together with MMP-1 expression, is also up-regulated in chronic periodontitis tissue [105].
Apart from non-tumor diseases, the HG/LG ratio also carries significant implications in neoplastic disease. Jia et al. found hepatoma carcinoma cell lines with higher lymphatic metastasis ability exhibited a higher HG/LG ratio than those with low or no lymphatic metastasis ability [106]. Moreover, Beesley and co-authors also found that HG-CD147 was closely related to acute lymphoblastic leukaemia and its relapse [62]. Aberrant glycosylation of CD147 is also involved in the multidrug resistance in human leukemia [107].
CD147 Glycosylation and MMPs Induction Activity
The role of N-glycosylation in CD147-dependent MMP production is controversial. Both purified glycosylated recombinant CD147 from CHO cells and purified native CD147 from tumor cells directly promoted MMPs production [31,108]. In addition, Sun et al. found that purified deglycosylated CD147 by tunicamycin treatment from HT1080 cells failed to produce MMP-1 and MMP-2 [31]. However, in contrast to Sun's result, the unglycosylated recombinant CD147 obtained by Belton could bind to the CD147 on the surface of uterine fibroblasts, and then induce MMPs expression. This homo-interaction of CD147 was not dependent upon the glycosylation of CD147 ligand [109]. In a recent study, we compared the efficacy of glycosylated and unglycosylated CD147, and found that both produced MMPs, but eukaryotic native CD147 stimulated MMPs production more efficiently than prokaryotic recombinant CD147, convincing that carbohydrates do contribute to CD147's activity [53].
The synthesis technique of peptide thioester carrying N-linked core pentasaccharide by Toole BP and co-authors provided an effective way to elucidate the role of CD147 glycosylation [110,111] and they demonstrated that IgC2 synthesized by the thioester method substituted with a chitobiose unit, IgC2-(GlcNAc) 2 , instead of IgC2 alone or the chitobiose unit alone, mimicked CD147's MMP-2 induction capability in human fibroblast cells, with the underlying assumption that the hydrogen bonds between amino acids and the chitobiose unit may help preserve an active molecular conformation [112]. Toole also suggested another possible mechanism through which the glycosylation of CD147 engaged in MMPs production, that is, carbohydrate lateral chains of CD147 may be involved in its binding to the fibroblast receptor and subsequent signal transmission into the cell [113]. A recent study performed by Papadimitropoulou et al. comparing the MMP-2 induction ability of ECD, domain 1 and domain 2 of CD147 in both glycosylated and unglycosylated forms demonstrated that only glycosylated forms were able to stimulate MMP-2 production, further verifying N-glycosylation is a prerequisite for the activity of CD147 [114].
CD147, like other Ig-containing molecules, interacts homotypically. The role of glycosylation in the oligomerization of CD147 remains unsettled. Previous studies indicated that HG-CD147 instead of LG-CD147 became self-associated, which was demonstrated by anti-CD147 mAb immunoprecipitation, caveolin-1 treatment and covalent cross-linking agent treatment [32,89]. However, Yoshida et al. believed that N-glycosylation was not involved in homophilic cis-interaction of CD147 [30]. The crystal structure resolved by our lab revealed that the recombinant CD147 in crystal formed oligomers and the three glycosylation sites were distant from the dimer interface [42], suggesting a rare possibility that glycosylation participates in the oligomerization process. We further proved that Lys63 and Ser193 instead of the glycosylation sites were essential to CD147 dimerization [45]. Furthermore, the recombinant prokaryotic CD147 in solution were also oligomers [109]. However, Schlegel et al. demonstrated that extracellular domains of CD147 were monomeric in solution [115]. The results in our previous study proved that although prokaryotic CD147 could form oligomers in a glycan-independent manner at a low level, glycosylation could enhance the oligomerization of eukaryotic CD147 and all the native eukaryotic CD147 in solution formed oligomers [53]. The mechanism how glycosylation enhances the oligomerization of CD147 is unknown, and we reason that glycans stabilize the advanced protein conformation of CD147, which is an active state to induce MMPs production.
Role of N-Glycosylation in CD147 Maturation
N-linked glycosylation plays important roles in many aspects of intracellular protein biosynthesis, such as protein folding, quality control, oligomerization and transport. However, the molecular mechanisms remain unclear. Exploring the role of the conserved glycosylation sites leads to a better understanding of the underlying mechanisms. Importance of certain N-glycosylation sites in protein maturation and activity was found in Tyrosinase related protein (TRP) family and α5 subunit of integrin [69,116].
As a transmembrane protein, both CD147 on plasma membrane and a small fraction of extracellular secreted CD147 are capable of inducing MMPs. Current studies suggest two possible mechanisms through which CD147 are secreted from cell surface: vesicle shedding and proteolytic cleavage, which produce full-length soluble CD147 and CD147 lacking transmembrane or cytoplasmic domain, respectively [117][118][119][120]. As mentioned above, CD147 on the plasma membrane and in cell conditioned medium are fully glycosylated mature form [30,53], implying that the glycosylation of CD147 may be essential for its translocation to the cell surface. Site-specific mutagenesis experiment verifies that only initial N-glycans on Asn152 play a vital role in the quality control of CD147 in the ER and determine its cell surface expression and activity. We reason that N-glycans on Asn152 may directly participate in the protein folding or is significant for the interaction between CD147 and partner proteins in protein folding, such as calnexin, calreticulin, and BiP [53]. Considering the high conservative property of the three sites across species, we believe that all the glycosylated sites may be vital for CD147. The functional diversities of each site remain to be clarified in the future.
Aberrant glycosylated CD147 by mutating the glycosylated site Asn152 retained in the ER are degraded through ER associated protein degradation (ERAD) pathway [53]. However, under normal circumstances, LG-CD147 is also superabundant owing to its continuous transcription [121]. This noticeable overproduction of CD147 ensures the interaction of CD147 and other proteins and the exertion of protein functions. For example, the association between CD147 and MCTs facilitates MCTs assembly and trafficking to the cell surface, which are only up-regulated during cell adaptation to glycolysis [58,94,121]. Tyler et al. further elaborated the ERAD pathway of the excessive LG-CD147. By mass spectrometry analysis they identified endogenous LG-CD147 in the ER as a substrate of proteasome, which was degraded via OS-9/SEL1L/Hrd1 pathway, a possible fundamental degradation manner of CD147 [54].
Role of N-Glycosylation in the Interaction of CD147 and Other Proteins
Glycosylation is involved in protein interaction. For example, the N-glycosylation of CD44 is crucial for its binding to E-selectin, and the O-glycosylation of P-selectin glycoprotein ligand-1 (PSGL-1) enhances the binding of PSGL-1 to E-selectin and P-selectin [122,123]. It has been discussed previously that many molecules regulate the maturation of CD147. On the other hand, CD147 glycosylation also regulates its association with its partner molecules. Kato and co-authors reported that CD147 on the cell surface of neutrophils bound to E-selectin during leukocyte infiltration in the renal inflammation, and CD147 glycosylation is essential for the interaction since tunicamycin treatment to inhibit the N-glycans of CD147 from HL-60 cells reduced this interaction [27]. However, Tang's study demonstrated that deglycosylation of CD147 resulted in increased interaction between CD147 and caveolin-1, suggesting that CD147 glycosylation interferes its interaction with caveolin-1 [32]. The possible role of the N-glycosylation of CD147 in its interaction with other proteins, such as integrins, MCTs, and cyclophilins, remains to be investigated.
As shown in the crystal structure of CD147 [42], the unique domain arrangement, which is responsible for the flexibility to interact with different ligands and diverse dimerization manners, is one structure basis for its multifunction character. At present, we conclude that another contributing factor is the distinct glycosylation feature of the molecule. Post-translational modification of CD147 modulates its biological functions in many aspects, including affecting protein maturation and translocation to the cell membrane, facilitating oligomerization and, hence, promoting MMPs production and tumor metastasis. In addition, N-glycans of CD147 also participate in the interaction with other proteins and exert corresponding biological effects.
Conclusions
As a highly glycosylated transmembrane adhesion molecule, CD147 plays a comprehensive role in many physiological and pathological processes. The applications of NMR, X-ray diffraction and structure-function studies by site-directed mutagenesis have illustrated the structure of CD147 and the mechanisms of the interaction of CD147 and other molecules, as well as CD147 itself, which underlies its various functions. Meanwhile, in this post-genomic era the studies on the characteristics of CD147 N-glycosylation highlight its importance. Given that the structure of the oligosaccharides and their functions have only been partly unveiled, further studies are required to elucidate molecular mechanisms underlying the effects of N-glycans on the functions of CD147 in cancer biology, to disclose the distinct oligosaccharides structures on its three glycosylation sites and their respective functions and to confirm whether aberrant glycans on CD147 could be used as a marker to predict clinical prognosis of cancer or drug resistant response of cancer therapy. We envision that this knowledge will provide direct and convincing evidence for the development of novel therapeutic perspectives, such as antibody drugs and small molecule antagonists targeting aberrant N-glycan structures in the treatment of CD147-associated diseases. It is noteworthy that Licartin, the 131 I-labeled CD147 mAb developed in our laboratory, has been applied safely and effectively in the treatment of patients with hepatocellular carcinoma [124,125]. It is reported that 41% of antibodies to a cancer cell recognized carbohydrate epitopes [126], thus, whether the sialyl Lewis structures and other carbohydrate components of CD147 glycosylation are involved in the interaction between Licartin and CD147 awaits investigation. More innovative drugs specifically targeting CD147 with higher efficacy will be discovered in the future. | 7,417.2 | 2014-04-01T00:00:00.000 | [
"Biology",
"Chemistry"
] |
Is similarity in Major Histocompatibility Complex (MHC) associated with the incidence of retained fetal membranes in draft mares? A cross-sectional study
The failure of the maternal immune system to recognize fetal antigens and vice versa due to MHC similarity between the foal and its dam might result in the lack of placental separation during parturition in mares. The aim of the study was to investigate the influence of MHC similarity between a mare and a foal on the incidence of retained fetal membranes (RFM) in post-partum mares. DNA was sampled from 43 draft mares and their foals. Mares which failed to expel fetal membranes within three hours after foal expulsion were considered the RFM group (n = 14) and mares that expelled fetal membranes during the above period were the control group (n = 29). Nine MHC microsatellites of MHC I and MHC II were amplified for all mares and foals. MHC compatibility and MHC genetic similarity between mares and their foals was determined based on MHC microsatellites. The inbreeding coefficient was also calculated for all horses. The incidence of RFM in the studied population was 33%. Compatibility in MHC I and MHC II did not increase the risk of RFM in the studied population of draft mares (P>0.05). Differences in MHC similarity at the genetic level were not observed between mare-foal pairs in RFM and control group (P>0.05). We suspect that RFM in draft mares may not be associated with MHC similarity between a foal and its dam. Despite the above, draft horses could be genetically predisposed to the disease.
Introduction
The equine placenta is composed of maternal endometrial tissues and fetal allantochorionic tissues [1]. A partial or complete failure of the allantochorion to detach from the endometrium within 3 hours after foal delivery is a condition known as retained fetal membranes (RFM) and is frequent in post-partum mares [2,3]. Interestingly, up to 54% of Friesian and heavy draft type mares suffer from RFM after parturition, and they appear to be more susceptible to RFM than other breeds [2][3][4]. Despite the above, the etiology of RFM has not been completely elucidated. Parturition is often compared to a graft rejection-like reaction where the recognition of a foreign antigen triggers a characteristic outbreak of inflammatory processes [5]. At the end of pregnancy, the functioning of the maternal immune system changes, allowing for recognition of the fetal antigens expressed on fetal membranes [5,6]. A recent study demonstrated that the fetal immune system, although immature, is also able to initiate an immune response against maternal antigens [7]. Both maternal and fetal antigens may be presented by the Major Histocompatibility Complex (MHC) [8]. Studies evaluating the influence of MHC similarity/dissimilarity on the outcome of transplantation in humans indicate that MHC I and MHC II similarity between the graft and host prolongs the survival of transplanted organs. In contrast, MHC I and MHC II dissimilarity significantly shortens their lifespan [9][10][11]. A comparison of the above process to parturition indicates that the expression of dissimilar MHC antigens on the maternal and fetal placenta can trigger a graft rejection-like reaction which is associated with the characteristic inflammatory outbreak during parturition that ends with the expulsion of fetal membranes [5,6,12]. However, if fetal and maternal antigens are similar, they may not be recognized as foreign by the respective immune systems, which can impair the inflammatory process during parturition [12].
Studies of Dutch Friesians indicate that a high level of inbreeding in foals could be responsible for the high incidence of RFM in this breed [13]. High inbreeding in a population generally increases the probability of two individuals having the same alleles of a gene or genes, such as MHC [14]. MHC I was also found to be expressed on the full-term equine placenta [15].
To the best of our knowledge, the association between maternal and fetal MHC and the incidence of RFM has never been studied in draft horses. We hypothesized that MHC similarity between a mare and a foal would increase the risk of RFM in draft mares. As indicated above, the only study investigating the possible genetic component of the RFM occurrence in mares was conducted by Sevinga et al. [13], and the effect of the inbreeding coefficient on RFM was determined. Hence, to be able to refer to this study, the inbreeding coefficient of foals and their dams suffering from RFM, and foals and their dams not affected by the disease was compared.
Ethical note
Blood samples from mares and foals were taken during an annual parentage testing as required by studbook regulations. No experimentation was performed in view of European directive 2010/63/EU and the Polish laws related to ethics in animal experimentation. According to the European directive 2010/63/EU on the protection of animals used for scientific purposes chapter 1 article 1.5 "practices undertaken for the primary purpose of identification of an animal" do not need the approval of the Institutional Animal Care and Use Committee which was confirmed by the Ethics Committee for Animal Experimentation at the University of Warmia and Mazury in Olsztyn (LKE.065.07.2019). Owner of the animals informed consent and agreed on the use of blood samples.
Animals
The study was performed on 43 clinically healthy draft mares aged 4-15 years and their newborn foals. All horses were bred in the same stud farm under identical housing and feeding conditions and with equal access to veterinary care. Pregnancies and deliveries were physiological and were monitored by a veterinarian. The failure to expel fetal membranes within 3 hours after foal expulsion and a necessity for veterinary intervention based on a veterinarian's decision was regarded as RFM and treated. The mares were divided retrospectively in two groups: mares with RFM (N = 14) and control mares (N = 29).
DNA isolation
Jugular venipuncture to 8.5 ml blood tubes with ACD Solution A of trisodium citrate, 22.0g / L; citric acid, 8.0 g / L; and dextrose 24.5 g / L, 1.5mL as anticoagulant was used for blood samples collection.
RBC Lysis Solution (Qiagen, Hilden, Germany, #158902) was used to isolate peripheral blood lymphocytes. 1 ml of the buffy coat was transferred to 3 ml of RBC Lysis Solution in a conical tube, then incubated for either 5 or 20 minutes for adult horses or foals, respectively. In the next step, the tube was centrifuged at 1500 rpm for 5 minutes, and the supernatant was removed. Pellet of lymphocytes was washed with PBS three times, and during the last wash, cells were transferred to a 1.5 ml Eppendorf tube. Supernatant was removed, and the lymphocytes were snap-frozen in liquid nitrogen. Next, collected lymphocytes were transferred to the ultra freezer to-80˚C and stored in these conditions until DNA isolation.
DNeasy Blood & Tissue Kit (Qiagen, Hilden, Germany, #69506) was used to isolate genomic DNA according to the manufacturer's instructions. Obtained DNA was stored at -20˚C until further analysis.
Microsatellites typing
Nine MHC microsatellites were amplified in 3 multiplex PCRs. Their distribution on the equine chromosome 20 is shown in S1 Fig. Primers' sequences, fluorescent labels, amplicons' length as well as which microsatellites were amplified in multiplex 1, 2 and 3 are given in Table 1. 2 μl of genomic DNA, 6.25μl of DreamTaq PCR Master Mix (2X) (Thermo Fisher Scientific, Waltham, Massachusetts, USA, #K1072). 0.2 μl of fluorescently labeled forward and reverse primers in a concentration 5 μM each and H 2 O DD to a total volume of 14.5 μl was used per reaction well. PCRs were run in following conditions: 95˚C for 3 min followed by 35 cycles of 95˚C for 30 s, 60˚C for 30 s, 72˚C for 60 s and of 72˚C for 10 min for the final extension. Electrophoresis on 3% agarose gel with ethidium bromide was performed to confirm the specificity of microsatellites' amplification.
In the next step, 1μl of every PCR product was mixed with 14 μl of Hi-Di™ Formamide (Applied Biosystems TM , Foster City, California, USA, # 4311320) and 0.5 μl of GeneScan™ 500 LIZ™ dye Size Standard (Applied Biosystems TM , Foster City, California, USA, # 4322682) to final volume 15.5 μl on a 96-well plate. PCR products were then denatured at 95˚C for 5 min and placed immediately on ice. DNA fragments were separated and sized on 3500xL Genetic Analyzer capillary sequencer (Applied Biosystems TM , Foster City, California, USA, # 4322682)
Statistical analysis
Microsatellite analysis. Microsatellite loci were tested for deviations from the Hardy-Weinberg equilibrium and linkage disequilibrium with the use of GENEPOP v. 4.1 [18]. The frequencies of the null allele were analyzed using CERVUS v. 3.0 [19].
MHC compatibility. MHC compatibility was calculated for every mare and foal pair based on MHC microsatellite alleles. The analysis was performed separately for the microsatellites of MHC I and II class. The following categories of MHC compatibility were applied in this study (adapted from [12]): • Mare compatibility (MC)-the MHC alleles of the foal are compatible with the MHC alleles of the mare, i.e., the foal does not have any MHC alleles that are not present in the mare. The mare's immune system does not recognize the foal's MHC as foreign; • Foal compatibility (FC)-the MHC alleles of the mare are compatible with the MHC alleles of the foal, i.e., the mare does not have any MHC alleles that are not present in the foal. The foal's immune system does not recognize the mare's MHC as foreign; • Mare-foal compatibility (MFC)-the MHC alleles of the mare are compatible with the MHC alleles of the foal and vice versa. Neither the mare's or the foal's MHC is recognized as foreign by foal's or mare's immune system, respectively; • No compatibility (NC)-both the mare's and the foal's MHC alleles are recognized as foreign by either the foal's or mare's immune system, respectively If all loci in a given MHC class fell into an assigned compatibility category, the mare-foal pair was considered as compatible (MC/FC/MFC) in a given MHC class I, II). If one or more loci within an MHC class did not fulfill the assigned compatibility restrictions, the mare-foal pair was considered as incompatible (NC) in a given MHC class. The influence of MHC compatibility on the occurrence of RFM was analyzed in the R statistical package (R Development Core Team, 2013, http://www.R-project.org/). Logistic regression model was used with NC compatibility category as a referent.
MHC genetic similarity assessed by microsatellites. MHC genetic similarity was assessed by relatedness between the mare and the foal based on MHC microsatellites alleles obtained by microsatellite typing. The r xy statistic, which is regarded as an unbiased estimator of relatedness between two individuals, was used [20,21]. This estimator of relatedness accounts for the similarity in allele composition of two individuals by chance (identity by state; IBS) based on reference allele frequencies [20,21]. We hypothesized that genetic similarity between mares and foals from the RFM group would be higher hence, they will be more genetically similar in MHC than between mares and foals from the control group. Calculations of r xy were performed for every mare-foal pair based on obtained MHC I and MHC II alleles in Demerelate R package v. 0.9-3 [22] with the use of r xy function.
Next, the values of r xy were compared between RFM mares and their foals vs. the control group with the use of the Student's t-test and presented as means ± SD.
Inbreeding coefficient. For the calculation of the inbreeding coefficient (IF), pedigree data obtained from the database of Polish Horse Breeders Association (https://baza.pzhk.pl/) was used. IF was calculated for every mare and foal in CFC software [23]. The results have not been normally distributed. For that reason, IF of mares and foals from the RFM and control group were compared with the Mann Whitney U test. Results were expressed by median values (interquartile range), separately for mares and foals.
Statistical analyses were performed in PS IMAGO 5, IBM SPSS Statistics v.25 statistical package (IBM Corporation, Armonk, NY, USA). The results were regarded as significant at P<0.05. The normality of data distribution was tested with the Shapiro-Wilk test.
Incidence of RFM in mares
RFM occurred in 33% of post-partum mares in the studied population. One sample binomial proportion rate (Clopper-Pearson) for the proportion of RFM to the total number of mares equals 0.326 (95% CI; 0.191-0.485).
MHC I and MHC II alleles of all mares and foals participating in the study are shown in the S1 Table.
MHC compatibility and RFM
The mare-foal pairs in every MHC compatibility category are shown in Table 2. The logistic regression model demonstrated that none of the compatibility categories (MC, FC, MFC) in any MHC class (I, II) influenced the occurrence of RFM in the studied population of mares ( Table 3). The odds ratio (OR) 2.25 was determined for MFC in MHC I (CI 95%; 0.36, 13.42). However, it was not significant (P>0.05).
MHC genetic similarity assessed by microsatellites and RFM
There were no differences in pairwise relatedness r xy (t = 0.02, P>0.05) between RFM marefoal pairs and the control group mare-foal pairs. Mean relatedness r xy for RFM and control mare-foal pairs was identical at r xy = 0.52 ± 0.25.
Inbreeding coefficient
There was no difference in the IF of mares from the RFM and control group (U = 191, P>0.05) and foals from these groups (U = 189, P>0.05). Median IF of mares was 0.007 (0.01) and median IF of foals was 0.01 (0.03).
Discussion
Our findings suggest that RFM in the studied population of draft mares may not be associated with MHC similarity assessed by microsatellites between the mare and the foal. Approximately one-third of the mares were affected by RFM, however, MC, FC, or MFC did not significantly influence the incidence of the disease. Moreover, the genetic similarity assessed by microsatellites of mare-foal pairs in the MHC region did not differ between RFM and control mares. In contrast, Benedictus et al. [12] found that compatibility between the calf and its dam (MFC in our study) increased the risk of retained fetal membranes in cows. These variations could be attributed to differences in methodology. Benedictus et al. [12] assigned individuals to known MHC haplotypes based on the alleles obtained from DNA sequencing. These haplotypes were used for further calculations. However, MHC microsatellites are commonly used to evaluate the equine MHC, including studies where MHC compatibility between horses is of importance [24][25][26][27][28]. Unlike DNA sequencing, microsatellites are only indirect markers of MHC, but despite the above, they are regarded as reliable indicators of MHC haplotype and MHC genes sequencing [16,17,[24][25][26][27][28][29][30][31][32]. However, we acknowledge that microsatellite typing could be less effective than DNA sequencing in identifying possible differences and similarities in DNA. Grunig et al. [33] observed no differences in the maternal leukocyte response to invading trophoblasts in MHC-compatible and incompatible pregnancies in mares. Invasive trophoblast cells are referred to as the chorionic girdle, and they can be detected from around day 25 of pregnancy. Around day 30 of pregnancy, chorionic girdle cells begin to express MHC I of
PLOS ONE
fetal origin, which induces a leukocyte influx as part of the maternal immune response. The expression of MHC I decreases by day 45 [34]. In our study, parturition was normal in all studied mare-foal pairs, even when the mare and the foal were classified as MHC-compatible. In the control group, one mare had MFC in two MHC classes, and four mares had MFC in MHC class I. Despite the above, all control mares expelled fetal membranes physiologically. In the RFM group, five mares were incompatible in every MHC class, but they retained fetal membranes. The results reported by Grunig et al. [33] suggest that a maternal immune response could be induced even in an MHC-compatible pregnancy. Therefore it is possible that regardless of the applied method of MHC evaluation ( [33], this study), there were still differences between MHC of the foals and its dam that could provoke an immune reaction. Interestingly, the studied population was characterized by low IF values at around 0.01 for the foals and 0.007 for the mares, and no differences were noted between mares and foals from the experimental groups. In a study of Friesians, yet another breed that is highly susceptible to RFM, Sevinga [13] estimated IF values at 0.157 for foals and 0.145 for mares and reported a positive, but minor effect of IF on the incidence of RFM in that breed. The author suggested that the majority of Friesian mares and foals could be MHC-compatible due to the high values of IF. The IF of the draft and Friesian horses differs [13,35,36], but both breeds are more susceptible to RFM than others [2][3][4]. It should also be noted that the incidence of RFM in draft mares is similar across the studs in the country (personal communication to draft horse breeders). We speculate that other genetic factors shared by these breeds might play a role in RFM pathogenesis. In the cited study [13], the heritability estimates of RFM in Friesians ranged from 0.05 in mares to 0.1 in foals. A study of cows revealed that RFM could be heritable in this animal species [37]. A similar conclusion arises from research into the genetic background of placental retention in humans. Women born from pregnancies that terminated with a retained placenta or women whose partners were born from such pregnancies were at significantly higher risk of RFM [38]. The evidence from human transplantation medicine suggests that in addition to distinct factors, such as MHC mismatch between a donor and a host, differences in the non-MHC region, a minor histocompatibility complex, and specific MHC alleles could also lead to the rejection of the transplanted organ [39,40]. The expulsion of the fetus and fetal membranes can be compared to transplant rejection [41,42]; therefore, it can be speculated that the similarity of minor histocompatibility complex antigens and/or the presence of specific MHC alleles in mares and foals could influence the incidence of RFM.
MHC microsatellite typing is an indirect method of MHC evaluation [30]. However, MHC microsatellite typing is a well-established method employed in research studies investigating the role of MHC in the physiology and pathology of horses [27,28], including the experiments where the immune response to foreign antigen is of major interest [24][25][26]. The analyzed microsatellites were found to accurately correspond to MHC haplotypes, namely the set of MHC alleles in an individual [16,17,[29][30][31][32]. In this study, a comparison of MHC alleles in mares and foals demonstrated that MHC microsatellite typing is an effective technique. The applicability of MHC II microsatellites might be debatable because these molecules are not expressed in the equine placenta during pregnancy [43]. Nevertheless, the study investigates the parturition when the recruitment of various types of immune cells able to express MHC II has been reported in other species [44][45][46][47][48]. Unfortunately, immunological events that lead to and take place during parturition in horses remain unknown. Based on results reported in different species, it can be speculated that cells expressing MHC II, including lymphocytes and macrophages, can be present in mares during labor. Compatibility of the MHC II alleles between the donor and the recipient is routinely tested in human transplantation. It has been shown that match within HLA-DR (Human Leukocyte Antigen isotype DR, MHC II) between the donor and the recipient maybe even more important for the graft survival than match within HLA-A and-B (Human Leukocyte Antigen isotype A and B, MHC I) [9,10,49,50]. Mechanism of the rejection of the transplanted organ is compared to immune reactions which take place during parturition. For that reason MHC similarity between the mother and the fetus is considered as a possible factor influencing decreased immune reaction during the retention of fetal membranes in cows [12]. The above and the confirmed influence of MHC II compatibility/incompatibility on the transplant success rates in humans [9,10] indicate that the presented analysis of the associations between MHC II similarity and RFM was fully justified.
In conclusion, the incidence of RFM in draft mares may not be associated with MHC I and/ or MHC II similarity between a foal and its dam. However, draft and Friesian mares appear to be more susceptible to RFM other breeds, which could suggest that genetic factors are involved in RFM pathogenesis. | 4,709.6 | 2020-08-17T00:00:00.000 | [
"Biology"
] |
Inferring Psycholinguistic Properties of Words
We introduce a bootstrapping algorithm for regression that exploits word embedding models. We use it to infer four psycholinguistic properties of words: Familiarity, Age of Acquisition, Concreteness and Imagery and further populate the MRC Psycholinguistic Database with these properties. The approach achieves 0 . 88 correlation with human-produced values and the inferred psycholinguistic features lead to state-of-the-art results when used in a Lexical Simplification task.
Introduction
Throughout the last three decades, much has been found on how the psycholinguistic properties of words influence cognitive processes in the human brain when a subject is presented with either written or spoken forms. A word's Age of Acquisition is an example. The findings in (Carroll and White, 1973) reveal that objects whose names are learned earlier in life can be named faster in later stages of life. Zevin and Seidenberg (2002) show that words learned in early ages are orthographically or phonologically very distinct from those learned in adult life.
Other examples of psycholinguistic properties, such as Familiarity and Concreteness, influence one's proficiency in word recognition and text comprehension. The experiments in (Connine et al., 1990;Morrel-Samuels and Krauss, 1992) show that words with high Familiarity yield lower reaction times in both visual and auditory lexical decision, and require less hand gesticulation in order to be described. Begg and Paivio (1969) found that humans are less sensitive to changes in wording made to sentences with high Concreteness words.
When quantified, these aspects can be used as features for various Natural Language Processing (NLP) tasks. The Lexical Simplification approach in is an example. By combining various collocational features and psycholinguistic measures extracted from the MRC Psycholinguistic Database (Coltheart, 1981), they trained a ranker (Joachims, 2002) that reached first place in the English Lexical Simplification task at SemEval 2012. Semantic Classification tasks have also benefited from the use of such features: by combining Concreteness with other features, (Hill and Korhonen, 2014) reached the state-of-theart performance in Semantic Composition (denotative/connotative) and Semantic Modification (intersective/subsective) prediction.
Despite the evident usefulness of psycholinguistic properties of words, resources describing such properties are rare. The most extensively developed resource for English is the MRC Psycholinguistic Database (Section 2). However, it is far from complete, most likely due to the inherent cost of manually entering such properties. In this paper we propose a method to automatically infer these missing properties. We train regressors by performing bootstrapping (Yarowsky, 1995) over the existing features in the MRC database, exploiting word embedding models and other linguistic resources for that (Section 3). This approach outperform various strong baselines (Section 4) and the resulting properties lead to significant improvements when used in Lexical Simplification models (Section 5).
435
Introduced by Coltheart (1981), the MRC (Machine Readable Dictionary) Psycholinguistic Database is a digital compilation of lexical, morphological and psycholinguistic properties for 150,837 words. The 27 psycholinguistic properties in the resource range from simple frequency measures (Rudell, 1993) to elaborate measures estimated by humans, such as Age of Acquisition and Imagery (Gilhooly and Logie, 1980). However, despite various efforts to populate the MRC Database, these properties are only available for small subsets of the 150,837 words.
We focus on four manually estimated psycholinguistic properties in the MRC Database: • Familiarity: The frequency with which a word is seen, heard or used daily. Available for 9,392 words.
• Age of Acquisition: The age at which a word is believed to be learned. Available for 3,503 words.
• Concreteness: How "palpable" the object the word refers to is. Available for 8, 228 words.
• Imagery: The intensity with which a word arouses images. Available for 9,240 words.
All four properties are real values, determined based on different quantifiable metrics. We focus on these properties since they have been proven useful and are some of the most scarce in the MRC Database. As we discussed in Section 1, these properties have been successfully used in various approaches for Lexical Simplification and Semantic Classification, and yet are available for no more than 6% of the words in the MRC Database.
Bootstrapping with Word Embeddings
In order to automatically estimate missing psycholinguistic properties in the MRC Database, we resort to bootstrapping. We base our approach on that by (Yarowsky, 1995), a bootstrapping algorithm which aims to learn a classifier over a reduced set of annotated training instances (or "seeds"). It does so by performing the following five steps: 1. Initialise training set S with the seeds available.
2. Train a classifier over S.
Predict values for a set of unlabelled instances U .
4. Add to S all instances from U for which the prediction confidence c is equal or greater than ζ.
5. If at least one instance was added to S, go to step 2, otherwise, return the resulting classifier.
One critical difference between this approach and ours is that our task requires regression algorithms instead of classifiers. In classification, the prediction confidence c is often calculated as the maximum signed distance between an instance and the estimated hyperplanes. There is, however, no analogous confidence estimation technique for regression problems. We address this problem by using word embedding models.
Embedding models have been proved effective in capturing linguistic regularities of words (Mikolov et al., 2013b). In order to exploit these regularities, we assume that the quality of a regressor's prediction on an instance is directly proportional to how similar the instance is to the ones in the labelled set. Since the input for the regressors are words, we compute the similarity between a test word and the words in the labelled dataset as the maximum cosine similarity between the test word's vector and the vectors in the labelled set.
Let M be an embeddings model trained over vocabulary V , S a set of training seeds, ζ a minimum confidence threshold, sim(w, S, M ) the maximum cosine similarity between word w and S with respect to model M , R a regression model, and R(w) its prediction for word w. Our bootstrapping algorithm is depicted in Algorithm 1.
Algorithm 1: Regression Bootstrapping input: M, V, S, ζ; output: R; repeat We found that 64,895 out of the 150,837 words in the MRC database were not present in either Word-Net or our word embedding models. Since our bootstrappers use features extracted from both these resources, we were only able to predict the Familiarity, Age of Acquisition, Concreteness and Imagery values of the remaining 85,942 words in MRC.
Evaluation
Since we were not able to find previous work for this task, in these experiments, we compare the performance of our bootstrapping strategy to various baselines. For training, we use the Ridge regression algorithm (Tikhonov, 1963). As features, our regressor uses the word's raw embedding values, along with the following 15 lexical features: • Word's length and number of syllables, as determined by the Morph Adorner module of LEXenstein (Paetzold and Specia, 2015).
• Minimum, maximum and average distance between the word's senses in WordNet and the thesaurus' root sense.
• Number of images found for word in the Getty Images database 1 .
We train our embedding models using word2vec (Mikolov et al., 2013a) over a corpus of 7 billion words composed by the SubIMDB corpus, UMBC webbase 2 , News Crawl 3 , SUBTLEX (Brysbaert and New, 2009), Wikipedia and Simple Wikipedia (Kauchak, 2013). We use 5-fold cross-validation to optimise parameters: ζ, embeddings model architecture (CBOW or Skip-Gram), and word vector size (from 300 to 2,500 in intervals of 200). We include four strong baseline systems in the comparison: • Max. Similarity: Test word is assigned the property value of the closest word in the training set, i.e. the word with the highest cosine similarity according to the word embeddings model.
• Avg. Similarity: Test word is assigned the average property value of the n closest words in the training set, i.e. the words with the highest cosine similarity according to the word embeddings model. The value of n is decided through 5-fold cross validation.
• Simple SVM: Test word is assigned the property value as predicted by an SVM regressor (Smola and Vapnik, 1997) with a polynomial kernel trained with the 15 aforementioned lexical features.
• Simple Ridge: Test word is assigned the property value as predicted by a Ridge regressor trained with the 15 aforementioned lexical features.
• Super Ridge: Identical to Simple Ridge, the only difference being that it also includes the words embeddings in the feature set. We note that this baseline uses the exact same features and regression algorithm as our bootstrapped regressors.
The parameters of all baseline systems are optimised following the same method as with our approach. We also measure the correlation between each of the aforementioned lexical features and the psycholinguistic properties. For each psycholinguistic property, we create a training and a test set by splitting the labelled instances available in the MRC Database in two equally sized portions. All training instances are used as seeds in our approach. As evaluation metrics, we use Spearman's (ρ) and Pearson's (r) correlation. Pearson's correlation is the most important indicator of performance: an effective regressor would predict values that change linearly with a given psycholinguistic property.
The results are illustrated in Table 1. While the similarity-based approaches tend to perform well for Concreteness and Imagery, typical regressors capture Familiarity and Age of Acquisition more effectively. Our approach, on the other hand, is consistently superior for all psycholinguistic properties, with both Spearman's and Pearson's correlation scores varying between 0.82 and 0.88. The difference in performance between the Super Ridge baseline and our approach confirm that our bootstrapping algorithm can in fact improve on the performance of a regressor. The parameters used by our bootstrappers, which are reported below, highlight the importance of parameter optimization in out bootstrapping strategy: its performance peaked with very different configurations for most psycholinguistic properties: • Familiarity: 300 word vector dimensions with a Skip-Gram model, and ζ = 0.9.
• Age of Acquisition: 700 word vector dimensions with a CBOW model, and ζ = 0.7.
Interestingly, frequency in the SubIMDB corpus 4 , composed of over 7 million sentences extracted from subtitles of "family" movies and series, has good linear correlation with Familiarity and Age of Acquisition, much higher than any other feature. For Concreteness and Imagery, on the other hand, the results suggest something different: the further a word is from the root of a thesaurus, the most likely it is to refer to a physical object or entity.
Psycholinguistic Features for LS
Here we assess the effectiveness of our bootstrappers in the task of Lexical Simplification (LS). As shown in , psycholinguistic features can help supervised ranking algorithms capture word simplicity. Using the parameters described in Section 4, we train bootstrappers for these two properties using all instances in the MRC Database as seeds. We then train three rankers with (W) and without (W/O) psycholinguistic features: • Horn (Horn et al., 2014): Uses an SVM ranker trained on various n-gram probability features.
• Glavas (Glavaš andŠtajner, 2015): Ranks candidates using various collocational and semantic metrics, and then re-ranks them according to their average rankings.
• Paetzold (Paetzold and Specia, 2015): Ranks words according to their distance to a decision boundary learned from a classification setup inferred from ranking examples. Uses n-gram frequencies as features.
We use data from the English Lexical Simplification task of SemEval 2012 to assess systems' performance. The goal of the task is to rank words in different contexts according to their simplicity. The training and test sets contain 300 and 1,710 instances, respectively. The official metric from the task -TRank -is used to measure systems' performance. As discussed in (Paetzold, 2015), this metric best represents LS performance in practice. The results in Table 2 show that the addition of our features lead to performance increases with all rankers. Performing F-tests over the rankings estimated for the simplest candidate in each instance, we have found these differences to be statistically significant (p < 0.05). Using our features, the Paetzold ranker reaches the best published results for the dataset, significantly superior to the best system in SemEval
Conclusions
Overall, the proposed bootstrapping strategy for regression has led to very positive results, despite its simplicity. It is therefore a cheap and reliable alternative to manually producing psycholinguistic properties of words. Word embedding models have proven to be very useful in bootstrapping, both as surrogates for confidence predictors and as regression features. Our findings also indicate the usefulness of individual features and resources: word frequencies in the SubIMDB corpus have a much stronger correlation with Familiarity and Age of Acquisition than previously used corpora, while the depth of a word's in a thesaurus hierarchy correlates well with both its Concreteness and Imagery.
In future work we plan to employ our bootstrapping solution in other regression problems, and to further explore potential uses of automatically learned psycholinguistic features. | 2,996 | 2016-06-01T00:00:00.000 | [
"Linguistics",
"Computer Science",
"Psychology"
] |
Epidemic curves made easy using the R package incidence
The epidemiological curve (epicurve) is one of the simplest yet most useful tools used by field epidemiologists, modellers, and decision makers for assessing the dynamics of infectious disease epidemics. Here, we present the free, open-source package incidence for the R programming language, which allows users to easily compute, handle, and visualise epicurves from unaggregated linelist data. This package was built in accordance with the development guidelines of the R Epidemics Consortium (RECON), which aim to ensure robustness and reliability through extensive automated testing, documentation, and good coding practices. As such, it fills an important gap in the toolbox for outbreak analytics using the R software, and provides a solid building block for further developments in infectious disease modelling. incidence is available from https://www.repidemicsconsortium.org/incidence.
Introduction
Responses to infectious disease epidemics use a growing body of data sources to inform decision making (Cori Because of the increasing need to analyse various types of epidemiological data in a single environment using free, transparent and reproducible procedures, the R software (R Core Team, 2017) has been proposed as a platform of choice for epidemic analysis (Jombart et al., 2014). But despite the existence of packages dedicated to time series analysis (Shumway & Stoffer, 2010) as well as surveillance data (Höhle, 2007), a lightweight and well-tested package solely dedicated to building, handling and plotting epidemic curves directly from linelist data (e.g. a spreadsheet where each row represents an individual case) is still lacking.
Here, we introduce incidence, an R package developed as part of the toolbox for epidemics analysis of the R Epidemics Consortium (RECON) which aims to fill this gap. In this paper, we outline the package's design and illustrate its functionalities using a reproducible worked example.
Package overview
The philosophy underpinning the development of incidence is to 'do the basics well'. The objective of this package is to provide simple, user-friendly and robust tools for computing, manipulating, and plotting epidemic curves, with some additional facilities for basic models of incidence over time.
The general workflow (Figure 1) revolves around a single type of object, formalised as the S3 class incidence. incidence objects are lists storing separately a matrix of case counts (with dates in rows and groups in columns), dates used as breaks, the time interval used, and an indication of whether incidence is cumulative or not ( Figure 1). The incidence object is obtained by running the function incidence() specifying two inputs: a vector of dates (representing onset of individual cases) and an interval specification. The dates can be any type of input representing dates including Date and POSIXct objects, as well as numeric and integer values. The dates are aggregated into counts based on the user-defined interval representing the number of days for each bin. The interval can also be defined as a text string of either "week", "month", "quarter", or "year" to represent intervals that can not be defined by a fixed number of days. For these higher-level intervals, an extra parameter-standard-is available to specify if the interval should start at the standard beginning of the interval (e.g. weeks start on Monday and months start at the first of the month). incidence() also accepts a groups argument which can be used to obtain stratified incidence. The basic elements of the incidence object can be obtained by the accessors get_counts(), get_dates(), and get_interval().
This package facilitates the manipulation of incidence objects by providing a set of handler functions for the most common tasks. The function subset() can be used for isolating case data from a specific time window and/or groups, while the [ operator can be used for a finer control to subset dates and groups using integer, logical or character vectors. This is accomplished by using the same syntax as for matrix and data.frame objects, i.e. x[i, j] where x is the incidence object, and i and j are subsets of dates and groups, respectively.
The function pool() can be used to merge several groups into one, and the function cumulate() will turn incidence data into cumulative incidence. To maximize interoperability, incidence objects can also be exported to either a matrix using get_counts() or a data.frame using as.data.frame(), including an option for a 'long' format which is readily compatible with ggplot2 (Wickham, 2016) for further customization of graphics.
In line with RECON's development guidelines, the incidence package is thoroughly tested via automatic tests implemented using testthat (Wickham, 2011), with an overall coverage nearing 100% at all times. We use the continuous integration services travis.ci and appveyor to ensure that new versions of the code maintain all existing functionalities and give expected results on known datasets, including matching reference graphics tested using the visual regression testing implemented in vdiffr (Henry et al., 2018). Overall, these practices aim to maximise the reliability of the package, and its sustainable development and maintenance over time.
Modeling utilities
Many different approaches can be used to model, and possibly derive predictions from incidence data (e.g. Cori et al., 2013;Nouvellet et al., 2018;Wallinga & Teunis, 2004), and are best implemented in separate packages (e.g. Cori et al., 2013). Here, we highlight three simple functionalities in incidence for estimating parameters via modeling or bootstrap and the two specialized data classes that are used to store the models and parameter estimates. As a basic model, we implement the simple log-linear regression approach in the function fit(), which can be used to fit exponential increase or decrease of incidence over time by log-transforming case counts and applying a linear regression on these transformed data. The log-linear regression model is of the form log(y) = r × t + b where Figure 1. Generalized workflow from incidence object construction to modeling and visualization. The raw data is depicted in the top left as either a vector of dates for each individual case (typical usage) or a combination of both dates and a matrix of group counts. The incidence object is created from these where it checks and validates the timespan and interval between dates. Data subsetting and export is depicted in the upper right. Data visualization is depicted in the lower right. Addition of log-linear models is depicted in the lower left.
y is the incidence, r is the growth rate, t is the number of days since the start of the outbreak, and b is the intercept. This approach estimates a growth rate r (the slope of the regression), which can in turn be used for estimating the doubling or halving time of the epidemic, and with some knowledge of the serial interval, for approximating the reproduction number, R 0 (Wallinga & Lipsitch, 2007).
In the presence of both growing and decreasing phases of an epidemic, the date representing the peak of the epidemic can be estimated. In incidence, this can be done in two ways. The function estimate_peak() uses multinomial bootstrapping to estimate the peak, assuming that a) reporting is constant over time, b) the total number of cases is known, and c) the bootstrap never samples zero-incidence days. This function returns the estimated peak with a confidence interval along with the boostrap estimates. Alternatively, the function fit_optim_split() can be used to detect the optimal turning point of the epidemic and fit two separate models on either side of the peak. This is done by maximizing the combined mean adjusted R 2 value from the two models ( Figure 1, Figure 5).
The fit() function returns an incidence_fit object and the fit_optim_split() function returns an inci-dence_fit_list object, which is a specialized object designed to contain an unlimited number of (potentially nested) incidence_fit objects. While the incidence package returns incidence_fit objects containing log-linear models by default, they can be constructed from any model from which it's possible to extract the growth rate (r) and predict incidence along the model. Both object classes can be plotted separately or added to an existing epicurve using the function add_incidence_fit() ( Figure 5).
Operation
The minimal system requirements for successful operation of this package is R version 3.1.
Use cases
Two worked examples are used to demonstrate the functionality and flexibility of the incidence package. The first example illustrates how to compute and manipulate stratified weekly incidence directly from a line-list, while the second example shows how to import pre-computed daily incidence and fit a log-linear model to estimate growth rate (r) and doubling time for the growing phase 1 .
1) Importing data
First, we load the dataset ebola_sim_clean from the outbreaks package. The dataset contains 5,829 cases of 9 variables, among which the date of symptom onset ($date_of_onset) and the name of the hospital ($hospital) are used for computing the weekly epicurves stratified by hospitals. library('outbreaks') dat1 <-ebola_sim_clean$linelist str(dat1, strict.width = "cut", width = 76) ## 'data.frame': 5829 obs. of 9 variables: ## $ case_id : chr "d1fafd" "53371b" "f5c3d8" "6c286a" ... ## $ generation : int 0 1 1 2 2 0 3 3 2 3 ... : Factor w/ 2 levels "Death","Recover": NA NA 2 .. ## $ gender : Factor w/ 2 levels "f","m": 1 2 1 1 1 1 1 1 2 .. ## $ hospital : Factor w/ 5 levels "Connaught Hospital",..: 2 .. 1 Negative values of r in incidence are reported as halving times instead of doubling times and decreasing phase instead of growing phase 2) Building the incidence object The weekly incidence stratified by hospitals is computed by running the function incidence() on the Date variable dat1$date_of_onset with the arguments interval = 7 and groups = dat1$hospital. The incidence object i.7.group is a list with class of incidence for which several generic methods are implemented, including print.incidence() and plot.incidence(). Typing incidence object i.7.group implicitly calls the specific function print.incidence() and prints out the summary of the data and its list components. The 5,829 cases (the total number of cases stored in the $n component) with dates of symptom onset ranging from 2014-04-07 to 2015-04-27 (spanning from 2014-W15 to 2015-W18 in terms of the ISO 8601 standard for representing weeks) are used for building the incidence object i.7.group. The $counts component contains the actual incidence for defined bins, which is a matrix with one column per group. Here $count is a matrix with 56 rows and 6 columns as groups by hospital with 6 factor levels are specified. The bin size in number of days is stored in the $interval component. In this example, 7 days suggests that weekly incidence is computed, while by default, daily incidence is computed with the argument interval = 1. The $dates component contains all the dates marking the left side of the bins, in the format of the input data (e.g. Date, integer, etc.). The $timespan component stores the length of time (in days) for which incidence is computed. The $cumulative component is a logical indication whether incidence is cumulative or not.
The generic plot() method for incidence objects calls the specific function plot.incidence(), which makes an incidence barplot using the ggplot2 package. Hence, customization of incidence plot can benefit from the powerful graphical language from ggplot2. Note that when weekly incidence is computed from dates, like in this example, the ISO 8601 standard weeks are used by default with the argument standard = TRUE in the incidence() function. Under this situation, an extra component of $isoweek is added to the incidence object i.7.group to store those weeks in the ISO 8601 standard week format "yyyy-Www", and the $dates component stores the corresponding first days of those ISO weeks. Meanwhile the x-axis tick labels of the weekly incidence plot are in the ISO week format "yyyy-Www" (see Figure 2) rather than in the date format "yyyy-mm-dd" as the argument labels_iso_week in the plot() function is by default TRUE when plotting the ISO week-based incidence objects.
3) Manipulate the incidence object
In the above visualisation, it can be difficult to see what the dynamics were in the early stages of the epidemic.
If we want to see the first 18 weeks of the outbreak in the four major hospitals, we can use the [ operator to subset the rows and columns, which represent weeks and hospitals, respectively, in this particular incidence object. Here, because of the few numbers of cases in the first few weeks, we have also highlighted each case using show_ cases = TRUE ( Figure 3). We've also used a different color palette to differentiate between the subsetted data and the full data set.
As shown in Figure 2, the missing hospital name (NA) is treated as a separate group, resulting from the default of the argument na_as_group = TRUE in the incidence() function. This argument can be set to FALSE to not include data with missing groups in the object.
Example 2: importing pre-computed daily incidence and fitting log-linear model The datasets zika_girardot_2015 and zika_sanandres_2015 used in the second example are also from the outbreaks package. These datasets describe the daily incidence of Zika virus disease (ZVD) in, respectively, Girardot and San Andres Island, Colombia from September 2015 to January 2016. For details on these datasets, please refer to Rojas et al. (2016).
1) Import pre-computed daily incidence zika_girardot_2015 and zika_sanandres_2015 are data frames with the same variables date and cases. In order to obtain a more complete picture of the epidemic dynamics of ZVD in Colombia, we merge these two data.frames into a single one, dat2, by variable date. As dat2 is already pre-computed daily incidence rather than a vector of dates such as those in example 1, we can directly convert it into an incidence object grouped by geographical locations, i.group, by using the as.incidence() function. This shows the flexibility of the incidence package in making incidence objects. Using the pool() function, the daily incidence stratified by locations, i.group, can be collapsed into an incidence object without groups, i.pooled. The stratified and pooled daily incidence plots of ZVD in Colombia are shown in Figure 4, from which we can see that the epidemic of ZVD occurred earlier in San Andres Island than in Girardot. As shown in Figure 4B, the pooled daily incidence in Colombia shows approximately exponential phases before and after the epidemic peak. Therefore, we fit two log-linear regression models around the peak to characterize the epidemic dynamics of ZVD in Colombia. Such models can be separately fitted to the two phases of the epicurve of i.pooled using the fit() function, which, however, requires us to know what date should be used to split the epicurve in two phases (see the argument split in the fit() function). Without any knowledge on the splitting date, we can turn to the fit_optim_split() function to look for the optimal splitting date (i.e. the one maximizing the average fit of both models) and then fit two log-linear regression models before and after the optimal splitting date. The predictions and their 95% CIs from the two incidence_fit objects, 'before' and 'after', can be added to the existing incidence plot of i.pooled using the piping-friendly function add_incidence_fit(). As shown in Figure 5, based on visual comparison of models and data, these two log-linear regression models provide a decent approximation for the actual dynamics of the epidemic (adjusted R 2 = 0.83 and 0.77 for the increasing and decreasing phases, respectively).
Conclusion
This article has described the package incidence and its features-which include three lightweight data classes and utilities for data manipulation, plotting, and modeling. We have shown that an incidence object can flexibly be defined at different datetime intervals with any number of stratifications and be subset by groups or dates. The most important aspects of this package are use-ability and interoperability. For both field epidemiologists and academic modellers, the data received are often in the form of line-lists where each row represents a single case. We have shown that these data can easily be converted to an incidence object and then plotted with sensible defaults in two lines of code.
We have additionally shown that because the data are aggregated into a matrix of counts, it becomes simple to perform operations related to peak-finding, model-fitting, and exportation (e.g. using as.data.frame()) into different formats. Thus, because it has built-in tools for aggregation, visualisation, and model fitting, the incidence package is ideal for rapid generation of reports and estimates in outbreak response situations where time is a critical factor.
Software availability
incidence available from: https://www.repidemicsconsortium.org/incidence Code to reproduce all figures can be found by running demo ("incidence-demo", package = "incidence") from the R console with the incidence package installed.
Major comments
The proposed package is addressing some of the topmost descriptive elements of any epidemiological data set, namely a systematic time-place-person description. With regards to epidemiological curves, a limited number of dedicated packages addressing these aspects were available at the time of this package release (mainly: epitools, [last https://cran.r-project.org/web/packages/epitools/epitools.pdf update: October 26, 2017] and EpiCurve, https://cran.rstudio.com/web/packages/EpiCurve/EpiCurve.pdf [last update: April 24, 2018]). The alternative was a tailor-made, and time consuming, customization based on existing dedicated packages (for instance using customized geoms from ggplot2). bar charts While the data storage and representation is well addressed by the authors, the proposed package offers: i) some basic utilities for outbreak description across time, ii) basic tool for outbreak modelling and iii) a standard for data storage to enhance interoperability between released projects and packages from the R . In the introduction a short overview of the alternative tools mentioned above should epidemic consortium be provided to the reader, together with the new added-values of the current package. It would be an asset in order to ensure a benchmarking analysis with pre-existing resources.
"Here, we introduce , an R package developed as part of the toolbox for epidemics analysis of incidence the R Epidemics Consortium (RECON) which aims to fill this gap. In this paper, we outline the package's design and illustrate its functionalities using a reproducible worked example." According to RECON website: package incidence corresponds to "Computation, handling, visualisation and simple modelling of incidence". It is honourable that the name of a package "incidence" is the choice of the creators of course. It is true that incident cases are all individuals who change to non-disease status from disease, so in this way "incidence" could refer to the occurrence of new cases. In recent modelling papers, the term incidence has been associated to count time series under the same approach, so-called "incidence time series" . Nevertheless, the current name can be misleading for a certain number of epidemiologists. The reason is that in epidemiology the term incidence is traditionally associated to a measure of morbidity, so-called 'Incidence proportion' (or attack rate or risk; for more information see: ). The latter comprised the https://www.cdc.gov/ophss/csels/dsepd/ss1978/lesson3/section2.html numerator (= count of case used for a raw epi-curve) but as well a denominator representing the population at risk during the selected time interval. At the first glance, the target audience reading the package name might believe that the package is dedicated to incidence calculation rather than epidemiological curve graphic representation and some basic modelling utilities. Indeed, the package is presented as being able to compute, handle and visualize time-related count data through epi-curve and additional derived features which are not related to measure of incidence (proportion) strictly speaking. 1 presented as being able to compute, handle and visualize time-related count data through epi-curve and additional derived features which are not related to measure of incidence (proportion) strictly speaking. You may wish to consider adding such features and include new capabilities to this package which can cover calculation/representation of incidence in epidemiology. For instance, adding a slot for population data to populate the denominator of an incidence proportion calculation. This can be completed by dedicated graphic outputs with points for each incidence values and line through all data points (geom line). Factor-specific incidence rate (and corresponding CI) can be considered as further extensions (factor: sex-, age-, or any other factor). The advantage is the possibility to overlay and compare several incidence line charts coming for different locations (e.g. attack rate for different health districts or for several population group). It is a suggestion and it is acceptable that the authors would keep the package focus on basic utilities and epicurve representation.
With regards to the structure of the article, it would be easier to start with an example based on simple line listing (see comment on figure 1) with simple epicurve without stratification (with different timing hour, day. week), then move to more complex representation (stratification with various colour in the legend; +/facetting), and then an example of tailor-made polished figure (see code below). In doing so the figure 1 which would start for a line-list format would be easier to understand.
Minor comments
Section: Author's affiliation "which describe the number of new cases through time (incidence)-remain" -Proposed change of 'through' to 'over' Comment 4: "important source of information, particularly early in an outbreak." -Not only. This can be helpful to look at the magnitude and pattern (recurrent environmental sources, detect outliers ...). Comment 5: "Specifically epidemic curves(often" -Missing space. "provide a simple, visual outline of epi-demic dynamics, which can be used for assessing the growth or decline of an" -That is the main purpose in practice. You may consider being more assertive, by replacing "can" with "is". Very well-known added-values. No need for six references to support this statement. "[…] as well as in outbreak detection algorithms from syndromic surveillance data" -Consider more references that are recent. Note such analytical frameworks are not restricted to syndromic surveillance but to any regularly collected count data from epidemiological time-series in health surveillance system (syndromic or not). Comment 10: "[…] But despite the existence of packages dedicated to time series analysis (Shumway & Stoffer, 2010) as well as surveillance data (Höhle, 2007)" -Time series analysis is a generic term, consider "time series of epidemiological data" to be more accurate. Numerous time-series packages are available for count data that can be "re-used" for human epi. Comment 11: "a lightweight and package solely dedicated to building, handling and plotting epidemic well-tested curves directly from linelist data (e.g. a spreadsheet where each row represents an individual case) is still lacking." -Perhaps "lightweight" is something relative in package development and might be changed. More importantly, as mentioned above, please, consider mentioning other previous packages supporting specifically epi-curve creation (for instance: epitools, EpiCurve). These packages can manage aggregated/no aggregated data with and without factor. Current presented package has an added-value is its interoperability, the presence of simple modelling tools and further graph customization. Stricto sensu, epicurve were able to be done in R form user customization of ggplot2 and cited package. "The dates are aggre-gated into counts based on the user-defined interval representing the number of days for each bin."-Consider 'user-defined time interval' instead of user-defined interval. "number of days for each bin" -"days" is too restrictive. It is just the chosen time interval, it can be the number of weeks, etc. Comment 16:
Section: Methods
"also accepts a groups argument which can be used to obtain stratified incidence" -Consider changing stratified incidence by "epidemiological curve". Comment 17: "The basic elements of the incidence object can be obtained by the accessors get_counts(), get_dates(), and get_interval()." -Please, number the number of basic elements for clarity purpose. Comment 18: "The function subset() can be used for isolating case data from a specific time window and/or groups, while the [ operator can be used for a finer control to subset dates and groups using integer, logical or character vectors." -If several functions are to be presented, it is easier to use bullet points to structure the reading. Consider the removal 'for isolating case data from a specific' and changing with 'to define'. "[" change to 'indexing operator, to follow the classical denomination . https://cran.r-project.org/doc/manuals/r-release/R-lang.pdf Comment 19: " Figure 1. Generalized workflow from incidence object construction to modeling and visualization." -The first example in the paper is using a line-list format as data inputs but showed a stratified graphic. To be consistent and easier to follow for the reader, this figure illustrates the flow of such of data type, knowing that line-list is the primary source of epidemiological surveillance. A solution present in the figure is the two types of data inputs (non-aggregated/aggregated count). It would be easier, from a reader point of view, to capture the data flow from the line-lit to the final products proposed by this package. Comment 20: "The function pool() can be used to merge several groups into one," -Consider ending the sentence and explaining what it does. "and the function cumulate() will turn incidence data into cumulative incidence" -Consider changing "cumulative incidence" to "cumulative count of cases". Comment 21: […]: an option for a 'long' format which is readily compatible with (Wickham, 2016) for ggplot2 […]: an option for a 'long' format which is readily compatible with (Wickham, 2016) for ggplot2 further customization of graphics." -Would it be possible to mention how date format is exported? This might good to elaborate a bit more about date and user's customization with ggplot2. This can be addressed later in the manuscript (see proposal for a theme). Comment 22: "In line with , the package is thoroughly" -Add a RECON's development guidelines incidence hyperlink/ref. to the website: https://www.repidemicsconsortium.org/resources/guidelines/ Section: Modelling utilities Comment 23: "Here, we highlight three simple functionalities in for estimating parameters via modelling incidence or bootstrap and the two specialized data classes that are used to store the models and parameter estimates." -Consider structuring the following section according to the five elements mentioned (=three functions [ estimate_peak() , fit_optim_split()] and two specialized data classes) using fit() , for instance bullet points/subtitles. For each function, the goal, data input, statistical methods and output object(s) can be grouped in single section. Comment 24: "we implement the simple log-linear regression approach in the function , which can […]" -fit() Please add more information about the structure of the 'incidence_fit objects containing log-linear models' Comment 25: "[…] fit exponential increase or decrease of incidence over time by log-transforming case counts … " -Can be simplified to "fit exponential increase or decrease using a linear regression over time on log-transformed case counts…". Comment 26: "where is the incidence, is the growth rate, -Replace "incidence" with number of new y r t" case/incident case. Comment 27: "serial interval" -Consider adding "serial interval of the infectious agent" Comment 28: "uses multinomial bootstrapping to estimate the peak, assuming" -Some explanation about the method and references would be desirable. Comment 29: "Both object classes can be plotted separately or added to an existing epicurve using the function add_incidence_fit() ( Figure 5)." -The customization of the epicurve is well described. However, it is not mentioned how to change the layout of the model outcome and confidence interval. Indeed, some users might wish to use an alternative ggplot2 geometric object such as geom_range with a shaded grey semi-transparent band instead of two dotted lines. It would an added-value to provide some capacities or explanation and an example of customization of the layout of the "incidence_fit objects". Section: Use cases Comment 30: "Two worked examples are used to demonstrate the functionality and flexibility of the incidence package. The first example illustrates how to compute and manipulate stratified weekly incidence directly from a line-list". -Consider "The first example illustrates how to create directly from a line-list incidence object in order draw an epicurve of the weekly number of cases with or without stratification on patient characteristics". Comment 31: "while the second example shows how to import pre-computed daily incidence and fit a log-linear "while the second example shows how to import pre-computed daily incidence and fit a log-linear model to estimate growth rate ( ) and doubling time for the growing phase." -Footnote to be r included in the section about the function for more clarity.
Example 1: computing and manipulating stratified weekly incidence
Comment 32: "The weekly incidence stratified by hospitals is computed by running the function incidence() on the ate vari-able dat1$date_of_onset with the arguments interval = 7 and groups = D dat1$hospital." -Consider rephrasing. For instance: "the epicurve with the weekly number of cases by hospital can be computed from the line listing dataframe object (dat1) using the function incidence() on i) the date vari-able (dat1$date_of_onset), ii) by specifying the argument interval of seven days in order to aggregate the number case per week (interval = 7) and iii) including a the line-listing variable for stratification in the argument groups, in this case the hospital name (groups = dat1$hospital). Comment 33: "Here count is a matrix with 56 rows and 6 columns as groups by hospital" -Missing s at the end $ of $counts. Comment 34: "The generic plot() method for incidence objects calls the specific function plot.incidence(), which makes an incidence barplot using the package. Hence, customization of plot can ggplot2 incidence benefit from the powerful graphical language from ." -A short explanation and command ggplot2 line to explain how to access the code of the method would be welcome (notably the plot). This would help users to understand which ggplot2 geometric object(s) is used for the bar plot and incidence_fit lines. This would be an asset to understand how to proceed with further customization (within the aesthetic, theme or faceting specifications). This can be proposed at the end through several examples. Comment 35: # plot incidence object my_theme <-theme_bw(base_size = 12) + theme(panel.grid.minor = element_blank()) + theme(axis.text.x = element_text(angle = 90, hjust = 1, vjust = 0.5, color = "black")) plot(i.7.group, border = "white") + my_theme + theme(legend.position = c(0.8, 0.75)) The current example assumes that all users are familiar with ggplot2, notably how to customize the non-data components through the theme. We might suggest to introduce what is a theme in ggplot2 and what it does, for instance "Themes allows modification (content and layouts) of non-data components such as titles, axis labels, legends (position and aspect …), graphics grid lines and backgrounds (Modify components of a theme; ref: )". https://ggplot2.tidyverse.org/reference/theme.html In addition, mention that theme specifications can "overwrite" some layout specification in the other part of the ggplot2 function. The use in this example of theme_bw appears to clarify what the default built-in theme is (complete themes: , other themes https://ggplot2.tidyverse.org/reference/ggtheme.html ). https://cran.r-project.org/web/packages/ggthemes/ggthemes.pdf To help the reader, you might consider the arguments as displayed in "Modify components of a theme" order as much as possible: . This is of primary importance in outbreak investigation to able to delineate case number in an easy-to-read visual and often stratified representation by case characteristics (sex, exposure, location …). This type of representation is adopted by default in package EpiCurve (see ) as well as US CDC https://cran.r-project.org/web/packages/EpiCurve/vignettes/EpiCurve.pdf training example of epicurve ( ). Representation https://www.cdc.gov/training/quicklearns/createepi/ of case as square can be achieved with the current package as illustrated in at the end of the package vignette (see: vignette ("customize_plot", package="incidence") in the section Applying the style of European Programme for Intervention Epidemiology Training (EPIET). It's understandable that square form can be changed in case of a large dataset making the size of the intervals for the x-and y-axes different. It would be an asset for the target audience and the current manuscript to contain a full example of an epicure with case made under square representation (somehow close to the example cited above in the vignette) combined with a full theme following simple and standard representation.
Comment 38
Consider change in the layout of Figure 2: 'Number of EVD cases stratified by hospital ... between week XX and week YY'. Y axis title: 'Number of cases' X axis title: 'Week of onset of illness' X axis ticks should be made visible for all weeks even the label displayed for any other week. Comment 39: Would it be possible to give some more information to which ggplot2 geometry are used when show_ cases = TRUE is specified? This is of importance to provide the reader with a clue about how both borderline and colour content of each square can be further customized. Comment 40: As standard the label of the X axis time interval is displayed on the left side of the bin. Formally, the ideal position would be right below the bin (as illustrated in the CDC example above and numerous of published epicurve). In the EpiCurve package, the standard representation is not ideal as the X axis tick is right in the middle of the bin. The position of the label can be customized by the user, but would it be possible to look at the option to place the time label under each bin instead of to the left bin tick mark (see an example of ad hoc customization below). Of note, such label position should support figure export resizing.
Comment 41: Week of onset of illness' Some readers might wonder if the faceting capabilities of ggplot2 would be supported or not. An evident alternative is to prepare separated epicurve for the both locations and further combined them using specific package (ggarrange for instance). Would it be possible to add information on this point, and if supported, provide an alternative presentation of the both epicurves using two vertically aligned panels? Comment 42: "Without any knowledge on the splitting date, we can turn to the fit_optim_split() function to look for the optimal splitting date". -Could you please reconsider the phrasing as "Without any knowledge on the splitting date" which seems not logical when looking retrospectively to an outbreak epicurve and visually identifying the peak of the epidemic wave. Comment 43: library('magrittr') -Provide a short explanation about the dependency(ies) with this package. Comment 44: "The predictions and their 95% CIs from the two incidence_ft objects, 'before' and 'after', can be added to the existing incidence plot of i.pooled using the piping-friendly function add_incidence_fit()." -Provide more explanation on "the piping-friendly function" aspect. Provide an example of layout customization of the two log-linear regression models (linetype, colour, size). Comment 45: Please find below a proposal for an advanced and custom-made layout epicurve representation using the Zika dataset. In this example, some of the important layout feature of an epicurve are available: i) no space around the limits of both x and Y axes, ii) representation of each case with a square, iii) x label under each bin and make the graphic as light as possible based on Tufte's advice (less grid as possible, visible labels …). In the graphic below, an incidence_fit object can be added for the pooled location with a specific customization to give a complete overview to the package capabilities. plot(i.group_zoom_in, n_breaks = nrow(i.group_zoom_in), border = "grey90", show_cases=TRUE) + theme_bw() + scale_y_continuous(expand = c(0, 0), limits=c(0,max(i.group_zoom_in$counts +1))) + scale_x_date(date_breaks = "1 day", date_labels = "%b %d",expand = c(0, 0), limits= c(as.Date(min(i.group_zoom_in$dates)) , as.Date(max(i.group_zoom_in$dates+1)))) + # scale_fill_discrete(name = "Location:") + #scale_x_continuous(expand = c(0, 0),limits= c(as.Date(min(zomm_in$dates)) , as.Date(max(zomm_in$dates)))) + labs(title = "Number of Zika disease cases", subtitle = "Girardot and San Andres municipalities, Colombia. Period: 6 Sep to 11 Oct 2015." , x="Week of onset of illness", y="Number of cases", fill="Municipalities:") + theme(panel.border = element_rect(colour = "white"), panel.grid.major.y = element_line(colour = "grey70", linetype="dotted", size =0.5), axis.line = element_line(colour = "black", size = 0.7), axis.text.x = element_text(angle = 45, hjust = 0.8, size = 6), axis.ticks.x = element_line(size = rel(2)), axis.ticks.y = element_line(size = rel(2)), axis.title.y = element_text(margin=margin(0,10,0,0), size=10), axis.title.x = element_text(margin=margin(10,0,0,0), size=10), plot.title = element_text(size = 12,face = "bold"), plot.subtitle = element_text(size = 9), legend.position="bottom", legend.box = "horizontal") + coord_equal() Link to plot available . here Is the rationale for developing the new software tool clearly explained? Yes. The rational is sounds and the new package "Incidence" is fulfilling the objectives cited. It allows streamlining the manipulation and representation for epidemiological data for epidemiological representation. It expected that this new package would reach a wide audience of epidemiologists and epidemiological data analysts working with R.
Is the description of the software tool technically sound?
Yes. However, some clarifications proposed in the detailed review would enhance the description of the software tool features.
Are sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others? Overall, yes. Minor adjustments are proposed to allow reader to better access code (e.g. provide example to access full code of incidence (S3) class incidence and its subsequent methods (plot, …)) and offer additional clarification to improve graphic customization (notably around the incidence_fit object and further integration of ggplot2 faceting functionality).
Is sufficient information provided to allow interpretation of the expected output datasets and
any results generated using the tool? any results generated using the tool? Yes. Several practical and realistic examples are provided by the authors. All examples were tested and further tests with epidemiological additional datasets were conducted and allow to reproduce accurately tool behaviours. In addition to the paper review, the R package documentation (description file, recent reference manual and vignettes) were thoroughly reviewed to assess any discrepancies between the peer-review publication and package documentation on CRAN. 1.
Yes
Are the conclusions about the tool and its performance adequately supported by the findings presented in the article? Yes No competing interests were disclosed. Competing Interests: I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard, however I have significant reservations, as outlined above. 25 The authors introduce a package that aids in processing linelist data ( in which each row represents a i.e., case) for building, handling, and plotting epidemic curves. This tool fills a gap within the larger epidemics toolbox of the R Epidemics Consortium in 'getting the basics right', or otherwise put, getting data common to outbreak situations in the right format for further analysis.
I agree with the authors that this is a helpful tool. This is particularly true during outbreaks, when linelist data need to be processed quickly and disseminated to relevant actors. The procedures are straightforward and well presented. The examples given in this work are well chosen and provide a good starting point for working with this package. I only have a few comments pertaining to i) the loglinear model implementation and presentation and ii) the integration with other packages.
While the package is meant to focus on processing and visualization of data, the authors have added a modeling-capability that estimates epidemic growth rates. The respective function contains an option for estimating the peak of the outbreak and fit the exponential increase or decrease during the epidemic. Basing the time window for exponential growth on the timing of the peak will underestimate the growth rate as growth will no longer be exponential just before the peak of the epidemic. A more careful description in the MS is needed to highlight this and other shortcomings of this approach. Other methods to estimate the best time window, such as those used in the R0-package , could prove helpful and be implemented in the package. Further, the choice of fitting the growth rates to the aggregated data rather than the two distinct regions (Girardot and San Andres) is a bit uncomfortable. Particularly so because, as the authors acknowledge, there seem to be two quite distinct outbreaks. | 9,107 | 2019-01-31T00:00:00.000 | [
"Computer Science",
"Environmental Science",
"Medicine"
] |
Motion Response and Energy Conversion Performance of a Heaving Point Absorber Wave Energy Converter
The heaving wave energy converter (WEC) is one typical type of point absorber WECs with high energy conversion efficiency but significantly affected by the viscous effect. It is widely known that the bottom shape of such WECs plays an important role in influencing the viscosity, so a detailed qualitative investigation is essential. Here a numerical study is performed for the influence of bottom shape on motion response and energy conversion performance of a heaving WEC. The numerical model is developed based on the potential flow theory with a viscous correction in the frequency domain. Cylindrical WECs with flat, cone, and hemispherical bottoms and the same displacement are considered. WECs with larger diameter-to-draft ratios (DDRs) are found to experience a relatively smaller viscous effect and achieve effective energy conversion in a broader frequency range. With the same DDR, the flat bottom has the most considerable viscous effect, following by the cone bottom with conical angles 90° and the hemispherical bottom. When the DDR is relatively small, the hemispherical bottom had the best energy conversion performance. Similarly, when the DDR is relatively large, the energy conversion performance of the floater with a hemispherical bottom and a cone bottom with 90° is better, while that with the flat bottom is the worst.
INTRODUCTION
Wave energy is amongst the ocean renewable energies with huge reserves. Wave energy converters (WECs) based on different mechanisms to extract energy from water waves have been invented (de O. Falcão, 2010). In general, WECs can be categorized into oscillating water columns, point absorbers, overtopping systems, and bottom-hinged systems based on their working principle (Li and Yu, 2009). The oscillating water columns, as a promising type of wave energy device, has been widely investigated by analytical (He et al., 2019), numerical (Wang et al., 2018) and experimental (He et al., 2013;Ning et al., 2019) methods. The point absorber wave energy converter (PA-WEC) is convenient for array arrangement due to its small dimension relative to the incident wavelength. It is thought to be the most efficient type in terms of the wave power conversion per unit volume (Li and Yu, 2009;McCabe and Aggidis, 2009). However, the high construction cost and the low energy conversion efficiency make the electricity generated by WECs less competitive. Therefore, it is necessary to improve the energy conversion efficiency of WECs to make wave energy economically competitive.
One typical type of PA-WECs works solely in the heave mode, which receives mechanical power resulting from the heave motion and generates electricity by the power take-off (PTO) system. Axial-symmetrical floaters are normally adopted to reduce the sensibility to the wave direction, such as Wavebob (Ireland) (Weber et al., 2009), PowerBuoy (USA) (Edwards and Mekhiche, 2014), and CETO (Australia) (Penesis et al., 2016). The hydrodynamic performance of PA-WECs needs to be studied in detail to maximize the wave power absorption, where the geometrical optimization is an important way. Generally, there are four types of methods to solve the hydrodynamic characteristics of a PA-WEC: analytical method, boundary element method, computational fluid dynamics (CFD) method, and experimental method. A comprehensive review can be found in Li and Yu (2009).
Previous studies were primarily focused on the impact of the shape of the bottom of the WEC on its energy conversion efficiency, for a specific size or considering some main parameters. The previous investigations have covered only some of the aspects, and knowledge gaps in the fields still exists. Most of the previous numerical simulations were carried out adopting a potential flow theory (PFT) approach without considering the viscous effects, which allows an initial understanding of the hydrodynamic fundamentals of WECs with different bottom shapes. De Backer (2009) compared the hydrodynamic performance and power absorption of cylinders with a conical bottom with an apex angle of 90°and a hemisphere bottom in irregular waves, showing that the cone-cylinder performs slightly better than the hemisphere-cylinder shape. Zhang et al. (2016) used a semi-analytical method in investigating the energy conversion efficiency of four floaters with different bottom shapes, including cylindrical, hemispherical, paraboloidal, and conical, with the same displacement. Zhang et al. (2016) suggested that the cylindrical type had fantastic wave energy conversion ability at some given frequencies, whereas in random sea waves, the parabolic and conical ones had better stabilization and applicability in wave power conversion. Khojasteh and Kamali (2016) found that the cone-cylinder shape slightly performed better than the hemispherical cylinder via numerical simulations based on the linear potential theory. Shi et al. (2019) compared the optimal average capture width ratio (CWR) of five types of geometry with the same mass based on the linear potential theory without viscous correction, including flat-bottom, cone-bottom, hemisphere-bottom, linear-chamfered, and circular-chamfered cylinders, showing that the flat-bottom cylinder was the best and the hemisphere-bottom cylinder was the worst. However, the ignorance of viscous effects may lead to incomplete conclusions. Especially when it is around the resonance frequency, the response simulated by non-viscous linear PFT could be 10 times or much larger than that of the experiment (Tom, 2013). An alternative approach is to use the CFD method, which can deal with strongly-nonlinear phenomena, such as vortex shedding and turbulence. Zhang et al. (2015) studied the effect of buoy shape on wave energy conversion using CFX (Computational Fluid Dynamics X) and found that the response of the circular truncated cone buoy preceded the cylindrical buoy. Jin and Patton (2017) studied three cylindrical floaters by LS-DYNA (LS-dynamic finite element analysis), and the results demonstrated that the rounded-and conical-bottom floaters had less viscous damping than that with the flat-bottom. Zhang et al. (2020a) selected four single-floater integrated systems with different bottom shapes and studied the effect of bottom shape on the hydrodynamic performance of the integrated system. However, the computational cost of detailed CFD simulations is high, partly due to the requisite large computational meshes. In this regard, the PFT with a viscous correction provides a tractable way to conduct an initial optimization design, which can be further refined by a detailed CFD study of selected cases. There are many different ways to provide the viscous correction. Bacelli et al. (2013) applied a force linearly proportional to the velocity as the equivalent viscous drag term in the frequency-domain equation, where an iterative procedure was necessary because of the correlation of the viscous coefficient and the body velocity. The numerical and experimental studies of Tom (2013) and Son et al. (2016) on a heaving point absorber WEC illustrated that the linear PFT could well predict the exciting forces. In contrast, the radiation forces (especially the damping term) were significantly affected by viscous effects. Chen et al. (2018b) showed that the viscous effect of added mass was much smaller than that of radiation damping. The experimental results of Tom (2013) showed that the viscous effect did not change obviously with the wave frequency. Therefore, the viscous correction of the hydrodynamic coefficients of the floater at the natural frequency can meet the calculation requirements. The viscous hydrodynamic coefficients can be obtained from the experiment (Chen et al., 2018a;Lee et al., 2018) or CFD simulation (Chen et al., 2018b) of the free decay test.
It is not possible to generalize the effect of the bottom shape of cylindrical PA-WECs with different diameter-to-draft ratios (DDRs) on the motion response and energy conversion performance from the existing studies. The motivation and novelty of this work are two-fold: 1) developing an efficient method for accurate prediction of the hydrodynamic performance of PA-WECs using a viscous-corrected linear potential theory, 2) understanding comprehensively the influence of the bottom shape of cylindrical PA-WECs on the motion response and energy conversion performance. This will help reduce the overall cost of wave energy harvesting.
The paper is structured as follows.
Motion Equation of Wave Energy Converter
Based on the linear frequency-domain PFT, the motion equation of a single WEC in heave mode can be written as where m is the mass of the WEC; ω is the wave frequency; i is the imaginary unit; z 3 is the heave motion of the WEC; C 33 , k 33, b pto , and F ex,3 are the restoring force, the elastic stiffness, mechanical damping due to PTO system, and the wave exciting force in the heave mode, respectively; μ 33 and λ 33 are the added mass and radiation damping of the WEC in the heave mode due to the heave motion of the WEC based on the PFT, respectively; λ vis,3 is the corrected viscous damping of the WEC in the heave mode at the natural frequency. μ 33 , λ 33 and F ex,3 are calculated by a higher-order boundary element method program WAFDUT (Teng and Taylor, 1995). WAFDUT is used to solve the diffraction and radiation problems of floating body with arbitrary shapes based on the linear PFT in frequency domain. More applications of the program can be found in Teng et al. (2014). The corrected viscous damping λ vis,3 can be obtained through the free decay motion of the floater. The non-dimensional damping κ is given by Lee et al. (2018) where z a k and z a k+ 2 are the two successive positive maximum displacements; z a k+ 1 and z a k+ 3 are the two successive negative maximum displacements. The total damping coefficient can be obtained by where C 33 and ω n are the hydrostatic coefficient and the natural frequency, respectively. The overall hydrodynamic damping, including the potential and the viscous parts, can be estimated from the decaying oscillation by determining the ratio between any pair of successive (double) amplitudes. In the present paper, the first three pairs are selected to obtain the average value. The viscous damping correction coefficient of the WEC is The non-dimensional linearized viscous damping correction is defined as where f λ,vist means the corrected ratio of the total damping to the potential damping.
Wave Power and Capture Width Ratio of Wave Energy Converter
The heave motion of WEC can be obtained from Eq. 1, i.e., The wave power P(ω) at wave frequency ω produced by the WEC is derived by The CWR of the WEC C W can be defined as where 2r is the width of the WEC, P W is the mean incident wave power transportation of a regular wave per unit wave crest, i.e., where H is the incident wave height, h is the water depth, k is the wavenumber.
Optimization of the PTO parameters can be classified into two methods (Folley, 2016). One is double-variable optimization, involving the elastic stiffness k 33 and the damping boot; the other is one-variable optimization, considering only the damping boot while remaining the elastic stiffness a constant. The first method is to keep the WEC resonance at any wave frequencies, which is not easy to realize in practical cases if the spring mechanism always needs to be changed. In the present paper, the elastic stiffness k 33 is considered as zero, and the second method is used. The optimal damping coefficient of the WEC boot at a wave frequency ω can be derived as the maximum wave power P(ω) is obtained (Sun et al., 2018). b opt m + μ 33 ω 2 − (k 33 + C 33 ) 2 ω 2 + λ 33 + λ vis,3 2 (10)
CONVERGENCE STUDY AND VALIDATION
As introduced in Motion Equation of Wave Energy Converter section, the total damping λ vist,3 can be obtained through the free decay motion of the WEC, simulating using the Star-CCM+ (need a reference). Star-CCM+ has been used in studying the interaction of waves and a two-dimensional floating body (Zhang et al., 2020a;Zhang et al., 2020b), and the free decay motion of a three-dimensional floater (Chen et al., 2018b). Their accuracy has been validated with the experimental results. The detailed setup can be found in Zhang et al. (2020a;2020b) and Chen et al. (2018b).
To further verify the accuracy of simulating the interaction between waves and a three-dimensional floating body, the experiment of a floating cylindrical floater with the flat bottom in waves by Shi et al. (2019) is simulated, where the radius of the cylinder buoy was 0.4 m, the height of the buoy was 0.4 m, the draft of the buoy was 0.12 m, the wave height was 0.2 m, and the wave period was 2 s. In the experiment, three degrees of freedom, including surge, heave, and pitch motions, were considered. The results showed that the heave motion did not interact with the surge and the pitch motions. Therefore, only the heave motion of the floater is considered in the present CFD simulation. Figure 1A compares the heave motion between the CFD results calculated by Star-CCM+ and the experiment by Shi et al. (2019). The overall agreement with the published experimental data verifies that the present CFD model in the accurate prediction of the interaction of waves with a threedimensional floater.
Although the CFD results can predict the experimental results accurately, the CPU cost is exceedingly high. The PFT with a viscous correction is an accurate and fast way to predict the motion of a WEC. The flat bottom cylindrical WEC with draft d 1.0 m and radius r 0.8 m is taken as an example to validate its accuracy further. The wave height H is 0.2 m. The optimal PTO damping based on Eq. 10 is considered. The RAOs (Response Amplitude Operator) of the heave motion calculated by the direct CFD method and the PFT with and without viscous correction are compared in Figure 1B, showing that the potential-theory results without viscous correction overestimate greatly the heave motion near the resonant frequency ω 2.5 rad/s, but the PFT results with viscous correction agree well with the direct CFD results. Therefore, the viscous-corrected PFT is applied in the following results.
Flat Bottom
In the following sections, different DDRs of 2r/d are chosen, where d 1.0 m. The investigation of Hu et al. (2020) showed that f λ,vist generally decreased to approach 1.0 as the ratio 2r/d increased. It meant that the viscous effect became smaller as the WEC became fatter, such that the viscous effect of a very fat floater is negligible. This can be explained by the KC number. The KC number means the ratio of the viscous force and inertial force. It can be simplified as KC VT/L, where V is the amplitude of the oscillation of the body, T is the period of oscillation, and L is a characteristic length. For the cylinder, the diameter is the characteristic length. As the diameter increases, the KC number decreases, which means the effect of the viscous force becomes smaller compared with the inertial force. Figures 2A,B show the variation of the heave RAO against the wave frequency for the flat bottom cylindrical WEC with different DDRs of 2r/d 0.8, 1.6, 2.4, 2.8, 3.2, 3.6 based on the PFT without and with viscous correction, respectively. The stiffness k 33 0, and the optimal PTO damping calculated by Eq. 10 are considered for all cases. Figures 2A,B show that as the wave frequency ω < 2.0 rad/s, the RAO for different 2r/d all tends to 0.7 and increases to a peak value near the natural frequency of the WEC, and then decreases to zero as the wave frequency continues to increase. The above trend is similar in both methods, but the peak value of RAO is significantly reduced by considering viscous correction for the smaller 2r/d. It is because that the viscous damping correction coefficient f λ,vist increases as the WEC becomes thinner. The potential results overestimate the RAO significantly near the resonance frequency. The maximum magnification factor of the RAO is 5.4 for the thinnest WEC 2r/d 0.8, while 1.4 for the fattest WEC 2r/d 3.6, where the magnification factor is defined as the ratio between the PFT (only) results and the results obtained by the PFT approach with viscous correction.
The corresponding CWRs C W are shown in Figures 2C,D. It is demonstrated from Figure 2C that the CWR C W increases with the increase of 2r/d in the low wave frequency region, but decreases in the high wave frequency region. Figure 2C also shows that the peak value of C W decreases with the increase of 2r/ d, while it increases by considering the viscous correction, as shown in Figure 2D. The viscous damping correction has a great influence on the thinner WEC near the resonance frequency, which significantly reduces the CWR C W . The maximum magnification factor of C W is 5.6 for the thinnest WEC 2r/d 0.8, while 1.4 for the fattest WEC 2r/d 3.6. Figure 2D shows that for the larger 2r/d, not only the peak value of C W is larger, but also the frequency range of the high power captured is broader, that is, the effective frequency-domain width is larger. Figures 2B,D show the CWR C W is larger at most of the frequencies for the fatter WEC, even the RAO is smaller, which is better for the design of WEC due to the smaller required reserved movement stroke. It means that in a limited region, the fatter WECs will capture more wave energy. Nevertheless, if the viscous effect is neglected, the contrary conclusions are obtained.
Cone Bottom
The conical bottom is usually used to reduce the viscous effect as a comparison to the flat bottom. Figure 3 shows the schematic diagram of the submerged part of the conicalbottom cylindrical WEC. The WEC with different DDRs (2r/d 1.6, 2.8, 3.6) and conical angles (α 90°, 120°, 150°, 180°) are considered to study the hydrodynamic performance and energy conversion capacity. The submerged part under the mean water line consists of two parts: one is the vertical cylindrical part with the height of d 1 and the other is the non-flat bottom part with the height of d 2 . To study the effect of the conical angle, the displacements (or massed) are taken as the same for floaters with different conical angles. The equivalence drafts d are kept as 1.0. Therefore, d 1 and d 2 of different conical angels can be calculated as , d 2 r tan(α/2) .
(11) Table 1 shows the dimension parameters and viscous correction coefficients of the conical bottom cylindrical WEC with different DDRs and conical angels. It can be seen that for the same DDR 2r/d, the viscous correction coefficient decreases with the decrease of the conical angel, which is the same as Chen et al. (2018b). Besides, for the same conical angle, the viscous correction coefficient decreases with the increase of 2r/d, similar to the flat bottom WEC. Figures 5A-E show the variations of the RAO of the heave motion against the wave frequency for the conical-bottom cylindrical WEC with different DDRs of 2r/d = 1.6, 2.8, 3.6 and different conical angels of α = 90°, 120°, 150°, 180°based on the PFT without and with viscous correction, respectively. The corresponding CWRs are shown in Figures 4A-E. Figures 5 and 4A,C,E show that the RAO and the CWR of WECs are both smaller for the smaller conical angle near the resonance frequency for the PFT results. Figures 5 and 4B,D,F show that when the viscous effect is considered, the RAO and the CWR of WECs mainly change at the resonance frequency. Because the viscous correction coefficient with a larger cone angle is larger, the peak value of the RAO and the CWR decreases significantly. In the low-frequency region, the CWRs are close for different conical angles. At the resonance frequency and in the highfrequency region, the smaller the conical angle is, the larger the CWR is. Therefore, the smaller the cone angle is, the more extensive the frequency range of the high power captured is. Figures 5B,D,F show that when 2r/d is small (2r/d 1.6 and 2.8), the smaller the conical angle is, the larger the peak value of RAO is, therefore, the broader range movement stroke needs to be reserved in the design of WEC. When 2r/d is large (2r/d 3.6), the peak value of RAO of different conical angles are very close, and even the smaller the conical angle is, the smaller the peak value of RAO is, which is better for the engineering design. It means that the WEC with a larger 2r/d and a smaller conical angel has a better energy capture performance at different wave frequencies.
Hemispherical Bottom
The hemispherical bottom is also a good bottom shape to reduce the viscous effect comparing with the flat bottom. Figure 6 shows the schematic diagram of the submerged part of the hemispherical bottom cylindrical WEC. The WEC with different DDRs of 2r/d 0.8, 1.6, 2.4 and 2.8. The equivalence drafts d are still kept as 1.0, so the height of the submerged vertical cylindrical part d 0 can be calculated by d 0 d− (2r/3). Table 2 shows the dimension parameters and the viscous correction coefficients of the hemispherical bottom cylindrical WEC with different 2r/d. It can be seen that the viscous correction coefficients all tend to 1.0 even for the smallest 2r/d 0.8. It means that the viscous effect of the hemispherical bottom cylindrical WEC is much smaller than the flat and the conical ones for the smaller 2r/d. Figures 7A,B show the variation of the heave RAO and the CWR C W against the wave frequency for the hemispherical bottom cylindrical WEC with different DDRs of 2r/d 0.8, 1.6, 2.4 and 2.8 based on the PFT without viscous correction, respectively. It can be seen from Figure 7A that as the wave frequency ω < 2.0 rad/s, the RAO for different 2r/d all tends to 0.7, then increases to a maximum near the natural frequency, and decreases to zero as wave frequency continues to increase, similar to the flat bottom, as shown in Figure 2A. Figure 7A also shows that the peak value and the value in the highfrequency region of RAO decrease as 2r/d increases. Figure 7B shows that the peak value of the CWR C W decreases with the increase of 2r/d, but it increases in the low-frequency region ω < 2.5 rad/s. Although the peak value of the capture width is smaller near the resonance frequency for the larger 2r/d, the frequency range of the high capture width is more extensive, that is, the effective width in the frequency domain is broader. Meanwhile, the smaller range of movement stroke needs to be reserved in the design of WEC. Therefore, if the wave condition is always invariable, the smaller 2r/d is better. However, the wave condition in the practical sea area is variable, the larger 2r/d may be better. However, it depends on the detailed wave condition.
EFFECT OF BOTTOM SHAPES ON MOTION RESPONSE AND CAPTURE WIDTH RATIO
To study the effect of bottom shape on the energy conversion performance of floaters, the floaters with the flat bottom, the cone bottom with conical angles 90°, and the hemispherical bottom are compared. The displacements are kept the same, i.e., the equal draft d is similar for the same diameter 2r. Table 3 shows the total damping considering the viscous effect, the radiation damping based on the PFT with viscous correction, respectively. The corresponding CWRs are shown in Figure 9. For 2r/d 1.6, the RAO and the CWR of floaters with the hemispherical bottom are the biggest, followed by the cone bottom with 90°and the flat bottom. For 2r/d 2.4 and 2.8, the RAO and the CWR of floaters with the cone bottom with 90°and the hemispherical bottom are both similar over the entire wave frequency range, and the value of them with the flat bottom is smaller since the wave frequency is larger than the natural frequency. That is to say, as the DDR 2r/d is relatively small, the hemispherical bottom has the best energy conversion performance because of its smallest viscous dissipation. The energy conversion performance of floater with a hemispherical bottom and a cone bottom with 90°is similar, while that with a flat bottom is the worst as 2r/d is relatively large.
CONCLUSION
In this study, the linear PFT in the frequency domain is presented to investigate the hydrodynamic and energy conversion performances of the cylindrical WEC with the flat, cone, and hemispherical bottoms. The viscosity effect that is ignored by the PFT is corrected, and the ten-folded overestimation of the motion response is reduced to a quite reasonable level. Further, the viscous correction is quantified for a large range of bottom shapes and dimensions. The following conclusions can be drawn from this study: (1) The viscous effect becomes smaller as the radius of the cylindrical WEC increases. With the same DDR, the viscous damping correction coefficients of the floater with a flat bottom is the biggest, followed by a cone bottom with conical angles 90°and a hemispherical bottom.
(2) For the WEC with a cone bottom, the larger DDR and the smaller conical angel lead to better energy capture performance at different wave frequencies. The smaller the conical angle is, the smaller the peak value of RAO is. Therefore, the smaller range movement stroke needs to be reserved in the design of WEC, which is better for the engineering design.
(3) For the WEC with a hemispherical bottom, the peak value of the CWR decreases as the DDR increases, but the frequency range of the high power capture is wider and the smaller range movement stroke needs to be reserved in the design of WEC. Therefore, if the wave condition is always invariable, the smaller DDR is better. However, the wave condition in the practical sea area is variable, the larger DDR may be better. (4) When the DDR is relatively small, the hemispherical bottom has the best energy conversion performance. Similarly, when the DDR is relatively large, the energy conversion performance of floater with a hemispherical bottom and a cone bottom with 90°is better, while that with a flat bottom is the worst. | 6,041 | 2020-09-29T00:00:00.000 | [
"Engineering",
"Environmental Science",
"Physics"
] |
The dual topoisomerase inhibitor A35 preferentially and specially targets topoisomerase 2α by enhancing pre-strand and post-strand cleavage and inhibiting DNA religation
DNA topoisomerases play a key role in tumor proliferation. Chemotherapeutics targeting topoisomerases have been widely used in clinical oncology, but resistance and side effects, particularly cardiotoxicity, usually limit their application. Clinical data show that a decrease in topoisomerase (top) levels is the primary factor responsible for resistance, but in cells there is compensatory effect between the levels of top1 and top2α. Here, we validated cyclizing-berberine A35, which is a dual top inhibitor and preferentially targets top2α. The impact on the top2α catalytic cycle indicated that A35 could intercalate into DNA but did not interfere with DNA-top binding and top2α ATPase activity. A35 could facilitate DNA-top2α cleavage complex formation by enhancing pre-strand and post-strand cleavage and inhibiting religation, suggesting this compound can be a topoisomerase poison and had a district mechanism from other topoisomerase inhibitors. TARDIS and comet assays showed that A35 could induce cell DNA breakage and DNA-top complexes but had no effect on the cardiac toxicity inducer top2β. Silencing top1 augmented DNA break and silencing top2α decreased DNA break. Further validation in H9c2 cardiac cells showed A35 did not disturb cell proliferation and mitochondrial membrane potency. Additionally, an assay with nude mice further demonstrated A35 did not damage the heart. Our work identifies A35 as a novel skeleton compound dually inhibits topoisomerases, and predominantly and specially targets top2α by interfering with all cleavage steps and its no cardiac toxicity was verified by cardiac cells and mice heart. A35 could be a novel and effective targeting topoisomerase agent.
INTRODUCTION
DNA topoisomerases are essential enzymes for cells to modulate DNA topology by regulating the over-or under-winding of DNA strands during cellular processes such as DNA transcription, replication, or recombination [1]. Top1 is a nuclear enzyme that catalyzes the relaxation of superhelical DNA by generating a transient single strand nick. Top2 mediates the ATPdependent induction of coordinated nicks in both strands of the DNA duplex, followed by the crossing of another double strand of DNA through the transiently broken duplex. Two top2 isozymes are expressed in humans: top2α and top2β. Top2α is most abundantly expressed in rapidly growing tissues, and its expression is cell cycleregulated, peaking during G2/M. In contrast, top2β is ubiquitously expressed in terminally differentiated cells including cardiomyocytes and its expression levels do not exhibit any significant changes during the cell cycle [2][3][4][5].
Given that they are highly expressed in aggressive cancer cells and are essential to cancer cell survival, top1 and top2α are potential drug targets for treating human malignancies. Compared with top1, top2α is more essential for cell viability because only top2α can drive the separation of two DNA duplexes after replication and its deletion is lethal for cell survival [6][7][8].
Chemotherapeutics targeting top1 and in particular top2α have been of great utility in clinical oncology. Although these drugs are highly effective, tumors frequently recur and even become resistant, and the occurrence of side effects, particularly cardiotoxicity and secondary malignancies, tremendously limit their application. Some studies have shown that decreased topoisomerase levels in relapsed tumors contribute to the tumor resistance [9,10]; additionally, researchers also found that there is a compensatory effect between top1 and top2α: when the expression of one type of topisomerase is decreased, the other will be increased [9,[11][12][13]. Some evidence regarding the adverse effects, particularly anthracyclineinduced cardiotoxicity mediated by targeting top2 indicated that non-specific targeting of top2β is the initial molecular mechanism underlying this phenomenon [14]. Thus, to overcome the above-mentioned drug resistance and cardiotoxicity, combinatorial targeting of top1 and top2α is essential, but recent clinical data have shown that the combination of top1 and top2α poisons might lead to severe life-threatening neutropenia and anemia resulting from the toxicity overlay of the two agents [13]. Thus, it is extremely important to identify a compound that simultaneously targets top1 and top2α. Although some researchers have investigated compounds that could target top1 and top2, they did not precisely elucidate whether the top2 target is top2α or top2β [15][16][17][18]. Recently, one study in the literature reported an agent that could inhibit top1 and top2α, but it did not further clarify its effects on top2β [19]. Additionally, the predicted lack of cardiac toxicity of reported inhibitors specifically targeting top2α was obtained only based on a cell-free assay (DNA directly incubated with synthetic topoisomerase in buffer); further validation in cardiac cells and drugs effects on animal cardiac muscle were not reported [20][21][22][23].
Berberine (BBR, Figure 1A, left), an isoquinoline natural product extracted from Coptis chinensis, has been extensively used as an anti-inflammatory [24], cholesterol-lowering [25] and antineoplastic [26] research agent. However, its anticancer activity is weak [26,27]. Cyclizing-berberine is a novel skeleton compound (berberine of 1, 13-cyclication) that is occasionally obtained during the structural transformation of berberine in the search for a highly effective cholesterol-lowering agent. A screen found that this class of compound could inhibit cell proliferation; this detection evoked our interest to determine whether this novel structural class compound induced anticancer activity. Via successive structural transformation (A series of new cyclizing-berberine derivatives were synthesized through variations at the 9-position) and repetitive cytotoxicity assays, cyclizing-berberine A35 (replacement of 9-methoxyl with an ester moiety) ( Figure 1A, right) emerged and contained a greater number of aromatic rings that facilitate intercalation into DNA or topoisomerase, and also displayed excellent anticancer activity that was clearly better than its parent berberine (data not shown). A toxicity assay further confirmed its lower toxicity (300 mg/kg administration by intraperitoneal injection; the mouse survival rate was 100%).
In the present study, we evaluated the novel skeleton compound A35 in a cell-free assay, a cell assay and in further animal experiments and demonstrated that A35 could target top1 and particularly top2α by increasing pre-strand and post-strand cleavage and inhibiting the religation. Additionally, we also illustrated that A35 could specifically target top2α and not top2β; meanwhile, we confirmed that A35 did not induce cardiac cell injury in rat cardiomyocyte cells and nude mouse hearts.
A35 is a dual inhibitor of top2α and top1
Given that A35 possesses a greater number of aromatic rings and its structure is similar to known topoisomerase2 inhibitors [22,23], we first examined the effects of A35 on top2α activity by top2α-mediated relaxation assay. As shown in Figure 1B, A35 significantly inhibited top2α relaxation activity in a concentrationdependent manner. The relaxed DNA was quantified by software Image J and IC50 (a concentration resulting in 50% relaxed DNA reduction) was calculated as 0.56 μM by SigmaPlot. At the same concentration (10 μM), inhibitory activity of A35 on top top2α was much higher than that of etoposide (VP16), indicating that A35 was a powerful top2α inhibitor. Then, we utilized top1-mediated cleavage assay with linear DNA as substrate to evaluate the effect of A35 on top1, and results showed that A35 could induce linear DNA breakage in dose-dependent manner ( Figure 1C), although the DNA breakage effect was weak and far less than the positive control TPT (topotecan), a water-soluble derivative of alkaloid camptoth ecin (CPT) and could stabilize top1 cleavage complexes [28,29], indicating A35 could inhibit top1 activity. Similarly top1-mediated relaxation assay showed that the IC50 of A35 on top1 was 22.1 μM that was far higher than the IC50 of A35 on top2α (data not shown). These results showed that A35 could dually inhibit top2α and top1, although the effect on top2α was be superior to the effect on top1. www.impactjournals.com/oncotarget (Continued ) A35 can intercalate DNA but does not interfere with top2α-DNA binding and top2α ATPase activity Given that top2α is indispensable for cell division and was strongly inhibited by A35, in the following experiment we focused on investigating the effects of A35 on top2α. There are several steps in the top2α catalytic cycle. As a consequence, multiple independent approaches are required to determine the mechanisms underlying the drug-mediated inhibition of top2α. For A35, its intercalation into DNA is attributed to its polar structure, and this intercalation might lead to the topoisomerase DNA binding site being occupied or distortion of the DNA backbone; this can then interfere with top2-DNA binding. The ability of A35 to intercalate DNA was assessed using a top1 unwinding assay as described [20], pBR322DNA was firstly relaxed by top1 and mAMSA (amsacrine) (a known DNA intercalator) as a positive control [30] or A35 was added, then we observed the negative supercoiling formed after the addition of mAMSA or A35 (Figure 2A). The relaxed DNA was quantified, and the dose-response curve and IC50 (a concentration of 50% relaxed DNA reduction) showed almost identical DNA intercalation activity between A35 and mAMSA, indicating that A35 was a DNA intercalator.
Given that A35 could intercalate into DNA, we utilized an EMSA assay [31,32] to measure whether Figure 1 (Continued ): B. 0.5 μg supercoiled plasmid pBR322DNA was incubated with top2α and various concentrations of A35 or the indicated agents for 30 min. The reaction was stopped and the reaction products were separated in 1% agarose. RLX, relaxed DNA; SC, supercoiled DNA. The relaxed DNA was scanned by Image J and the IC50 was defined as the concentration of A35 resulting in a 50% reduction of relaxed DNA. Data represent the mean ± S.E.M. of three independent experiments. C. 3′-end-labeled 117-bp oligonucleotide was reacted with top1 (60U) in the presence or absence of indicated concentrations of compound. Arrowheads indicate the migration positions of DNA fragments cleaved by top1 in the presence of compounds.
A35 interfered with top2α-DNA binding ( Figure 2B) as described. A DNA probe containing a robust top2α-binding site was synthesized and incubated with nuclear extracts; after electrophoresis, we observed the DNA-top2α binding complex at the position of the top2α protein in the gel, and further verification that the complex contained the top2α protein was obtained via supershift assay with the anti-top2α antibody. Additionally, we determined that A35 did www.impactjournals.com/oncotarget pBR322 DNA was firstly relaxed by top1 prior to the addition of the indicated concentration agents and incubated for a further 30 min at 37°C. Reaction products were resolved on an agarose gel prior to visualization with ethidium bromide. The relaxed DNA was scanned and the dose response curves and IC50 were plotted and depicted. The IC50 represents the concentration required to facilitate 50% relaxed DNA to form supercoiled DNA.
(Continued ) not disturb DNA-top2α binding at various concentrations; interestingly we found that at 6 μM A35 seemingly promoted the binding of DNA and top2α slightly, and at the highest concentration of 120 μM the binding activity decreased compared with 6 μM.
The effect of A35 on top2α ATP enzyme activity was examined by thin layer chromatography with [γ-32P]-ATP and results showed that novobiocin (a known top2α ATP enzyme inhibitor) [33] significantly suppressed top2αmediated ATP hydrolysis, demonstrating that the assay was credible. But in A35-treated groups, we did not observe significant changes of ATP hydrolysis mediated by top2α, indicating that A35 did not inhibit top2α ATP enzyme activity ( Figure 2C). K562 nuclear extracts were incubated with 1 nm double-stranded oligonucleotide containing a strong top2α binding site in the presence of increasing concentrations of A35. After supershifting, an antibody was first incubated with nuclear extracts, and then the DNA probe and compound were added. Reaction products were resolved by non-denaturing polyacrylamide gel electrophoresis, the intensities of band were scanned and the relative ratio to nuclear extract+probe was plotted. C. The effects of A35 on top2α hydrolysis activity were determined by thin layer chromatography with [γ32P]-ATP in the presence of indicated compounds. Samples from various time points were quantified by scintillation counting and plotted relative ratio to the 5 min scintillation counting of control.
A35 facilitates top2α-DNA cleavage complex formation by simultaneously enhancing pre-strand and post-strand cleavage and inhibiting DNA religation Then we evaluated the effect of A35 on the DNA cleavage. A cleavage assay in the presence of 8 U top2α, 0.2 μg pBR322 and increasing concentrations of A35 was performed, and the results showed that cleaved linear DNA was formed and the cleaved band presented in a dose-dependent manner ( Figure 3A). Together, these results indicated that A35 stabilized the DNA-enzyme complex and thus this compound belongs to the poison of topoisomerase inhibitors. www.impactjournals.com/oncotarget Usually pre-strand cleavage and post-strand cleavage participated in the top2α-mediated DNA cleavage process [20]. Interference with any step would induce the occurrence of DNA cleavage and linear DNA formation. As ATP is required for strand passage, in the absence of ATP top2 binds DNA and establishes prestrand cleavagereligation equilibrium prior to strand passage. As seen in Figure 3B, a significant increase in the amount of cleaved linear DNA was observed by addition of A35 (120 μM and 30 μM) and the cleaved band presented in a dose-dependent manner, indicating that A35 enhanced pre-strand cleavage reaction. Just in accordance with previous description [20], meso-4,4′-(3,2-butanediyl)-bis(2,6-piperazinedione) inhibited enzyme activity at the post-cleavage To determine the effects of A35 on top2α-mediated pre-strand passage cleavage, reactions were performed in the absence of ATP, and the intensities of the linear bands were quantified and plotted relative to the control (pBR322DNA+Top2α). C. Top2α-mediated post-strand passage DNA cleavage affected by A35 was carried out in a reaction buffer containing AMPPNP (1 mM) instead of ATP, and the resulting graph was constructed. D. Top2α-mediated religation of the pBR322 plasmid was examined in the presence or absence of A35. Kinetically competent top2-DNA complexes were trapped in the presence of Ca 2+ and in the absence of ATP. After the addition of A35, reactions were reinitiated with Mg 2+ and trapped at the indicated time points and examined. *P < 0.05;** P < 0.01 www.impactjournals.com/oncotarget and strand passage steps, had no effects on the formation of cleaved DNA or on the levels of nicked circular DNA in the pre-cleavage stage as a negative control.
To determine if A35 has similar effects on post-strand passage DNA cleavage, we repeated the above experiment, but with the addition of Adenylylimidodiphosphate(AMPPNP) in the reaction buffer [34]. The addition of this non-hydrolyzable ATP analog permits strand passage to occur. In the presence of AMPPNP and Mg 2+ , incubated with increasing concentrations of A35 led to cleaved DNA increased ( Figure 3C), and indicated that A35 also prompted post-strand cleavage complex formation.
We subsequently examined the effects of A35 on top2α-mediated DNA religation. We kinetically trapped top2α-DNA complexes in the presence of Ca 2+ . After using EDTA to chelate the excess Ca 2+ ions, Mg 2+ reintroduction triggered the religation of the linear band [20]. To examine whether A35 impacts the religation of DNA by top2α, we initiated reactions with Mg 2+ in the presence or absence of A35 ( Figure 3D). The experiments demonstrated that the religation of DNA occurred almost immediately and was severely hindered by the addition of A35. Together, these results indicated that A35 could enhance top2α-mediated prestrand and post-strand DNA cleavage and inhibit DNA religation.
Compared with top1, top2α is preferentially and specifically targeted in A35-induced cell DNA strand breaks, top-DNA covalent complexes and growth inhibition
Although the cell-free assay showed that A35 was a dual inhibitor against top1 and top2α, true target identification should be obtained in a cell assay. First, we used K562, HL60, Raji and Romas hematological tumor cells as well as solid tumor cells including the hepatic carcinoma cell lines HepG2, Bel7402 and Bel7404 and colorectal cancer cell lines to evaluate the growth inhibitory effects of A35. The results showed that the IC50 for all cancer cells except for Bel7404 was less than 2 μM, indicating good proliferation inhibitory activity ( Figure 4A). Given that the cell-free assay showed that A35 induce top1 and top2α mediated DNA single or double DNA cleavage, we examined DNA single strand breakage with an alkaline comet assay [35] and double strand breakage with a neutral comet assay in cells treated with A35. The results showed that A35 could induce single strand DNA breakage, and at 1.2 μM the tail moment (TM) was about 1.5 folds of control, and at higher concentrations (6 μM and 10 μM) the tail moments were approximately 3-fold of the control ( Figure 4B, upper). Neutral comet electrophoresis indicated that A35 could induce obvious DNA double breakage at each concentration and in a dose-dependent manner: at the lowest concentration of 1.2 μM, the tail moment was approximately 9-fold of the control, and at 6 μM and 10 μM the tail moments were up to 30-fold of the control, indicating that A35 induced more double strand breakage ( Figure 4B, lower).
The cell-free assays showed that A35 could induce DNA-top1 or -top2α complex formation. We then utilized the TARDIS (trapped in agarose-DNA immunostaining) assay to further clarify the effects in cells. As shown in Figure 4C, top1-DNA and top2α-DNA complexes were observed in a dose-dependent manner after treatment with A35, and at 6 μM the top2α-positive cells were up to 80% and top1-positive cells were approximately 20% of the total. However, we did not visualize the top2β-DNA complex at any concentration of A35. To further ascertain top2β is not the target of A35, we performed top2β-mediated DNA cleavage assay in the presence of A35 to evaluate the effect of A35 on DNA cleavage, and results also demonstrated that A35 did not lead to DNA breakage mediated by top2β and the positive (VP16) control induced significant DNA breakage ( Figure 4D), indicating A35 did not target top2β.
To verify top1 and top2α are the primary target of A35 to induce DNA breakage followed by cell death, we knocked down top1, top2α and top2β and further assessed whether A35-induced cell proliferation inhibition and DNA breakage that was lethal to cell survival could be reversed,. As shown in Figure 4E, the topoisomerases were knocked down, and we found there was a compensatory effect between top1 and top2α, specifically that when top1 decreased top2α would increase and vice-versa. However, in the top2βsilenced cells, there were only minor changes in top2α and top1 levels. To these topoisomerase knockdown cells, we added 2 μM A35 for 24 hours and evaluated levels of the double-strand break (DSB) damage marker γ-H2AX. The results showed that γ-H2AX levels increased in top1 knockdown cells and decreased in top2α knockdown cells, but obvious changes were not detected in top2β knockdown cells ( Figure 4E). Then, we examined the effects of A35 on cell proliferation after topoisomerase silencing, and the results showed that with the knockdown of top1, the proliferation inhibitory activity induced by A35 was strengthened but was reversed after top2α was silenced, and there was no significant change in top2β-silenced cells ( Figure 4F). The alkaline and neutral comet assays also showed that in top1-silenced cells, single and double strand DNA breakage all increased, while in top2α-silenced cells single and double strand DNA breakage decreased ( Figure 4G). These results indicated that top1 and top2α were all targets of A35, but top2α was a more vital target and A35 did not target top2β.
A35 does not induce cardiac cell cytotoxicity and mitochondrial damage and induces cancer cell apoptosis but not through the mitochondrial pathway
Currently, topoisomerase 2 inhibitors are effective antineoplastic agents and have been widely used in tumor therapy. However, given their adverse effects, the application of topoisomerase 2 inhibitors, especially anthracyclines such as doxorubicin (DOX), has been restricted primarily due to the serious cardiac toxicity that results from targeting top2β. Although the above cell-free and cell-based assays verified that A35 could not target top2β, for further verifying the non-cardiotoxic effects www.impactjournals.com/oncotarget (Continued ) of A35 we utilized H9C2 cardiac myoblasts from rats to perform the following assays. First, we evaluated whether A35 exerted proliferation inhibitory effects on H9C2 cells, and the results showed that after 24 h treatment with A35, at the concentrations of 2 μM and 5 μM cell vitality was almost equal to the control, and at 10 μM cell survival was approximately 90% of the control. After treatment for 48 h, H9C2 cell survival was approximately 90% of the control at 2 μM and 5 μM and 80% at 10 μM ( Figure 5A), indicating that A35 barely interfered with cardiac cell proliferation. However, in the doxorubicintreated group, we visualized obvious cardiac cell growth inhibitory effects after 24 h of treatment, and at 48 h, most cells were dead at higher concentrations. Specifically, after 24 h treatment cell survival was 70.2% of the control at 2 μM, 63.5% at 5 μM and 53.2% at 10 μM, and after 48 h treatment cell survival was 40.3% at 2 μM, 12.5% at 5 μM and 7.3% at 10 μM. Similarly, after the addition of DOX for 48 h, obvious apoptosis occurred, and the apoptotic cells were approximately 45% at 2 μM, 78% at 5 μM and 85% at 10 μM, but there was only 7-12% apoptotic cells observed in the A35-treated group ( Figure 5B).
A number of studies have suggested that mitochondrial dysfunction is the primary molecular mechanism underlying DOX-induced cardiotoxicity. Alterations in mitochondrial membrane potential were the main effects of DOX in the mitochondria. Recent studies showed that DOX-induced mitochondria dysfunction is entirely attributable to the targeting of mitochondria top2β, leading to mitochondrial nucleic acid synthesis blockage and causing the collapse of membrane potential, finally leading to whole-cell energy supply obstruction and cell death [14,36]. We detected the effects of A35 on mitochondrial membrane potential with JC-1. JC-1 is a fluorescent dye that exhibits potential-dependent accumulation in the mitochondria and can selectively enter the mitochondria as J-aggregates (JC-1 aggregates) with intense red-orange fluorescence in normal cells. If the membrane potential is disturbed, the dye remains in the monomeric form as J-monomer, emitting only green fluorescence. In our experiments, shown in Figure 5C, red granular aggregates were observed in the cytoplasm after excitation at 488 nm, both in untreated H9C2 cells or cells treated with various concentrations of A35, indicating that A35 did not alter the mitochondrial membrane potential of H9C2 cells. However, in doxorubicin-treated cells, at 2 μM red granular aggregates obviously decreased in the cytoplasm, and at higher concentrations (5 μM and 10 μM), red fluorescence aggregates almost entirely disappeared, indicating that cell mitochondria membrane potency was dramatically destroyed. Given that top2 was mainly located in the nucleus and doxorubicin itself emits red fluorescence, we observed red fluorescence in cells, particularly in the nuclei. Meanwhile, in the doxorubicintreated group, we observed cytoplasm that was distinctly wrinkled, distorted and broken. In comparison, the cytoplasm in A35-treated H9C2 cells was plump both at low or high concentrations, indicating that A35 did not injure cardiac cells and their mitochondria.
Then, we examined the signal proteins associated with mitochondrial damage that lead to cell apoptosis. It was previously reported that activated p53 (phosphorylated at Ser 15) and p53 levels were important to induce cardiomyocyte mitochondrial damage and finally cell death [37], and p53 levels are usually dependent on the activation of MAPK family proteins such as p-38, ERK and JNK. In our experiment, the phosphorylation levels of p-38, ERK and JNK and p53 in H9C2 cells did not significantly change following the addition of A35, despite the fact that these protein levels were obviously elevated in the doxorubicin-treated group. The levels of downstream targeting protein of p53 such as Bcl-2 did not change in A35-treated cells ( Figure 5D).
Annexin V-PI was used to evaluate the apoptosis or necrosis of tumor cells treated with A35, and the results indicated that the apoptotic cells were approximately 90% of total apoptotic and necrotic cells and increased in a dose-dependent manner ( Figure 5E). We also observed the final apoptosis event in which PARP was cleaved after A35 treatment for 24 or 48 hours either in K562 or HepG2 cells ( Figure 5F). Cleaved and activated caspase-7 that could cleave PARP was also detected after A35 addition.
(Continued )
Usually, caspase-7 can be activated by mitochondrial caspase9 or other non-mitochondrial caspases, but we did not observe cleavage and activation of caspase-9, whose activation represents mitochondrial apoptosis. The expression levels of the proteins that promote mitochondrial apoptosis pathway activation, such as Bax and p53 (although p53 is a mutant in K562 cells), were not increased in the presence of A35, and the expression of Bcl-2, a protein that suppresses mitochondrial apoptosis pathway activation, did not change ( Figure 5F).
A35 suppresses tumor cell growth in vivo and demonstrates no toxicity in mouse hearts
Next, in a tumor xenograft nude mouse model, we examined A35 anticancer efficacy and its effects on the mouse myocardium. The results indicated that A35 could suppress tumor xenograft proliferation, and at 20 mg/kg the inhibitory rate was approximately 55%, while at 10 mg/kg the inhibitory rate was approximately 35% ( Figure 6A). The body weight curves indicated that the animals tolerated well the A35 dosages administered ( Figure 6B). When the tumor sizes reached 1000 mm 3 , the mice were sacrificed, and tumors and hearts were excised to be used for further analysis. Tumor tissue was prepared as frozen sections for γ-H2AX detection and for a TUNEL assay to detect apoptosis. The results showed that A35 could significantly induce DNA double breakage, and γ-H2AX-positive cells increased to 40%, and the TUNEL results indicated that A35 could induce tumor cell apoptosis and the apoptotic cells comprised up to approximately 70% of total cells ( Figure 6C), indicating an identical action mechanism as for the in vitro results. Cardiac toxicity detection was performed with frozen cardiac tissue sections for H&E staining, γ-H2AX immunofluorescence and the TUNEL assay. H&E results showed that in both the vehicle-and A35-administered groups, the myofibrils all arranged normally, but in the positive control, the DOX-treated group, the myocardial fibers shrank, were distorted and irregularly arranged and the myoplasm significantly lessened ( Figure 6D). The TUNEL results corresponded to the H&E results: in the vehicle and A35 groups, apoptotic cells were not observed, but in the DOX group approximately 80% of cells were apoptotic and approximately 40% γ-H2AX-positive cells were observed ( Figure 6E).
DISCUSSION
Cyclizing-berberine A35 is a site 1 and 13 cyclizing berberine. The cyclizing endows this compound with more planar structures that induce intercalation into free DNA, and these aromatic rings enhance the potency of intercalation into topoisomerase [38][39][40]. This structure is similar to known top2α inhibitor NK314 [22,23]. After evaluating its effects on top1 and top2α activity, unexpectedly we found that not only could A35 inhibit top2α but it had an effect on top1, indicating that it is a dual topoisomerase inhibitor and has distinct effects on topoisomerases from NK314. Previously, some studies demonstrated that decreased topoisomerase levels are a major mechanism underlying relapse [9] and verified the compensatory effects between top1 and top2, which were also verified in the present study. Additionally, some authors also proposed that a dual targeting topoisomerase might increase overall anti-tumor activity, given that top1 and top2 have overlapping functions in DNA metabolism [41]. Thus, the novel skeleton compound A35 as a dual targeting top1 and top2α inhibitor might have the potency to avoid resistance and produce more powerful anticancer activity.
Given top2α is a more effective target based on its preferential expression in proliferating cells and as the sole enzyme to distort daughter chromosomes, and the stronger effects of A35 on top2α; we focused on studying the inhibitory effects of A35 on top2α. Top2 www.impactjournals.com/oncotarget manipulates DNA topology in an ATP-dependent manner via the mechanism known as the "catalytic cycle". The interruption of any step in the catalytic process might obstruct enzyme activity. Our results showed that although A35 could intercalate into DNA, it did not disrupt the top2α-DNA interaction. The ATP hydrolysis assay also demonstrated that A35 did not block top2α ATPase activity. Cleavage assays demonstrated that A35 could stabilize the intermediate DNA-top2α complex by exerting its actions both in pre-strand and post-strand cleavage steps and inhibiting religation. Considering all the results above mentioned, Its mechanism is distinct from other known top2α inhibitors, such as etoposide, which only has a very low affinity for intact DNA and only inserts into DNA cleavage sites and inhibits DNA religation [42,43], and doxorubicin, which at low concentrations (<1 μM) only inhibits DNA religation and at higher concentrations (>10 μM) it interferes with top2 binding to DNA to exert anticancer effects [44]. Other reported inhibitors, such as the quinolone CP-115, the ellipticines, azatoxins, and the natural flavonoid genistein, either strengthen pre-strand or post-strand cleavage to facilitate top-DNA complex formation [45].
The A35 parental core berberine was reported to inhibit top1 [46] and in our lab we did not observe any inhibitory effects on top1 or top2α at concentration up to 80 μM (data not shown), a concentration at which A35 could obviously suppress top1 and top2α activity, indicated a higher topoisomerase inhibitory activity than berberine.
Although some studies showed that berberine structural analog 5,6-dihydrocoralyne possessed inhibitory activity against top1 and top2, they did not clarify the effect on top2 isoforms [47,48]. Berberine and its structural analog trigger mitochondria-dependent apoptosis of cancer cells, and studies also found that mitochondria membrane potential was severely destroyed [27,49]; meanwhile, p53 was activated, levels of Bax, which can trigger mitochondrial caspase activation, increased and inhibitory protein Bcl2 levels decreased [27,50,51]. However, in A35-treated cancer cells, although apoptosis appeared, the mitochondria apoptosis pathway was not activated, which indicated the distinct mechanism from its parental core berberine in apoptosis induction. Usually except mitochondria caspase9, some non-mitochondrial caspases such as caspase-2, caspase-8 and caspase10 also play key role in activating effector caspase-7 [52], thus we speculate A35-induced caspase-7 activation and subsequent apoptosis might be a result of activation of non-mitochondrial caspases. This also arouses our interest and in the following study we will continue working on this issue.
Human top2α and top2β share very similar catalytic activities and are highly conserved, with 78% amino acid identity [53]; many agents targeting top2α also interfere with top2β, such as doxorubicin. However, as the sole enzyme present in heart tissue, the targeting of top2β would disrupt the normal catalytic cycle of top2β and then cause DNA DSB formation and mitochondria destruction, and damage to these organelles would activate the mitochondria apoptosis pathway mediated by p53, which underlies so-called cardiac toxicity [14]. Some reports have also demonstrated that embryonic fibroblasts lacking top2β were better protected against doxorubicin-induced cytotoxicity [54], and top2β deletion mice better prevented doxorubicin-induced cardiomyopathy [5]. Another advantage of only targeting top2α and not targeting top2β is the potential to avoid secondary malignancies. Top2β has been suggested as a major culprit in the development of secondary malignancies [55]. Currently, many novel www.impactjournals.com/oncotarget compounds have also been demonstrated to specifically target top2α, but these data were only from cell-free assays, i.e., compound, extracted topoisomerase enzyme and DNA plasmid directly reacted in buffer. However, validation with cardiomyocytes and animal hearts has not been reported. In our study, not only did we confirm the target in a cell-free assay but also using rat H9C2 cardiomyocytes and nude mice, and we demonstrated that A35 did not induce DNA breakage, mitochondrial injury, apoptosis and p53-mediated mitochondrial apoptosis pathway activation in cardiac cells, which are the main cellular alterations following the targeting of top2β, although in the DOX-treated group the above-mentioned effects were all observed, which further indicated that there were no effects of A35 on top2β at the cellular and animal levels.
In summary, as a novel and DNA intercalative agent, A35 dually inhibits topoisomerase and preferentially and specially targets top2α. It acts as a poison to promote DNA-top complex formation and had no effect on other catalytic steps mediated by top2α. Its distinct mechanisms from other known poisons will be useful for its combination with other topisomerase inhibitors. Although A35 intensively induced cancer cell apoptosis, it did not trigger the apoptosis of cardiac cells and mouse hearts and did not damage the mitochondria either in cancer cells or cardiac cells. Its further exploration might be helpful to overcome top1 and top2 resistant and cardiac toxicity, A35 is a promising topoisomerase anticancer agent and worthy to further develop in future. VP16 and mAMSA were purchased from Sigma-Aldrich. Anti-γ-H2AX, anti-phospho-p53 (Ser15), anti-p53, anti-phospho-p44/42MAPK (ERK1/2), anti-p44/42MAPK (ERK1/2), anti-phospho-p38 (Thr180/ Tyr182), anti-p38, anti-phospho-JNK (Thr183/Tyr185), anti-JNK, anti-caspase3, anti-caspase7, anti-PARP, anti-Bcl2 and anti-Bax antibodies were purchased from Cell Signaling Technology. The anti-β-actin antibody was obtained from Sigma-Aldrich, and peroxidase-conjugated goat anti-mouse or goat anti-rabbit secondary antibodies were purchased from ZSGQ-BIO Company. Antibodies against top1, top2α and top2β were purchased from Abcam. pBR322 DNA and top1 were purchased from BEIJING LIUHE TONG TRADE CO., LTD. Recombinant human top2α was purchased from Topogen. The comet assay kit was obtained from Trevigen.
Cell lines
Rat myoblasts H9C2 were obtained from the Cell Center of the Institute of Basic Medical Sciences, Chinese Academy of Medical Sciences and Peking Union Medical College. Other cell lines, such as K562, HL60, Raji, Romas, HepG2, Bel7402, Bel7404, HT29 and SMC7721, were either from Cell Center of the Institute of Basic Medical Sciences or from ATCC. K562, HL60, Raji and Romas cells were cultured in 1640 medium with 10% FBS, while H9C2, HepG2, Bel7402, Bel7404, HT29 and SMC7721 were cultured in DMEM with 10% FBS.
Topoisomerase-mediated DNA relaxation assay
The DNA relaxation assay was based on a procedure described previously [56]. Briefly, 2 μl of 10x reaction buffer with 1 mM ATP (top2α) or without ATP (top1), 0.5 μg of supercoiled pBR322, and 1 unit of top2α (Topogen) or top1 and compound were mixed in a total of 20 μl of reaction buffer. Relaxation was performed at 37°C for 30 min and stopped by the addition of 2.5 μl of stop solution (100 mM EDTA, 0.5% SDS, 50% glycerol, 0.05% bromophenol blue). Electrophoresis was performed in a 1% agarose gel in 0.5x TBE at 4 V/cm for 1.5 hr. DNA bands were stained with the nucleic acid dye EB and photographed with 300 nm UV transillumination. The DNA bands were qualified with software Image J. The percent of relaxed DNA was calculated as: (R-R 0 )/ (R control -R 0 ), where R is the intensity of relaxed DNA incubated with top2α and compound, R 0 is the intensity of relaxed DNA of pBR322 and R control is the intensity of relaxed DNA incubated with top1. The IC50 was defined as the concentration of A35 that resulted in a 50% reduction of relaxed DNA.
Topoisomerase 1-mediated DNA unwinding assay
To examine the effects of A35 on DNA intercalation, a top1-based assay was carried out according to the literature [20]. In brief, supercoiled plasmid pBR322 was firstly relaxed with recombinant human top1 (4 units) at 37°C for 30 min. Subsequently, A35 or mAMSA was added to the reaction and incubated for a further 30 min at 37°C. Reactions were stopped by the addition of SDS to a final concentration of 1% w/v, and top1 was digested by the addition of proteinase K (50 μg) and incubation for 1 h at 56°C. Samples were resolved on a 1% w/v agarose gel in 0.5x TBE at 4 V/cm for 2 hr and stained with EB. The definition of the percent of relaxed DNA and IC50 was as described in the "Topoisomerase-mediated DNA relaxation assay" section in the Methods.
Electrophoretic mobility shift assay
The effect of A35 on the binding of top2α to DNA was evaluated using an electrophoretic mobility shift (EMSA) kit (Invitrogen) as previously described [60] according to the manufacturer's protocol. Oligonucleotides containing a strong top2 binding site corresponding to residues 87-126 of the pBR322 plasmid were annealed, and 1 nm was incubated with nuclear extracts (5 μg) from K562 cells in reaction buffer (750 mM KCl, 0.5 mM dithiothreitol, 0.5 mM EDTA, 50 mM Tris, pH 7.4) on ice for 30 min. Then, the samples were electrophoresed on a 5% non-denaturing polyacrylamide gel at 100 V and 4°C in TBE buffer for 1.5 h. DNA was stained with SYBR Green and detected by 300 nm UV transillumination. In A35-treated samples, A35 and nuclear extracts were first incubated for 10 min on ice prior to the addition of oligonucleotide probes, and incubation continued for 30 min on ice. In super shift experiments, an antibody against top2 was first incubated with nuclear extracts for 1 h on ice, and then 1 nm probes were added and incubation continued for 30 min on ice.
ATPase assay
ATPase activity of top2α was examined by measuring the liberated phosphate of [γ-32 P]-ATP by thin layer chromatography as previously described [20,61]. Briefly, top2α (8 units) was incubated in reaction buffer A (Topogen) in the presence of 1.2 μg pBluescript-KS(+) plasmid DNA and the indicated drug for 10 min at room temperature prior to initiating the reaction with the addition of 3 μCi of [γ-32 P]-ATP (PerkinElmer; 3000 Ci/mmol) and continued to incubate at 37°C. Aliquots (2 μl) were withdrawn at various time points (0, 5 min, 10 min. 15 min, 20 min) and loaded onto pre-washed polyethyleneimine-impregnated cellulose plates (Sigma-Aldrich) and airdried. Reaction mixtures were resolved by developing plates with freshly prepared NH 4 HCO 3 (0.4 M). Plates were air-dried and exposed to autoradiographic film. Spots corresponding to free phosphate were excised from the thin layer chromatography plates and quantified using a scintillation counter.
Top2-mediated DNA cleavage
DNA cleavage assay was performed by using a Top2α Drug Screening Kit (Topogen), 0.2 μg pBR322 DNA plasmid was incubated with top2α or top2β which was obtained and purified as described previously [62] in 20 μl of assay buffer at 37°C for 30 min in the presence or absence of A35 or etoposide. DNA cleavage products were trapped by the addition of 2 μl of 10% SDS and 1.5 μl of 10 mg/ml proteinase K, and then incubation continued for 60 min at 56°C to digest top2α. The samples were mixed with 2.5 μl of loading buffer and subjected to electrophoresis in 1.4% agarose containing 0.5 mg/ml EB at 12 V for 15 hr. All DNA forms were separated and migrated as follows: RLX, SC, LNR, and NC.
ATP-independent pre-strand passage cleavage assay
Top2α-mediated plasmid DNA cleavage in the absence of nucleotide triphosphate was performed as described with slight modifications [56,63]. Top2α and terminated by addition of 2 μl of 10% SDS, followed by addition of 1.5 μl of 0.25M EDTA and continued to culture for 5 min at 37°C. Then, reactions were digested with 8 μg of proteinase K and incubated for 60 min at 56°C. DNA products were separated on a 1.4% gel containing 0.7 μg/μl ethidium bromide.
Post-strand passage cleavage assay
A top2α-mediated post-strand passage cleavage assay was performed as described above for pre-strand passage cleavage with the exception that 1 mM AMPPNP (Sigma) was added in the reactions [63].
ATP-independent DNA religation assay
Top2α-mediated religation of DNA in the absence of ATP was performed as described previously [20]. Top2α (8 units) was incubated with 0.2 μg of pBR322 plasmid DNA in a reaction buffer containing 0.01 M Tris pH 7.7, 0.05 M NaCl, 0.05 M KCl, 0.1 mM EDTA, 0.005 M CaCl 2 and 0.03 μg/μl of bovine serum albumin for 10 min at 37°C. Immediately after, various concentrations A35 or control were added, followed by the addition of 2 μl of 0.1 M EDTA to the reactions. Then, the reactions were re-initiated by the addition of 0.1 M MgCl 2 (2 μl) and transferred immediately onto ice. At the indicated time points (15,30 or 60 seconds), reactions were terminated by the addition of 1% w/v SDS and incubation continued at 37°C for 5 min. The reactions were then incubated with proteinase K (8 μg) for 30 min at 56°C. DNA products were separated on an agarose gel (1.4% w/v) containing ethidium bromide (0.5 μg/μl).
Comet assay
Top1-or Top2-mediated DNA breakage was measured with a neutral comet assay (Trevigen) for DSB detection or an alkaline comet assay for single-strand break detection as described in the manufacturer's procedures and the literature [64]. The treated cells were embedded in agarose on a slide and subjected to lysis followed by electrophoresis under neutral or alkaline conditions. During electrophoresis, the damaged and fragmented negatively charged DNA migrated away from the nucleus toward the anode. The amount of migrated DNA was a measure of the extent of DNA damage. To detect DNA, the slides were stained with SYBR Gold (Life Technology) staining solution. The slides were examined by fluorescence microscopy (Olympus), and the results were analyzed with the comet analysis software CASP to quantify DNA damage. For each drug concentration, 3 independent assays were conducted in which comet tails were analyzed in a minimum of 50 randomly selected cells in each assay, and parameter reflecting the DNA damage was represented as TM (tail moments, percentage of DNA in tail/tail length) [65].
TARDIS assay
The TARDIS assay is used to determine cleavable complex formation and has been described in detail [35]. Briefly, cells treated with various concentrations of A35 for 1.5 hr were embedded in agarose on microscope slides and subjected to a lysis procedure that removed the cell membrane and soluble proteins in lysis buffer (1% sarkosyl; 80 mM phosphate, pH6.8; 10 mM EDTA plus protease inhibitors). To remove noncovalently bound nuclear proteins, cells were washed with 1 M NaCl plus protease inhibitors. Then, slides were stained with primary antibody (specific for top1, top2α or top2β, with a dilution of 1:200) and a fluorescein isothiocyanate (FITC)-conjugated secondary antibody. Finally, the slides were mounted with an anti-quenching agent containing DAPI. Images were captured using a fluorescence microscope that separately visualizes green (FITC-stained) and blue (DAPI-stained) fluorescence.
Western blot
Whole-cell lysates were used for immunoblotting as described previously [67].
Measurement of mitochondrial membrane potential
5,5′,6,6′-tetrachloro-1,1′,3,3′-tetraethylbenzimidazolocar bocyanine iodide (JC-1) dye (sigma) exhibits potential-dependent accumulation in the mitochondria, and JC-1 selectively enters the mitochondria and spontaneously forms complexes known as J-aggregates. If the membrane potential is disturbed, the dye remains in the monomeric form, emitting only green fluorescence. Thus, this dye was employed to detect changes in the mitochondrial membrane potential (ΔΨm). JC-1 dye was added to the culture medium at 10 μg/ml and incubated for 15 min at 37°C. After mounting on the slides, the cells were immediately examined under a fluorescence microscope (Olympus).
In vivo antitumor activity
The in vivo efficacy of A35 was evaluated with HepG2 xenografts in nude mouse (purchased from Experimental Animals, Chinese Academy of Medical Sciences & Peking Union Medical College). First, 1 × 10 7 HepG2 cells suspended in 200 μl of PBS were inoculated s.c. in the right armpits of nude mice. After 3 weeks, the tumors were removed from the nude mice and dissected aseptically in sterile saline. Pieces of tumor tissue (2 mm 3 in size) were then transplanted into the right armpits of nude mice with a trocar. Tumor-bearing mice were randomly divided into 3 groups (n = 5) when the tumor size was about 100 mm 3 . A35 (10 or 20 mg/kg) was administered by intraperitoneal injection once a day until the mice were sacrificed. Tumor size was measured every 3 days, and tumor volume was determined as length × width 2 /2. Mice were killed when the tumor volumes of the control group reached 1000 mm 3 ; the tumor loads and hearts were isolated and used in further assays.
H & E staining, detection of apoptosis and immunofluorescence of tissue sections
Hearts from A35-administered mice were excised and preserved in liquid nitrogen until frozen heart sections were produced. In the cardiotoxicity positive control group, DOX was administered by intraperitoneal injection at 0.5 mg/kg once every two days. H&E staining was performed as described in [68]. Slides were incubated with 0.5% Triton for 20 min and blocked with FBS at 37°C for 30 min, and then γ-H2AX antibody conjugated fluorescein isothiocyanate (FITC) (BD) was added overnight at 4°C, followed by the addition of TUNEL reaction buffer (Roche Applied Science) for 10 min. The nuclei were stained with DAPI (Sigma) before the slides were sealed and examined with a fluorescence microscope (Olympus). The γ-H2AXpositive cells and cells with apoptotic nuclei were counted at 400× magnification to obtain the total nuclei per section. Five sections from each mouse were counted and averaged. | 10,229.8 | 2015-10-07T00:00:00.000 | [
"Biology",
"Chemistry"
] |
Ubiquitination of Neuronal Nitric-oxide Synthase in Vitro and in Vivo *
It is established that suicide inactivation of neuronal nitric-oxide synthase (nNOS) with guanidine compounds, or inhibition of the hsp90-based chaperone system with geldanamycin, leads to the enhanced proteolytic degradation of nNOS. This regulated proteolysis is mediated, in part, by the proteasome. We show here with the use of human embryonic kidney 293 cells transfected with nNOS that inhibition of the proteasome with lactacystin leads to the accumulation of immunodetectable higher molecular mass forms of nNOS. Some of these higher molecular mass forms were immunoprecipitated by an anti-ubiquitin antibody, indicating that they are nNOS-polyubiquitin conjugates. Moreover, the predominant nNOS-ubiquitin conjugate detected in human embryonic kidney 293 cells, as well as in rat brain cytosol, migrates on SDS-polyacrylamide gels with a mobility near that for the native monomer of nNOS and likely represents a conjugate containing a few or perhaps one ubiquitin. Studies in vitro with the use of 125I-ubiquitin and reticulocyte extracts could mimic this ubiquitination reaction, which was dependent on ATP. The heme-deficient monomeric form of nNOS is preferentially ubiquitinated over that of the heme-sufficient functionally active homodimer. Thus, we have shown for the first time that ubiquitination of nNOS occurs and is likely involved in the regulated proteolytic removal of non-functional enzyme.
It is established that suicide inactivation of neuronal nitric-oxide synthase (nNOS) with guanidine compounds, or inhibition of the hsp90-based chaperone system with geldanamycin, leads to the enhanced proteolytic degradation of nNOS. This regulated proteolysis is mediated, in part, by the proteasome. We show here with the use of human embryonic kidney 293 cells transfected with nNOS that inhibition of the proteasome with lactacystin leads to the accumulation of immunodetectable higher molecular mass forms of nNOS. Some of these higher molecular mass forms were immunoprecipitated by an anti-ubiquitin antibody, indicating that they are nNOS-polyubiquitin conjugates. Moreover, the predominant nNOS-ubiquitin conjugate detected in human embryonic kidney 293 cells, as well as in rat brain cytosol, migrates on SDS-polyacrylamide gels with a mobility near that for the native monomer of nNOS and likely represents a conjugate containing a few or perhaps one ubiquitin. Studies in vitro with the use of 125 I-ubiquitin and reticulocyte extracts could mimic this ubiquitination reaction, which was dependent on ATP. The hemedeficient monomeric form of nNOS is preferentially ubiquitinated over that of the heme-sufficient functionally active homodimer. Thus, we have shown for the first time that ubiquitination of nNOS occurs and is likely involved in the regulated proteolytic removal of nonfunctional enzyme.
Nitric-oxide synthases (NOS) 1 are cytochrome P450-like hemoprotein enzymes that catalyze the conversion of L-arginine to citrulline and nitric oxide by a process that requires NADPH and molecular oxygen (1)(2)(3)(4). The enzymes also require bound FMN, FAD, (6R)-5,6,7,8-tetrahydro-L-biopterin (BH 4 ), and Ca 2ϩ /calmodulin for activity. Three main isoforms of NOS have been identified: isoform I or neuronal NOS (nNOS), which is constitutively expressed in a variety of neuronal cells as well as other cells; isoform II or inducible NOS, which is usually not constitutively expressed but can be induced by bacterial lipopo-lysaccharide and/or cytokines in macrophages and other cells; and isoform III or endothelial NOS, which is expressed in endothelial cells (5,6).
Both nNOS and endothelial NOS are hsp90-associated proteins and inhibition of hsp90 by geldanamycin causes the loss of NOS protein in cells (7,8). Based on studies with other hsp90-associated proteins, this loss of NOS is likely due to enhanced proteasomal degradation, presumably as a mechanism for the removal of misfolded proteins (9 -11). It has been shown for some proteins, including p53 (11) and tyrosine kinase p185 c-erbB-2 (12), that geldanamycin treatment leads to the accumulation of ubiquitinated forms of the target protein.
More recently, our laboratory has shown that suicide inactivation of nNOS enhances the proteolytic removal of the enzyme that is mediated, in part, by the proteasome (13). Based on these studies, we asked if nNOS could be regulated by ubiquitination prior to proteolytic removal by the proteasome in HEK 293 cells transfected with nNOS. This cellular model was chosen since it is the same model used in studies on the suicide inactivation and hsp90 regulation of nNOS (7,13).
In the current study, we have found that inhibition of the proteasome leads to the accumulation of higher molecular mass forms of nNOS, which are in part due to conjugation with ubiquitin (Ub). The major nNOS-Ub conjugate did not greatly change the relative mobility of the protein on SDS-PAGE gels, suggesting that the conjugate contains a few or perhaps one Ub. This limited ubiquitination of nNOS could be reproduced with an in vitro system containing purified nNOS, reticulocyte extracts, ATP, and Ub. Studies with 125 I-Ub indicate that the heme-deficient monomeric form of nNOS is preferentially ubiquitinated over the dimeric active form of nNOS. Thus, these studies establish for the first time that ubiquitination of nNOS occurs and likely plays a regulatory role in the removal of misfolded or non-functional protein.
Materials
The affinity-purified rabbit IgG used for Western blotting of nNOS was from Transduction Laboratories (Lexington, KY) or from RBI (Natick, MA) where indicated. The immunogen used for production of the antibody from Transduction Laboratories was a C-terminal peptide (residues 1095-1289) of nNOS, whereas that from RBI was an Nterminal peptide (residues 251-270). The affinity-purified IgG used for Western blotting of Ub was from Dako Corp. (Carpinteria, CA). The rabbit antiserum used to immunoprecipitate nNOS was raised against rat neuronal NOS and was the generous gift of Dr. Lance Pohl (NHLBI, National Institutes of Health, Bethesda, MD). The antibody was affinity purified prior to use. N G -Nitro-L-arginine (NNA) and the rabbit antiserum used to immunoprecipitate Ub were purchased from Sigma. Peroxidase conjugated anti-rabbit IgG antibody was from Roche Molecular Biochemicals. 125 I-Labeled Ub was purchased from Amersham Pharmacia Biotech. Lactacystin was purchased from BIOMOL (Plymouth Meeting, PA). BH 4 was purchased from Dr. Schirk's Laboratory (Jona, Switzerland). The cDNA for rat neuronal NOS was kindly provided by Dr. Solomon Snyder (Johns Hopkins Medical School, Balti-more, MD). Male Wistar rats (150 -250 g) were purchased from Charles River Laboratories (Wilmington, MA).
Methods
Cell Culture and Cytosol Preparation-Human embryonic kidney (HEK) 293 cells stably transfected with rat neuronal NOS by Bredt et al. (14) were obtained from Dr. Bettie Sue Masters (University of Texas Health Science Center, San Antonio, TX). HEK 293 cells were cultured in Dulbecco's modified Eagle's medium supplemented with 10% calf serum and G418 as described previously (3). When treated with lactacystin, cells were seeded at a density of 0.8 ϫ 10 5 cells/ml and grown for 48 h. After 48 h, the cell medium was aspirated and replaced with Dulbecco's modified Eagle's medium containing 0.1 mM arginine and supplemented with either 10 M lactacystin or H 2 O as a vehicle control. HEK cells were harvested, washed with ice-cold phosphate-buffered saline, and homogenized with a Tenbroeck ground glass homogenizer in three volumes of HE lysis buffer (10 mM Hepes, pH 7.4, 0.32 M sucrose, 2 mM EDTA, 6.0 mM phenylmethylsulfonyl fluoride, 10 g/ml leupeptin, 2 g/ml aprotinin, 10 g/ml trypsin inhibitor, 10 mM Na 3 VO 4 , 1% Nonidet P-40, 5 mM N-ethylmaleimide (NEM)). Homogenates were centrifuged at 16,000 ϫ g for 10 min with the supernatant taken as the cytosolic fraction. Cytosol from rat brains was prepared as above in three volumes of HE lysis buffer.
Immunoprecipitation and Western Blotting-nNOS was immunoadsorbed from 50 l of HEK 293 cytosol or 100 l of rat brain cytosol with 20 l of anti-nNOS IgG and 10 l of protein A-Sepharose in a total volume of 300 l of HE lysis buffer for 2 h at 4°C. In studies where Ub was immunoadsorbed, 25 l of anti-Ub IgG replaced anti-nNOS IgG. Immune pellets were washed three times with 1 ml of ice-cold HE lysis buffer. Immune pellets were boiled in SDS sample buffer containing dithiothreitol (6.0 mg/ml), and the proteins were resolved on 6% SDSpolyacrylamide gels and transferred to nitrocellulose membranes for 6 h at 850 mA. The membranes were probed with either a 0.1% anti-nNOS polyclonal antibody from Transduction Laboratories or 0.1% anti-nNOS antibody from RBI or 0.1% anti-Ub polyclonal antibody (Dako Corp.). Prior to probing with the anti-Ub antibody, the nitrocellulose membranes were autoclaved in distilled H 2 O for 20 min. An anti-rabbit IgG conjugated to peroxidase (Roche Molecular Biochemicals) was used as a secondary antibody. Immunoreactive bands were visualized with the use of enhanced chemiluminescence reagent (Super Signal, Pierce) and X-Omat film (Eastman Kodak Co.).
Expression and Purification of nNOS-nNOS was expressed in Sf9 cells using a recombinant baculovirus as described previously (7). Sf9 cells were grown in SFM 900 II serum-free medium (Life Technologies, Inc.) supplemented with Cytomax (Kemp Biotechnology, Rockville, MD) in suspension cultures maintained at 27°C with continuous shaking (150 rpm). Cultures were infected in log phase of growth with recombinant baculovirus at a multiplicity of infection of 1.0. After 48 h, oxyhemoglobin (25 M) was added as a source of heme. Cells were harvested and lysates prepared as described previously (7). Lysates from infected Sf9 cells (8 ϫ 10 9 ) were centrifuged at 100,000 ϫ g for 1 h. The supernatant fraction was loaded onto a 2Ј,5Ј-ADP Sepharose column, and the nNOS was affinity-purified as described previously (15), except that 10 mM 2Ј-AMP in high salt buffer was used to elute the protein. The nNOS containing fractions were loaded onto a Sephacryl S-300 HR gel filtration column (2.6 ϫ 100 cm, Pharmacia Biotech) equilibrated with 50 mM Tris-HCl, pH 7.4, containing 100 mM NaCl, 10% glycerol, 0.1 mM EDTA, 0.1 mM dithiothreitol, and 10 M BH 4 . The proteins were eluted at a flow rate of 1.3 ml/min, and 1.5 ml-fractions were collected and analyzed for protein content and NOS activity. The fractions containing NOS activity were pooled and supplemented with 10 M FAD, 10 M FMN, and 10 M BH 4 before concentration with the use of a Centriplus concentrator (Amicon, Beverly, MA). The concentrated enzyme was aliquoted and stored at Ϫ80°C. For preparation of purified heme-deficient apo-nNOS, the same procedure was followed except that oxyhemoglobin was not added to the insect cells during expression.
In Vitro Ubiquitination of nNOS-To conjugate Ub to nNOS, purified nNOS (10 g) was incubated for 1 h at 37°C in a total volume of 60 l of HKD buffer (10 mM Hepes, 100 mM KCl, 2 mM dithiothreitol) containing 50 M Ub, an ATP-regenerating system (50 mM ATP, 250 mM creatine phosphate, 20 mM MgOAc, and 100 units/ml creatine phosphokinase), and 2 mg/ml DE52-retained fraction of rabbit reticulocyte lysate (RET), which was prepared as described previously (7). After the incubation, reactions were quenched with SDS sample buffer containing dithiothreitol (6 mg/ml) and Western blotted with a 0.1% polyclonal antibody from Dako Corp. 125 I-Ub was conjugated to nNOS using a procedure modified from Ref. 16 by incubating 5 g of nNOS for 2 h at 37°C in a total volume of 25 l of HKD buffer containing 0.2 pmol of 125 I-Ub (9 ϫ 10 5 cpm), ATP-regenerating system, and 0.8 mg/ml RET. The reaction was halted by the addition of SDS sample buffer, and aliquots were resolved on 6% SDS-polyacrylamide gels. To separate nNOS monomers and dimers by SDS-PAGE, 100 M BH 4 and 100 M L-arginine were included in the SDS sample buffer and samples were kept ice-cold prior to loading on gels. This method has previously been described by Klatt et al. (17) to prevent the dissociation of nNOS dimers prior to and during electrophoresis. Gels were stained with Coomassie Blue and dried using a Bio-Rad model 583 gel dryer. Dried gels were then exposed to X-Omat film overnight at Ϫ80°C. The radioactivity was quantified with the use of a PhosphorImager (model 445 SI, Molecular Dynamics, Sunnyvale, CA). Fig. 1A, treatment of nNOS-transfected HEK 293 cells with 10 M lactacystin leads to the appearance of nNOS bands, which were detected by probing with an antibody directed against the C-terminal region of nNOS, that were of higher molecular mass than the native enzyme (cf. lane 2 with lane 1). HEK 293 cells that have not been transfected with nNOS do not give these intensely staining higher molecular mass bands (lane 4), indicating specificity of the antibody. As shown in Fig. 1B, higher molecular mass bands were also detected with an antibody directed against the N-terminal region of nNOS (lane 2). The pattern of higher molecular mass forms of nNOS differed from that found with the use of the antibody directed against the C terminus, suggesting that some of the epitopes may be masked. Moreover, these bands were only found when the lysis buffer contained NEM, suggesting an instability of these higher molecular mass nNOS forms due to a thiol-sensitive degradative pathway (cf. lane 2 with lane1).
Inhibition of the Proteasome Leads to Accumulation of Higher Molecular Mass Forms of nNOS in Vivo-As shown in
Ubiquitination of nNOS in Vivo-As expected from previous studies (12,18,19), lactacystin treatment of HEK 293 cells was found to cause a large increase in the level of immunodetectable Ub-protein conjugates (data not shown). Moreover, Ubprotein conjugates are known to be deubiquitinated by an NEM-sensitive pathway (20). Thus, we suspected that the higher molecular mass forms of nNOS were due to conjugation with Ub. As shown in Fig. 2A, immunoprecipitation of Ubconjugates from cytosol that was prepared from lactacystin- treated nNOS-transfected HEK 293 cells revealed higher molecular mass bands that were recognized by anti-nNOS IgG in the immune (I), but not non-immune pellet (N), indicating the formation of nNOS-Ub conjugates. Unexpectedly, we observed a very dark band corresponding to the nNOS monomer that could not be entirely explained by nonspecific immunoprecipitation of nNOS (cf. lanes N with lanes I). As will be described below, this is due to ubiquitination of nNOS in a manner that did not greatly alter the relative mobility of the protein on SDS-PAGE.
As shown in Fig. 2B, anti-nNOS IgG was able to immunoprecipitate large amounts of the native nNOS but not the higher molecular mass forms of the enzyme (left panel). This may be due to masking of nNOS epitopes under the nondenaturing conditions of the immunoprecipitation. Even though we could not address the nature of the higher molecular mass species by immunoprecipitation with anti-nNOS IgG, we found that Western blotting and probing of the nNOS-immunoprecipitate with an anti-Ub IgG gave an immune-specific signal corresponding approximately to the mobility of the nNOS monomer (right panel, cf. lane I with lane N). This signal was increased in lactacystin-treated cells (LC) over that found for control cells (CT), strongly suggesting that the signal was not due to cross-reactivity of the antibody and that ubiquitination of nNOS occurred. Furthermore, the ubiquitination did not greatly alter the relative mobility of the conjugate on SDS-PAGE gels, suggesting that the conjugate contained only one or a few Ub molecules. To further confirm these observations, an in vitro system was investigated as described below.
Ubiquitination of nNOS in Vitro-As shown in Fig. 3A, ubiquitination of purified nNOS was also found to occur in vitro when RET and ATP were present during the incubation (cf. lane 3 with lane 1). Again, the nNOS conjugate appeared as a band with the relative mobility on SDS-PAGE gels near that of the monomer of nNOS. The changes in intensity of this nNOS-Ub band were quantified by laser densitometry as shown in Fig. 3B. The ubiquitination was greatly increased when exogenous Ub was added (lane 4). The smear of higher molecular mass proteins recognized by the Ub antibody were not due to nNOS conjugates, as they were not recognized by anti-nNOS IgG (data not shown), and likely represent various ubiquitinated proteins from RET. The immunoreactivity to the Ub antibody found here was not due to cross-reactivity as the same amount of nNOS was present in all samples. To further support that ubiquitination of nNOS occurs, we utilized 125 I-Ub in the in vitro system. As shown in Fig. 4, the radiolabeled Ub was conjugated to nNOS only when incubated with RET and ATP (cf. lane 3 with lane 1). There were three higher molecular mass bands that were radiolabeled, but these were found in reaction mixtures not containing nNOS and are due to ubiquitination of proteins found in RET (cf. lane 3 with lane 4). Thus, taken together, nNOS was ubiquitinated in a manner that had little effect on the relative mobility of nNOS. Higher molecular mass nNOS-Ub conjugates were not observed in these in vitro studies.
Under conditions where 125 I-Ub could be conjugated to nNOS in vitro, we asked whether the monomeric or dimeric forms of nNOS were ubiquitinated. The dimeric or catalytically active form of nNOS is resistant to SDS and can be detected as a dimer if the gel and sample are kept cold (17). We utilized this procedure to separate monomer from dimer after ubiquitination with 125 I-Ub and subsequently analyzed the samples by Coomassie Blue (CB) staining and PhosphorImager analysis for radioisotope ( 125 I-Ub) (Fig. 5). As shown in Fig. 5A, our nNOS preparation was a mixture of monomer and dimer as determined by Coomassie Blue staining (lane 9). The nNOS was converted to the monomer during incubation with RET and ATP (lane 2) and reflects the instability of the enzyme during the incubation. The nNOS could be stabilized as a dimer if 10 M NNA was added to the ubiquitination reaction mixture (lane 4). Conversely, we have a purified preparation of inactive heme-deficient apo-nNOS that exists only as a monomer (lane 3). We have previously shown that this apo-nNOS preparation can be functionally reconstituted to give the active dimer (7).
As shown in Fig. 5B, treatment of nNOS gave 125 I-Ub conjugated to the nNOS monomer (lane 2) as well as to other proteins found in RET (lane 1). The intensity of the 125 I-Ub-nNOS monomer band increased when heme-deficient apo-NOS was used instead of native nNOS (lane 3), whereas the intensity decreased when the dimeric nNOS was stabilized with 10 M NNA (lane 4). An 125 I-Ub-labeled band corresponding to the dimer of nNOS was not observed, even though a substantial amount of dimer was found by Coomassie Blue staining (cf. panel A, lane 4, with panel B, lane 4). As shown in Fig. 5C, the amount of radioiodine was quantified in samples that were kept cold and thus processed to detect dimer (lanes 2-4) or in samples that were boiled and thus processed under denaturing conditions to detect total nNOS as the monomer (lanes 6 -8).
The lack of increase in radioactivity found on the monomer under the denaturing conditions over that when the dimer was stabilized, as well as the lack of radioactivity migrating with the dimer, strongly suggests that the dimer was not ubiquitinated. Moreover, the approximately 2-fold increase in 125 I-Ub associated with the heme-deficient apo-NOS over that found for nNOS treated with NNA indicated that the monomeric form was preferentially ubiquitinated. It is noteworthy that the amount of nNOS monomer as determined by Coomassie Blue staining in the NNA-treated sample was approximately onehalf of that found for heme-deficient apo-nNOS (cf. lane 4 with lane 3). This indicated that the amount of 125 I-Ub bound to the monomeric nNOS was approximately equal in these samples and further supported the conclusion that only the monomer was ubiquitinated.
Ubiquitinated nNOS in Rat Brain Cytosols-Cytosolic fractions from male Wistar rats were prepared and the nNOS was immunoprecipitated. As shown in Fig. 6, probing the immunoprecipitate with antibody to Ub gave a signal at the molecular mass of monomeric nNOS that was immune-specific (right panel, cf. lane I with lane N). Thus, native nNOS exists, in part, as a Ub conjugate. DISCUSSION Suicide inactivation of nNOS enhances the proteolytic degradation of the enzyme, in part, due to the proteasome (13). We show here that inhibition of the proteasome caused the accumulation of ubiquitinated forms of nNOS, strongly suggesting that ubiquitination regulates nNOS degradation. That ubiquitination is a mechanism for proteasomal removal of proteins is well recognized (19,21,22), although this is the first report of the ubiquitination of NOS. These observations are consistent with the finding that liver microsomal P450 cytochromes, which are heme-thiolate enzymes similar to NOS, are ubiquitinated and proteasomally degraded after suicide inactivation (23,24).
In the case of suicide inactivation of liver P450 cytochromes, structural changes and not the functional inactivation per se appears to be the "trigger" for proteolysis (25,26). In particular, the cross-linking of heme to the protein, which occurs during suicide inactivation of cytochrome P450, plays a prominent role in the enhanced turnover of cytochrome P450 (23)(24)(25)27). This process is selective in that other structural changes that occur during suicide inactivation, including the N-alkylation of the heme (25), the formation of apoprotein (28,29), and the covalent alteration of the protein (30), all fail to enhance the proteolysis of the affected P450 cytochromes.
In the case of nNOS, it is not known if heme-protein cross-links can lead to enhanced proteolytic degradation, and further studies FIG. 4. In vitro conjugation of 125 I-Ub to nNOS. nNOS was treated with a DE52-retained fraction of rabbit reticulocyte lysate (RET), an ATP-regenerating system (ATP), and 125 I-Ub (9 ϫ 10 5 cpm) as described under "Experimental Procedures." Where indicated RET, ATP or nNOS were omitted from the reaction mixtures. The reaction mixtures were resolved by SDS-PAGE and the radioactivity detected by exposure of the gel to x-ray film.
FIG. 5. Preferential ubiquitination of the nNOS monomer.
Ubiquitination reactions were as described in Experimental Procedures except that the mixtures were scaled up to a total volume of 60 l. One-half of each reaction mixture was processed to detect the monomer and dimer of nNOS (lanes [1][2][3][4]9) and the other half of the reaction mixture was used to detect total nNOS as the monomer (lanes 5-8). 6. Immunoprecipitation of nNOS-Ub conjugates from rat brain cytosol. Cytosol from male Wistar rats was prepared as described in Experimental Procedures. nNOS was immunoadsorbed from rat brain cytosol with an anti-nNOS antibody (I) or non-immune IgG (NI). Immune pellets were washed, analyzed by SDS-PAGE, and Western blotted with a polyclonal antibody to nNOS from RBI (left panel) or a polyclonal antibody to Ub from Dako (right panel).
are needed to define this pathway of nNOS heme alteration. We have found, however, that the functionally inactive monomeric form of the protein was preferentially ubiquitinated over the enzymatically active homodimer. Moreover, the slowly reversible inhibitor, NNA, stabilized the dimeric state of nNOS and decreased the extent of ubiquitination. This is consistent with the observation that NNA causes a slight increase in the amount of immunodetectable nNOS in vivo (31) and slows the rate of proteolytic degradation of nNOS in HEK 293 cells (13). That functionally inactive monomeric nNOS would be processed for removal is consistent with the recent postulate that ubiquitination and proteasomal degradation as well as chaperone-based folding and unfolding are mechanisms for cellular "quality control" of proteins (32). In this regard, our laboratory has also shown that inhibition of the hsp90-based chaperone system leads to the enhanced degradation of nNOS (7). Thus, it is possible that under conditions where heme is not available for assembly of newly synthesized nNOS to functionally active homodimers in vivo that the excess monomeric nNOS would be ubiquitinated and proteolytically removed. Alternatively, the monomeric form of nNOS may form during autoinactivation of the holoenzyme (Fig. 5). Thus, ubiquitination may be critical in regulating the level of monomeric nNOS in vivo. The amount of monomeric NOS may be important as a reserve pool of protein that could be rapidly assembled in the absence of protein synthesis to give the functional homodimer (33).
The major ubiquitinated form of nNOS found in vitro and in vivo was a conjugate that did not greatly alter the relative mobility of nNOS, suggesting that only one or a few Ub are attached. It is likely that the limited ubiquitination of nNOS gives a more stable conjugate whereas the polyubiquitinated forms, which were only observed in cells treated with lactacystin, are rapidly proteolyzed. It is noteworthy that the major conjugate observed in rat brains was that containing only limited amounts of Ub. The lability of the polyubiquitinated forms may be a possible explanation for not detecting the polyubiquitinated conjugates in our in vitro studies. The mono-ubiquitinated forms of proteins are thought to be involved in subcellular targeting (see Ref. 34 for review), and it would be important to further define if limited ubiquitination of nNOS is biologically important in this regard as a certain portion of nNOS exists in the membrane particulate fraction (35).
The sites on nNOS that serve to conjugate Ub are not known. It is likely that some structural change, which is related to inactivation and monomerization, serves to expose a lysine residue for Ub conjugation. For some proteins such as IB␣ and the large subunit of RNA polymerase II, phosphorylation serves to initiate structural changes that lead to ubiquitination (36,37). In this regard, the phosphorylation of nNOS has been shown to occur on a serine residue and lead to the inhibition of the enzyme (38,39). Moreover, suicide-inactivated liver microsomal P450 cytochromes are phosphorylated prior to ubiquitination and degradation (23). Thus, the elucidation of the structural features that predispose nNOS to ubiquitination will aid in understanding the post-translational events that govern the steady state levels of nNOS. The current report describes the ubiquitination of nNOS and the initial studies to define these structural features. | 5,934.6 | 2000-06-09T00:00:00.000 | [
"Biology",
"Chemistry"
] |
Effects of the distant population density on spatial patterns of demographic dynamics
Spatio-temporal patterns of population changes within and across countries have various implications. Different geographical, demographic and econo-societal factors seem to contribute to migratory decisions made by individual inhabitants. Focusing on internal (i.e. domestic) migration, we ask whether individuals may take into account the information on the population density in distant locations to make migratory decisions. We analyse population census data in Japan recorded with a high spatial resolution (i.e. cells of size 500×500 m) for the entirety of the country, and simulate demographic dynamics induced by the gravity model and its variants. We show that, in the census data, the population growth rate in a cell is positively correlated with the population density in nearby cells up to a distance of 20 km as well as that of the focal cell. The ordinary gravity model does not capture this empirical observation. We then show that the empirical observation is better accounted for by extensions of the gravity model such that individuals are assumed to perceive the attractiveness, approximated by the population density, of the source or destination cell of migration as the spatial average over a circle of radius ≈1 km.
KT, 0000-0003-2014-5410; NM, 0000-0003-1567-801X Spatio-temporal patterns of population changes within and across countries have various implications. Different geographical, demographic and econo-societal factors seem to contribute to migratory decisions made by individual inhabitants. Focusing on internal (i.e. domestic) migration, we ask whether individuals may take into account the information on the population density in distant locations to make migratory decisions. We analyse population census data in Japan recorded with a high spatial resolution (i.e. cells of size 500 × 500 m) for the entirety of the country, and simulate demographic dynamics induced by the gravity model and its variants. We show that, in the census data, the population growth rate in a cell is positively correlated with the population density in nearby cells up to a distance of 20 km as well as that of the focal cell. The ordinary gravity model does not capture this empirical observation. We then show that the empirical observation is better accounted for by extensions of the gravity model such that individuals are assumed to perceive the attractiveness, approximated by the population density, of the source or destination cell of migration as the spatial average over a circle of radius ≈1 km.
Introduction
Demography, particularly spatial patterns of population changes, has been a target of intensive research because of its economical and societal implications, such as difficulties in upkeep of infrastructure [1][2][3], policymaking related to city planning [1,2] and integration of municipalities [3]. A key factor shaping spatial patterns of demographic dynamics is migration. [4][5][6]. These and other factors are often non-randomly distributed in space, creating spatial patterns of migration and population changes over time. A number of models have been proposed to describe and predict spatio-temporal patterns of human migration [7][8][9][10][11][12][13].
Among these models, a widely used model is the gravity model (GM) and its variants [8,10,14,15]. The GM assumes that the migration flow from one location to another is proportional to a power (or a different monotonic function) of the population at the source and destination locations and the distance between them. The model has attained reasonably accurate description of human migration in some cases [8,16,17], as well as other phenomena such as international trades [18,19] and the volume of phone calls between cities [20,21].
Studies of migration, such as those using the GM [8,17] and other migration models [11,22], are often based on subdivisions of the space that define the unit of analysis such as administrative units (e.g. country and city). However, the choice of the unit of analysis is often arbitrary. Humans whose migratory behaviour is to be modelled microscopically, statistically or otherwise, may pay less attention to such a unit than a model assumes when they make a decision to move home. This may be particularly so for internal (i.e. domestic) migrations rather than for international migrations because boundaries of administrative units may impact inhabitants less in the case of internal migrations than international migrations. This issue is related to the modifiable areal unit problem in geography, which stipulates that different units of analysis may provide different results [23]. For example, particular partitions of geographical areas can affect parameter estimates of gravity models [24]. To overcome such a problem, criteria for selecting appropriate units of analysis have been sought [24][25][26][27][28]. Another strategy to address the issue of the unit of analysis is to employ models with a maximally high spatial resolution. For example, a recently proposed continuous-space GM assumes that the unit of analysis is an infinitesimally small spatial segment [12]. This approach implicitly assumes that the unit of analysis, which a modelled individual perceives, is an infinitesimally small spatial segment. In fact, humans may regard a certain spatial region, which may be different from an administrative unit and have a certain finite but unknown size, as a spatial unit based on which they make a migration decision. If this is the case, individuals may make decisions by taking into account the environment in a neighbourhood of the current residence and/or the destination of the migration up to a certain distance. Here, we examine this possibility by combining data analysis and modelling, complementing past research on the choice of geographical units for understanding human migration [24][25][26][27][28].
In this paper, we analyse demographic data obtained from the population census of Japan carried out in 2005 and 2010, which are provided with a high spatial resolution [29]. We hypothesize that the growth rate of the population is influenced by the population density near the current location as well as that at the focal location, where each location is defined by a 500 × 500 m cell in the grid according to which the data are organized. We provide evidence in favour of this hypothesis through correlation-based data analysis. Then, we argue that the GM is insufficient to produce the empirically observed spatial patterns of the population growth. We provide extensions of the GM that better fit the empirical data, in which individuals are assumed to aggregate the population of nearby cells to calculate the attractiveness of the source or destination cell of migration.
Dataset
We analysed demographic dynamics using data from the population census in Japan [29], which consisted of measurements from K = 1 944 711 cells of size 500 × 500 m. The census is conducted every 5 years. We used data from the censuses conducted in 2005 and 2010 because data with such a high spatial resolution over the entirety of Japan were only available for these years. We also ran the following analysis using the data from the census conducted in 2000 (appendix A), which were somewhat less accurate in counting the number of inhabitants in each cell than the data in 2005 and 2010 [30]. In the main text, we refer to the two time points 2005 and 2010 as t 1 and t 2 , respectively. The number of inhabitants in cell i (1 ≤ i ≤ K) at time t is denoted by n i (t). We used the latitude and longitude of the centroid of each cell to define its position. Basic statistics of the data at the three time points are presented in table 1.
Spatial correlation
We defined the distance between cells i and j, denoted by d ij , as that between the centroids of the two cells in kilometres. We measured the spatial correlation in the number of inhabitants between a pair of which is essentially the Pearson correlation coefficient calculated from all pairs of cells at a distance ≈d apart. In equation (2.1),n = K i n i /K is the average number of inhabitants in an inhabited cell; σ 2 = K i (n i −n) 2 /K is the variance of the number of inhabitants in an inhabited cell;
Correlation between the growth rate and the population density in nearby cells
In the analysis of the growth rate of cells described in this section, we only used focal cells i whose population size was between 10 and 100 at t 1 . We did so because the growth rate of less populated cells tended to fluctuate considerably and the growth rate of a more populated cell tended to be ≈0. We carried out the same set of analysis for cells whose population size was greater than 100 to confirm that the main results shown in the following sections remain qualitatively the same (appendix C). It should be noted that cell i may be partially water-surfaced.
To calculate the correlation between the rate of population growth in a cell and the population density in cells nearby, we first divided the entire map of Japan into square regions of approximately 50 × 50 km. The regions were tiled in a 64 × 45 grid to cover the whole of Japan. The minimum and maximum longitudes in the dataset were 122.94 and 153.98, respectively. Therefore, we divided the range of the longitude into 64 windows, i.e. [122.4, 123), [123, 123.5),. . ., [153.5, 154]. Similarly, the minimum and maximum latitudes were 45.5229 and 24.0604, respectively. We thus divided the range of the latitude into 45 windows, i.e. [24,24,5), [24.5, 25),. . ., [45.5,46]. We classified each cell into one of the 64 × 45 regions on the basis of the coordinate of the centroid of the cell. Note that there were sea regions without any inhabitant. A region included 9600 cells at most.
The growth rate of cell i in the 5 years is given by We denoted by D i (d) the population density at time t 1 averaged over the cells j whose distance from cell i, d ij , is approximately equal to d, i.e. d < d ij ≤ d + 1. We calculated the Pearson correlation coefficient between the population growth rate (i.e. R i ) and D i (d), restricted to the cells in region k, i.e.
whereR k andD k (d) are the average of R i and D i (d) over the cells in region k, respectively. A positive value of ρ k (d) is consistent with our hypothesis that the population growth rate is influenced by the population density in different cells. We remind that the summation in equation (2.3) is taken over the cells whose population is between 10 and 100. The correlation coefficient ρ k (d) ranges between −1 and 1. We did not exclude water-surface cells or partially water-surface cells j from the calculation of D i (d calculate the single correlation coefficient between R i and D i (d) for the entirety of Japan. In this way, we aimed to suppress fluctuations in individual ρ k (d). We show ρ k (d) for each region in appendix B. We also show ρ k (d) for region k such that all cells within region k and those within 30 km from any cell in region k are not in the sea in appendix B.
To examine the statistical significance ofρ(d), we carried out bootstrap tests by shuffling the number of inhabitants in the populated cells at t 2 without shuffling that at t 1 and calculatingρ(d). We generated 100 randomized samples and calculated the distribution ofρ(d) for each sample. We deemed the value ofρ(d) for the original data to be significant if it was not included in the 95% confidential interval (CI) calculated on the basis of the 100 randomized samples.
Gravity model
In the standard gravity model (GM), the migration flow from source cell i to destination cell j ( = i), T ij , is given by where G, α, β and γ are parameters. Because α, β and γ are usually assumed to be positive, equation (2.4) implies that the migration flow is large when the source or the destination cell has many inhabitants or when the two cells are close to each other. In addition to the GM, we investigated two extensions of the GM in which the migration flow depends on the numbers of inhabitants in a neighbourhood of cell i or j. The first extension, which we refer to as the GM with the spatially aggregated population density at the destination (d-aggregate GM), is given by where N j (d ag ) is the number of inhabitants contained in the cells within distance d ag km from cell j. We remind that the distance between two cells is defined as that between the centroids of the two cells. The rationale behind this extension and the next one is that humans may perceive the population density at the source or destination as a spatial average. A similar assumption was used in a model of city growth, where cells close to inhabitant cells were more likely to be inhabited [32]. The second extension of the GM aggregates the population density around the source cell. To derive this variant of the GM, we rewrite equation (2.4) as T ij = n i × n α−1 i n β j /d γ ij and interpret that each individual in cell i is subject to the rate of moving to cell j, i.e. n α−1 i n β j /d γ ij . The second extension, which we refer to as the GM with the aggregated population density at the source (s-aggregate GM), is defined by Unless we state otherwise, we set d ag = 0.65 in the d-aggregate and s-aggregate GMs, which is equivalent to the aggregation of a cell with the neighbouring four cells in the north, south, east and west. We will also examine larger d ag values. Using one of the three GMs, we projected the number of inhabitants in each cell at time t 2 given the empirical data at time t 1 . The predicted number of inhabitants in cell i at time t 2 , denoted byn i (t 2 ), is given byn (2.7) We refer to K j=1 T ji , K i=1 T ij and K j=1 T ji − N j=1 T ij as the inflow, outflow and net flow of the population at cell i, respectively.
The projection of the growth rate, denoted byR i , is defined byR , based on which we calculatedρ(d) for the model. We set G = 1 because the value ofρ(d) does not depend on G.
Spatial distribution of inhabitants
The spatial distribution of the number of inhabitants at time t 2 is shown in figure 1. The figure suggests centralization of the number of inhabitants in urban areas. We calculated the Gini index, defined by K j =1 |n i − n j |/n, to quantify heterogeneity in the population density across cells; it is often used for measuring wealth inequality. The Gini index at t 1 and t 2 was equal to 0.797 and 0.804, respectively, suggesting a high degree of heterogeneity. The survival function of the number of inhabitants in a cell at t 1 and t 2 is shown in figure 2. The figure suggests that a majority of cells contains a relatively small number of inhabitants, whereas a small fraction of cells has many inhabitants. Figure 1 suggests the presence of spatial correlation in the population density, as observed in other countries [31]. Therefore, we measured the spatial correlation coefficient in the population size between a pair of cells, C(d), where d was the distance between a pair of cells. Figure 3 indicates that C(d) is substantially positive up to d ≈ 70 km, confirming the presence of spatial correlation. This correlation length was shorter than that observed in previous studies of data recorded in the USA [31] (≈1000 km) and spatial correlation in the population growth rate in Spain [33] (≈500 km) and the USA [34] (over 5000 km).
Effects of the population density in nearby cells on migration
We measuredρ(d), which quantifies the effect of the population in cells at distance d on the population growth in a focal cell. Figure 4 showsρ(d) as a function of d. The values ofρ(d) were the largest at d = 0. In other words, the effects of the population density within 1 km is the most positively correlated with the growth rate of a cell. This result reflects the observation that highly populated cells tend to grow and vice versa [35][36][37] (but see [38]). As d increased,ρ(d) decreased and reached ≈0 for d ≥ 20 km. This result suggests that cells surrounded by cells with a large (small) population density within ≈20 km are more likely to gain (lose) inhabitants.
The observed correlation between the population growth rate of a cell and the population of nearby cells may be explained by the combination of spatial correlation in the population density (figure 3) and positive correlation between the population growth rate and the population density in the same cell. To exclude this possibility, we measuredρ(d) as the partial correlation coefficient, modifying equation (2.3), controlling for the population size of a focal cell. The results were qualitatively the same as those based on the Pearson correlation coefficient (appendix D).
Gravity models
Various mechanisms may generate the dependence of the population growth rate in a cell on different cells (up to ≈20 km apart), including heterogeneous birth and death rates that are spatially correlated. Here, we focused on the effects of migration as a possible mechanism to generate such a dependency. We simulated migration dynamics using the gravity model [8,10,15] and its variants and compared the projection obtained from the models with the empirical data. We did not consider the radiation models [11,12] including intervening opportunity models [7] because our aim here was to qualitatively understand some key factors that may explain the effects of distant cells observed in figure 4 rather than to reveal physical laws governing migration.
In figure 4, we compareρ(d) between the empirical data and those generated by the GM, d-aggregate GM and s-aggregate GM. Because precise optimization is computationally too costly, we set γ = 1 and set α, β ∈ {0.4, 0.8, 1.2, 1.6} to search for the optimal pair of α and β. For this parameter set, all models yielded positive values ofρ(0), consistent with the empirical data. For the GM,ρ(d) decreased towards zero as d increased for d < 6 km, i.e. the value ofρ(d) decayed faster than the empirical values. At d > 6 km, ρ(d) generated by the GM was around zero but tended to be smaller than the empirical values. The two extended GMs yielded a decay ofρ(d), which hit zero at d ≈ 20 km, qualitatively the same as the behaviour of the empirical data. The two extended GMs generated largerρ(d) values than the empirical values for d ≤ 20 km.
To investigate the robustness of the results against variation in the parameters of the models, we varied the parameter values as α ∈ {0.4, 0.8, 1.2, 1.6} and β ∈ {0.4, 0.8, 1.2, 1.6} and measured the discrepancy between the model and empirical data in terms of the discrepancy measure defined by equation (2.8). The results for the three models are shown in figure 5. The data obtained from the GM were inaccurate except when α or β was small. In addition, the minimum discrepancy for the GM (= 1.469) was larger than that for the d-aggregate GM and s-aggregate GM (= 1.163 and 1.161, respectively). The d-aggregate GM showed a relatively good agreement with the empirical data in a wide parameter region. The performance of the s-aggregate GM was comparable with that of the d-aggregate GM only when α = 0.4 or 0.8. Our analysis suggests that aggregating nearby cells around either the source or destination of migration seems to improve the explanatory power of the GM. The performance of the d-aggregate GM was better than that of the s-aggregate GM in terms of the robustness against variation in the parameter values.
Effects of the granularity of spatial aggregation
We set d ag , the width for spatial smoothing of the population density at the source or destination cell in the extended GM models, to 0.65 km in the previous sections. To investigate the robustness of the results with respect to the d ag value, we used d ag = 1, 5 and 25 km combined with the d-aggregate and s-aggregate GMs. The discrepancy between each model and the empirical data is shown in figure 6.
When d ag = 1 km, for both models, the results were similar to those for d ag = 0.65 km (figure 4). When d ag = 5 and 25 km, the behaviour ofρ(d) was qualitatively different, withρ(d) first increasing and then decreasing as d increased, or even more complicated behaviour (i.e. s-aggregate GM with d ag = 25 km shown in figure 6b). Figure 7 confirms that the results shown in figure 6 remain qualitatively the same in a wide range of α and β. In other words, the results for d ag = 1 (figure 7a,b) are similar to those for d ag = 0.65 (figure 5b,c), whereas those for d ag = 5 (figure 7c,d) and d ag = 25 (figure 7e,f ) are not. We conclude that aggregating the population density at the source or destination of migration with d ag = 5 km or larger does not even qualitatively explain the empirical data.
One-dimensional toy model
To gain further insights into the spatial inter-dependency of the population growth rate in terms of inand out-migratory flows of populations, we analysed a toy model on the one-dimensional lattice (i.e. chain) with 21 cells (figure 8). Differently from the simulations presented in the previous sections, the current toy model assumes a flat initial population density except in the three central cells. Combined with the simplifying assumption of the one-dimensional landscape, we aimed at revealing a minimal set of conditions under which the empirically observed patterns were produced. We focused on the central cell and its two neighbouring cells, one on each side on the chain. We set the initial number of inhabitants in the central cell to x, those of the two neighbouring cells to x and those of the other cells to one as normalization. The distance between two adjacent cells was set to unity without a loss of generality. Then, we investigated the net flow (i.e. population growth rate), inflow and outflow of populations as a function of x and x using the three GMs. We set d ag = 1, with which we aggregated three cells to calculate the population density at the source or destination of the immigration in the two extensions of the GM.
The net flow, inflow and outflow in the three models are shown in figure 9. In the GM, the net flow at the central cell heavily depended on x but negatively and only slightly depended on the population size in the neighbouring cells x (figure 9a). This result was inconsistent with the empirically observed pattern (figure 4). This inconsistency was due to an increase in the outflow at the central cell as x increased (figure 9c), whereas the inflow at the central cell was not sensitive to x (figure 9b).
The patterns of migration flows for the d-aggregate and s-aggregate GMs were qualitatively different from those for the GM (figure 9d-i). In both models, the population growth rate increased as x increased (figure 9d,g), which is consistent with the empirically observed patterns. In the d-aggregate GM, this change was mainly owing to changes in the inflow, which increased as x increased (figure 9e). The outflow for the d-aggregate GM was similar to that for the GM (figure 9f ). In other words, a cell the same as that obtained from the d-aggregate GM, s-aggregate GM and empirical data ( figure 18). In addition, the sd-aggregate GM was accurate in a wide parameter region ( figure 19). We also confirmed that the discrepancy measure for the sd-aggregate GM increased as d ag increased (figures 20 and 21), similar to the results for the d-aggregate and s-aggregate GMs (figures 6 and 7). The behaviour of this model on the one-dimensional toy model was also consistent with the empirical data ( figure 22) because the inflow and outflow of the model were similar to those for the d-aggregate GM and s-aggregate GM, respectively.
Discussion
We investigated spatial patterns of demographic dynamics through the analysis of the population census data in Japan in 2005 and 2010. We found that the population growth rate in a cell was positively correlated with the population density in cells nearby, in addition to that in the focal cell. We used the gravity model and its variants to investigate possible effects of migration on the empirically observed spatial patterns of the population growth rate. Under the framework of the GM, we found that aggregating some neighbouring cells around either the source or destination of migration events considerably improved the fit of the GM model to the empirical data. The results were better when the cells around the destination cell were aggregated, in particular regarding the robustness of the results against variation in the parameter values, than when the cells around the source cell were aggregated. All the results were qualitatively the same when we set the destination cell. Because the size of the cell is imposed by the empirical data, aggregation of cells around the destination cell is equivalent to decreasing the spatial resolution of the GM by coarse graining. Traditionally, administrative boundaries have been used as operational units of the GM [39]. A cluster identified by the city clustering algorithm may also be used as the unit [38,40]. In the continuous-space GM, the unit is assumed to be an infinitely small spatial segment [12]. However, there is no a priori reason to assume that any one of these units is an appropriate choice. Our results suggest that spatial averaging with a circle of radius d ag ≈ 1 km may be a reasonable choice as compared to a larger d ag or the original cell size (i.e. 500 × 500 m 2 ). Real inhabitants may perceive the population density at the destination as a spatial average on this scale. Although we reached this conclusion using the GMs, this guideline may be also useful when other migration models are used. The present study has limitations. First, due to a high computational cost, we only examined a limited number of combinations of parameter values in the GMs. A more exhaustive search of the parameter space or the use of different migration models, as well as analysing different datasets, warrants future work.
Second, due to the lack of empirical data, we could not analyse more microscopic processes contributing to population changes. For example, because of the absence of spatially explicit data on the number of births and deaths, we did not include births and deaths into our models. However, the observed inflow and outflow were at least twice as large as the numbers of births and deaths in all the 47 prefectures in Japan (table 2). Therefore, migration rather than births and deaths seems to be a main driver of spatially untangled population changes in Japan during the observation period. The lack of data also prohibited us from looking into the effect of the age of inhabitants. In fact, individuals at a certain life stage are more likely to migrate in general [4,5]. Data on migration flows between cells, births, deaths and the age distribution, which are not included in the present dataset, are expected to enable further investigations of the spatial patterns of population changes examined in the present study.
Third, our conclusions are based on the longitudinal data at only two time points in a single country. The strength of the current results should be understood as such.
Fourth, we did not take into account the effect of water-surface cells, which cannot be inhabited. The population density at distance d from a focal cell i, i.e. D i (d), is therefore underestimated when cell i is located near water (e.g. sea, lake, large river). Additional information about the geographical property of cells such as the water area within the cell and the land use may improve the present analysis. , except for the behaviour of the GM. In figure 10,ρ(d) obtained from the empirical data, the GM, d-aggregate GM and s-aggregate GM is compared. Similar to the analysis shown in the main text, for the three GMs, we set γ = 1 and varied α, β ∈ {0.4, 0.8, 1.2, 1.6} and used the optimized parameter values. Theρ(0) value for the GM was negative, contradicting the empirical data, whereas the behaviour of the d-aggregate and s-aggregate GMs was qualitatively the same as that of the empirical data.
For α ∈ {0.4, 0.8, 1.2, 1.6} and β ∈ {0.4, 0.8, 1.2, 1.6}, the discrepancy between the model and empirical data (equation (2.8)) is shown in figure 11. The results for the GM were inaccurate for all parameter combinations that we considered (figure 11a). The d-aggregate GM yielded a good agreement with the data in a wide parameter region (figure 11b). The s-aggregate GM was accurate only for α = 0.4 (figure 11c). These results are similar to those for (t 1 , t 2 ) = (2005, 2010) (figure 5).
We then examined the robustness of the results with respect to the d ag value. The discrepancy between the models and the empirical data is shown in figure 12. For both d-aggregate and s-aggregate GMs,ρ(d) behaved similarly to that for the empirical data when d ag = 1 km but not when d ag = 5 km and 25 km. Figure 13 confirms this result for various values of α and β. For a wide region of the α-β parameter space, the discrepancy increased as d ag increased. To calculateρ(d), we used all regions. However, some regions and their nearby regions include watersurface cells, potentially biasing the estimation ofρ(d). Therefore, we examined the ρ k (d) values for region k such that all cells within region k and those within 30 km from any cell in region k are not in the sea. The average of ρ k (d) over these regions is qualitatively the same as that shown in the main text ( figure 15).
Appendix C. Analysis of cells with more than 100 inhabitants
In the main text, we used cells whose population size was between 10 and 100. Figure 16 showsρ(d) for cells whose population size was greater than 100. The behaviour ofρ(d) was qualitatively the same as that for the cells of the population size between 10 and 100 ( figure 4). We compareρ(d) between the empirical and simulated data in figure 18. The behaviour ofρ(d) obtained from the GM was qualitatively the same as that of the empirical data. The discrepancy between the model and empirical data (equation (2.8)) was small in a wide parameter region ( figure 19). We also confirmed that the discrepancy increased as d ag increased (figures 20 and 21). The net flow, inflow | 7,266 | 2017-04-25T00:00:00.000 | [
"Economics",
"Geography",
"Sociology"
] |
Nondestructive Inspection of Reinforced Concrete Utility Poles with ISOMAP and Random Forest
Reinforced concrete poles are very popular in transmission lines due to their economic efficiency. However, these poles have structural safety issues in their service terms that are caused by cracks, corrosion, deterioration, and short-circuiting of internal reinforcing steel wires. Therefore, they must be periodically inspected to evaluate their structural safety. There are many methods of performing external inspection after installation at an actual site. However, on-site nondestructive safety inspection of steel reinforcement wires inside poles is very difficult. In this study, we developed an application that classifies the magnetic field signals of multiple channels, as measured from the actual poles. Initially, the signal data were gathered by inserting sensors into the poles, and these data were then used to learn the patterns of safe and damaged features. These features were then processed with the isometric feature mapping (ISOMAP) dimensionality reduction algorithm. Subsequently, the resulting reduced data were processed with a random forest classification algorithm. The proposed method could elucidate whether the internal wires of the poles were broken or not according to actual sensor data. This method can be applied for evaluating the structural integrity of concrete poles in combination with portable devices for signal measurement (under development).
Introduction
Reinforced concrete poles are commonly used for telephone and electricity transmission. Their greater mechanical strength, cost effectiveness, longer life span (over 50 years), potential to cover longer distances, and better electrical resistance are the key reasons for their widespread usage [1]. Further, reinforced concrete poles constitute an alternative to steel poles because of their higher variability of architectural shapes and comparatively low maintenance costs. When compared to timber poles, concrete poles offer better resistance to hurricanes and are more robust against decay and fire [2]. However, structural safety defects can occur in reinforced concrete poles because of cracks, voids, corrosion, deterioration, and short-circuiting of internal reinforcing steel wires. These problems in reinforced concrete poles are caused by atmospheric exposure, earthquakes, hurricanes, floods, moisture changes, poor construction practices, and various chemical, mechanical, and physical reactions [3][4][5][6]. Structural problems demand more attention than non-structural problems. According to Doukas et al. [7] approximately half of accidents involving reinforced concrete poles are related to structural defects. These defects constitute a major reason for reduced life expectancy, structural strength, and serviceability. Further, structural defects can result in subsequent failure.
All of the algorithms used in this work were written and executed with the Python language machine learning scikit-learn library [27].
The rest of our paper is organized as follows: Section 2 describes the complete experimental setup for data gathering. The proposed system resulting from our research work is explained in Section 3. The experimental results are presented and discussed in Section 4. Finally, the conclusions are presented in Section 5.
Experimental Setup
In this section, we present the applied data gathering techniques, the used devices, and their specifications. The whole setup for this project is as follows: a magnetic sensing device, a Data Acquisition (DAQ) device, a cable, and a portable computer. To inspect a utility pole, the magnetic sensing device can be inserted into the pole through holes, and the signals can be gathered with the help of the DAQ device. This device can be attached to a portable computer to check the signals with the patterns of our application. Regarding the training of our algorithm, the signals were gathered with the same device and the dataset was manually configured by the field engineers of SMART C&S. They collected signals from 30 reinforced concrete utility poles containing signals from damaged and non-damaged wires and labeled them accordingly. They uninstalled the poles and broke down the concrete cover over steel wires to pull the wires out the concrete. In particular, the field engineers pulled all of the wires out those 30 poles and gathered the signals from each wire. They inserted the sensors into the poles through the first bolt hole nearest to the ground and detected the signals from bottom portion up to the first bolt hole of the pole, because the tendency of defects in wires inside poles is mostly in the bottom portion. The measuring section for signal gathering inside poles was set to a maximum height of 4 m, and the measurement rate was fixed to 0.3 m/s. The diameter of steel wires in all of the poles was from 9 to 12 mm, and there were 16 steel wires (eight tension and eight reinforcing) in each utility pole. The thickness of the concrete cover over the steel wires was from 12 to 24 mm in each pole. The field engineers gathered 101 Hall Effect values for every signal. These values are referred to as "features" in our dataset. The dataset could contain many ambiguous, repeated, and unimportant features. Therefore, we filtered the dataset into meaningful features to be used with a classification algorithm, which is further explained in Section 3.
The eight-channel magnetic sensing device along with all the parts is depicted in Figure 1. The specifications of all the parts of this device are displayed in Table 1. The rest of our paper is organized as follows: Section 2 describes the complete experimental setup for data gathering. The proposed system resulting from our research work is explained in Section 3. The experimental results are presented and discussed in Section 4. Finally, the conclusions are presented in Section 5.
Experimental Setup
In this section, we present the applied data gathering techniques, the used devices, and their specifications. The whole setup for this project is as follows: a magnetic sensing device, a Data Acquisition (DAQ) device, a cable, and a portable computer. To inspect a utility pole, the magnetic sensing device can be inserted into the pole through holes, and the signals can be gathered with the help of the DAQ device. This device can be attached to a portable computer to check the signals with the patterns of our application. Regarding the training of our algorithm, the signals were gathered with the same device and the dataset was manually configured by the field engineers of SMART C&S. They collected signals from 30 reinforced concrete utility poles containing signals from damaged and non-damaged wires and labeled them accordingly. They uninstalled the poles and broke down the concrete cover over steel wires to pull the wires out the concrete. In particular, the field engineers pulled all of the wires out those 30 poles and gathered the signals from each wire. They inserted the sensors into the poles through the first bolt hole nearest to the ground and detected the signals from bottom portion up to the first bolt hole of the pole, because the tendency of defects in wires inside poles is mostly in the bottom portion. The measuring section for signal gathering inside poles was set to a maximum height of 4 m, and the measurement rate was fixed to 0.3 m/s. The diameter of steel wires in all of the poles was from 9 to 12 mm, and there were 16 steel wires (eight tension and eight reinforcing) in each utility pole. The thickness of the concrete cover over the steel wires was from 12 to 24 mm in each pole. The field engineers gathered 101 Hall Effect values for every signal. These values are referred to as "features" in our dataset. The dataset could contain many ambiguous, repeated, and unimportant features. Therefore, we filtered the dataset into meaningful features to be used with a classification algorithm, which is further explained in Section 3.
The eight-channel magnetic sensing device along with all the parts is depicted in Figure 1. The specifications of all the parts of this device are displayed in Table 1. Figure 2a illustrates the process of removing a maintenance bolt and securing the bolt hole to insert the magnetic sensing device into a working utility pole. Figure 2b shows an example of laboratory verification experiments of the data-gathering device. Figure 2c shows an example of a broken steel wire and Figure 2d shows multiple steel wires inside the poles. Figure 2a illustrates the process of removing a maintenance bolt and securing the bolt hole to insert the magnetic sensing device into a working utility pole. Figure 2b shows an example of laboratory verification experiments of the data-gathering device. Figure 2c shows an example of a broken steel wire and Figure 2d shows multiple steel wires inside the poles.
Proposed System
The dataset for this research project is composed of n samples with m features, R = {(xi d , yi), i = 1, 2, …., n}, where in our case n = 240, m = 101 and d = 1, 2, …, m, xi is the input data and every row in xi has a label yi ∈ {0, 1}. Note that 0 indicates safe signals, while 1 denotes crack signals. Each row of xi represents a signal, while d indicates the features of every signal, which are the Hall Effect values for each signal. Figure 3 depicts plots of safe and crack signals from our dataset in two-dimensional graphs. Figure 3a shows a single sample of a safe signal in our dataset. Figure 3b depicts a single sample of a crack signal from our dataset. Figure 3c represents all of the safe signals in our dataset. Figure 3d shows all the crack signals in our dataset.
Proposed System
The dataset for this research project is composed of n samples with m features, Figure 3 depicts plots of safe and crack signals from our dataset in two-dimensional graphs. Figure 3a shows a single sample of a safe signal in our dataset. Figure 3b depicts a single sample of a crack signal from our dataset. Figure 3c represents all of the safe signals in our dataset. Figure 3d shows all the crack signals in our dataset.
Magnetic sensors were used for data gathering and the dataset contains many replicated features.
To reduce these, it is essential to apply a dimensionality reduction technique. Dimensionality reduction is also important for finding meaningful low-dimensional hidden structures in high dimensional data and to improve the performance of classification algorithms. Magnetic sensors were used for data gathering and the dataset contains many replicated features. To reduce these, it is essential to apply a dimensionality reduction technique. Dimensionality reduction is also important for finding meaningful low-dimensional hidden structures in high dimensional data and to improve the performance of classification algorithms.
The Flowchart of Our Proposed System
The overall flow of our proposed system is depicted in Figure 4.
The Flowchart of Our Proposed System
The overall flow of our proposed system is depicted in Figure 4. Magnetic sensors were used for data gathering and the dataset contains many replicated features. To reduce these, it is essential to apply a dimensionality reduction technique. Dimensionality reduction is also important for finding meaningful low-dimensional hidden structures in high dimensional data and to improve the performance of classification algorithms.
The Flowchart of Our Proposed System
The overall flow of our proposed system is depicted in Figure 4. The flowchart in Figure 4 illustrates that for the dimensionality reduction of our data, we applied ISOMAP, which is a manifold-based global geometric framework mainly used for non-linear dimensionality reduction. Non-linear dimensionality reduction techniques became very popular in the last decade due to their superior performance on high dimensional data when compared to linear techniques [28]. Jeong [29] showed that ISOMAP outperforms PCA (a linear dimensionality reduction technique) on high dimensional data.
ISOMAP
ISOMAP is an extension of Multidimensional Scaling (MDS), a classical method for embedding dissimilatory information into a Euclidean space. The main concept of ISOMAP is to replace Euclidean distances with an approximation of the geodesic distances on the manifold. ISOMAP is a three-step process: (1) Construct neighborhood graph; k-nearest neighbors of every data point are defined and represented by a graph G; in G, every point is connected to its nearest neighbors by edges. (2) Compute the shortest paths; the geodesic distances between all the pairs of points are estimated using the Dijkstra algorithm [30]; the squares of these distances are stored in a matrix of graph distances D (G) .
(3) Construct d-dimensional embedding; the classical MDS algorithm is applied on D (G) in order to find a new embedding of the data in a d-dimensional Euclidean space Y.
Deciding the Number of Dimensions for the ISOMAP Algorithm
Our dataset is composed of 240 data items with 101 features. Firstly, we have to determine the number of dimensions of the ISOMAP space. For this purpose, we calculated the residual variance [22,31], which is typically used to evaluate the error of dimensionality reduction. Residual variance is defined in Equation (1), as follows: where R d is the residual variance, G is the geodesic distance matrix, D d is the Euclidean distance matrix in the d-dimensional space, and r(G, D d ) denotes the correlation coefficient of G and D d . The value of d is determined by a trial-and-error approach to reduce the residual variance. Figure 5 shows that in all cases the residual variance decreases as the number of dimensions d is increased. It is recommended in [22,31] to select the number of dimensions at which this curve ceases to decrease significantly with added dimensions. To evaluate the intrinsic dimensionality of our data, we searched for the "elbow", at which this curve is not decreasing significantly with increasing dimensions. The arrow mark highlights the approximate dimensions in our data. If the dimensions are increased and exhibit some residual variance of the increased dimensions, then the data can be better explained by ISOMAP and the classification algorithm will perform better based on the residual variance. For example, in our dataset for a number of dimensions equal to 50 having a residual variance equal to 0.03, the performance is slightly better than for a number of dimensions equal to 8. We set eight dimensions for our classification algorithm because of the computational cost of both ISOMAP and the random forest. We had already achieved better performance with eight dimensions. However, the performance is not improved with increasing dimensions once the residual variance becomes zero.
Random Forest
ISOMAP reduces our dataset into eight dimensions on which a random forest classification algorithm is applied for the purpose of training features in the data. The random forest is an ensemble method, and generally, ensemble methods perform better than single classifiers. They are
Random Forest
ISOMAP reduces our dataset into eight dimensions on which a random forest classification algorithm is applied for the purpose of training features in the data. The random forest is an ensemble method, and generally, ensemble methods perform better than single classifiers. They are built from a set of classifiers and then use the weightage of their predictions to render the final output [32]. These methods employ more than one classification technique and then combine their results. They are notably less prone to overfitting. The random forest algorithm consists of three steps: (1) Construct n trees bootstrap samples from the input data while using the Classification And Regression Trees (CART) [33] methodology. (2) Grow an unpruned tree for each of the n trees , randomly sample m try of the predictors, and choose the best split among those variables. (3) Aggregate the predictions of the n tree trees for the prediction of new data. We applied the scikit-learn [27] implementation that combines classifiers by averaging their probabilistic prediction.
Choosing the number of trees in a random forest is an open question. Breiman [23] mentioned that, the greater the number of trees, the better the performance of the random forest. However, it is very difficult to find the optimal number of trees for the algorithm. Oshiro et al. [34] applied random forest on 29 different types of datasets with varying number of trees L in exponential rates using base two, which is L = 2 j , j = 1, 2, . . . , 12. They concluded that sometimes a larger number of trees in the forest only increases its computational cost, having no significant impact on its performance. We applied the same method as that of Oshiro et al. [34] to decide the number of trees in our random forest classifier. The number of trees and their performance on our dataset is reported in Table 2. Table 2 shows that using 8, 16, 32, 64, or 128 as the number of trees in a random forest on our dataset resulted in the same performance. Therefore, we set the number of trees to eight. We did not set a large number of trees, as this would increase the computational cost.
To decide on the number of random samples, the common term applied is √ p, where p is the number of predictors. Our dataset for random forest has eight features, therefore, we choose m try = 3. The OOB (out-of-bag) error estimate is used to check the performance of a random forest model. This error is calculated by averaging only those trees corresponding to bootstrap samples that render wrong predictions [23,35]. We stopped training our model when the OOB error reduced below 10%, which is a quite good fit and shows that more than 90% of the decision trees are rendering correct predications. As we are applying random training to a random forest, we executed the model 100 times and then calculated the mean OOB error from all the OOB error values.
Performance Evaluation of Our Classifier
We randomly chose 210 data items for training and 30 data items to test our classification algorithm. The structural safety labels were separated and were not shown to the algorithm for evaluation purposes. The predicted results from the algorithm were then compared with the labels. The accuracy was calculated as the ratio between accurate predicted instances and the total number of examined instances, as defined in Equation (2). As the random forest algorithm renders different results on different executions, it was executed 100 times. The accuracy, defined as the mean value of all the 100 calculated accuracies, was 97% in our case. Our dataset consisted of 211 safe signals and 29 crack signals. Therefore, it is an imbalanced dataset. It is likely that the algorithm will only predict safe signals and calculate better accuracy on the basis of the result. In such cases, accuracy alone is not enough for determining the performance of an algorithm. The accuracy mostly biases to the majority class data, and presents several other weaknesses, such as less distinctiveness, discriminability, and informativeness. Further, it is possible for a classifier to perform well on one metric while being suboptimal on other metrics. Moreover, different performance metrics measure different tradeoffs in the predictions made by a classifier. Therefore, it is essential to assess algorithms on a set of performance metrics [36]. We thus calculated the confusion matrix, precision, recall [37], and F-measure [37,38], to discriminate accurately and to select an optimal solution for our model. The confusion matrix is widely used for describing the performance of a classifier, because it displays the ways in which a classification model is confused during predication. The confusion matrix is composed of rows and columns corresponding to the classification labels. The rows of the table represent the actual class, while the columns represent the predicted class. The main diagonal elements in the confusion matrix are TP and TN, which denote correctly classified instances, while other elements (FP and FN) denote incorrectly classified instances. Here, TP means true positive, which is when the algorithm correctly predicts the positive class, while TN means true negative, which is when the algorithm correctly predicts the negative class. Further, FP means false positive, which is when the algorithm fails to predict the negative class correctly. Finally, FN means false negative, which is when the algorithm fails to predict the positive class correctly. In our data, the safe signals are represented as a positive class, while crack signals are represented as a negative class. The confusion matrix of our random forest algorithm on the test data is presented in Table 3. Note that total test points are equal to 30, and the total number of correctly classified instances is calculated as 25 + 4 = 29, while the total number of incorrectly classified instances is given by 0 + 1 = 1.
The confusion matrix clearly shows that there is only one mistake made by our algorithm, namely a crack signal predicted as safe signal. On the basis of the confusion matrix, the result is quite good. However, an analysis exclusively based on the confusion matrix is not sufficient when evaluating the performance of an algorithm. In the case of imbalanced classes, precision, and recall constitute a useful measure of success of prediction. Precision and recall are commonly used in information retrieval for evaluation of retrieval performance [39]. Precision is applied to measure the correctly predicted positive patterns from the total predicted patterns in a positive class, whereas recall is used to find the effectiveness of a classifier in identifying positive patterns. For full evaluation of the effectiveness of a model, examining both precision and recall is necessary. High precision represents a low false positive rate, while high recall represents a low false negative rate. Accuracy A, precision P, and recall R are defined in Equation (2).
The precision of our classifier on the test data is 96%, while the recall is 100%. These are high values. However, precision and recall are still not sufficient to select an optimal solution or algorithm. To find a balance between precision and recall, F-measure is used. For computing score, the F-measure considers both precision and recall. Moreover, F-measure indicates how many instances the classifier classifies correctly without missing a significant number of instances. The greater the F-measure, the better the performance of the model. F-measure is defined as the harmonic mean of precision and recall, as expressed in Equation (3).
The F-measure of our algorithm is 98%. The performance can now be measured in terms of classification accuracy, confusion matrix, precision, recall, and F-measure. We calculated all of them for the evaluation of our classifier, and the results demonstrate that our classifier is very reliable.
Comparison of Random forest with SVM and Decision Trees on Our Data
To evaluate and compare the results of a random forest with other machine learning algorithms on our ISOMAP-based reduced data, we applied SVM and decision trees. SVM attempts to find the optimal separating hyperplane between objects of different classes. Further, SVM is the most suitable for binary classification but can also be configured for multi-classification tasks. Decision trees are applied for both classification and regression tasks. In decision trees, each node represents a feature, each link represents a decision, and each leaf represents an outcome. The root node is defined as the attribute that best classifies the training data. This process is repeated for each branch of the tree. An SVM classifier with sigmoid kernel and a gini index-based unpruned decision tree with best split method were implemented. Using SVM and decision trees, we achieved 90% and 93% accuracy, respectively. A complete comparison of all these algorithms on our test data is depicted in Figure 6. Figure 6 clearly shows that random forests outperform SVM and decision trees. The precision of random forests and decision trees are the same, as well as the recall of random forests and SVMs, but the accuracy and F-measure of random forests are superior.
Conclusions
In this paper, we proposed a structural safety assessment method for reinforced concrete utility poles with the use of ISOMAP and random forests. The proposed system is able to identify the condition of wires inside reinforced concrete poles. It is also an easy and inexpensive system to adopt, because only a few devices are needed. We had a limited number of trained data items from field experiments. Therefore, we opted for machine learning techniques rather than deep learning methods. We used ISOMAP for data reduction and then applied a random forest classifier for classification purposes. The random forest algorithm outperformed other machine learning algorithms (such as SVM and decision trees) on our data set. In our first attempt, we achieved better performance. In future, we can receive more experimental data from field engineers and apply deep learning methods to accomplish better performance with higher reliability.
Author Contributions: M.J. initiated the idea and provided supervision and guidance in experiments and analysis; S.U. performed machine learning modeling and was the primary writer of the paper; W.L. designed and conducted the experiments for data gathering. Figure 6 clearly shows that random forests outperform SVM and decision trees. The precision of random forests and decision trees are the same, as well as the recall of random forests and SVMs, but the accuracy and F-measure of random forests are superior.
Conclusions
In this paper, we proposed a structural safety assessment method for reinforced concrete utility poles with the use of ISOMAP and random forests. The proposed system is able to identify the condition of wires inside reinforced concrete poles. It is also an easy and inexpensive system to adopt, because only a few devices are needed. We had a limited number of trained data items from field experiments. Therefore, we opted for machine learning techniques rather than deep learning methods. We used ISOMAP for data reduction and then applied a random forest classifier for classification purposes. The random forest algorithm outperformed other machine learning algorithms (such as SVM and decision trees) on our data set. In our first attempt, we achieved better performance. In future, we can receive more experimental data from field engineers and apply deep learning methods to accomplish better performance with higher reliability. | 5,987.8 | 2018-10-01T00:00:00.000 | [
"Materials Science"
] |
Introductory Chapter: Nonlinear Optical Phenomena
The number of publications concerning different aspects of nonlinear optics is enormous and hardly observable. We briefly discuss in this chapter the fundamental nonlinear optical phenomena and methods of their analysis. Nonlinear optics is related to the analysis of the nonlinear interaction between light and matter when the light-induced changes of the medium optical properties occur [1, 2]. The nonlinear optical effects are weak, and their observation became possible only after the invention of lasers which provide a highly coherent and intense radiation [2]. A typical nonlinear optical process consists of two stages. First, the intense coherent light induces a nonlinear response of the medium, and then the modified medium influences the optical radiation in a nonlinear way [1]. The nonlinear medium is described by a system of the dynamic equations including the optical field. The optical field itself is described by Maxwell’s equations including the nonlinear polarization of the medium [1, 2]. All media are essentially nonlinear; however, the nonlinear coupling coefficients are usually very small and can be enhanced by the sufficiently strong optical radiation [1, 2]. For this reason, to a first approximation, light and matter can be considered as a system of uncoupled oscillators, and the nonlinear terms are some orders of magnitude smaller than the linear ones [2]. Nevertheless, the nonlinear effects can be important in the long-time and longdistance limits [2]. Generally, the light can be considered as a superposition of plane
Introduction
The number of publications concerning different aspects of nonlinear optics is enormous and hardly observable. We briefly discuss in this chapter the fundamental nonlinear optical phenomena and methods of their analysis. Nonlinear optics is related to the analysis of the nonlinear interaction between light and matter when the light-induced changes of the medium optical properties occur [1,2]. The nonlinear optical effects are weak, and their observation became possible only after the invention of lasers which provide a highly coherent and intense radiation [2]. A typical nonlinear optical process consists of two stages. First, the intense coherent light induces a nonlinear response of the medium, and then the modified medium influences the optical radiation in a nonlinear way [1]. The nonlinear medium is described by a system of the dynamic equations including the optical field. The optical field itself is described by Maxwell's equations including the nonlinear polarization of the medium [1,2]. All media are essentially nonlinear; however, the nonlinear coupling coefficients are usually very small and can be enhanced by the sufficiently strong optical radiation [1,2]. For this reason, to a first approximation, light and matter can be considered as a system of uncoupled oscillators, and the nonlinear terms are some orders of magnitude smaller than the linear ones [2]. Nevertheless, the nonlinear effects can be important in the long-time and longdistance limits [2]. Generally, the light can be considered as a superposition of plane , ω, r ! , t are the wave vector, angular frequency, radius vector in the space, and time, respectively [1,2]. The medium oscillators can be electronic transitions, molecular vibrations and rotations, and acoustic waves [2]. Typically, only a small number of linear and nonlinear oscillator modes are important that satisfy the resonance conditions [1][2][3]. In such a case, the optical fields can be represented by a finite sum of discrete wave packets E ! z; t ð Þ given by [1][2][3] where c:c: stands for the complex conjugate and A z; t ð Þ is the slowly varying envelope (SVE) such that [1][2][3] Here we for the sake of definiteness consider the one-dimensional case. The evolution of the waves (1) is described by the system of the coupled equations in the so-called SVE approximation (SVEA) when the higher-order derivatives of the SVE can be neglected according to conditions (2) [1][2][3]. The typical nonlinear optical phenomena are self-focusing, self-trapping, sum-and difference-frequency generation, harmonic generation, parametric amplification and oscillation, stimulated light scattering (SLS), and four-wave mixing (FWM) [1].
During the last decades, optical communications and optical signal processing have been rapidly developing [1][2][3][4]. In particular, the nonlinear optical effects in optical waveguides and fibers became especially important and attracted a wide interest [1][2][3][4]. The nonlinear optical interactions in the waveguide devices have been investigated in detail in Ref. [3]. Nonlinear fiber optics as a separate field of nonlinear optics has been reviewed in Ref. [4]. The self-phase modulation (SPM), cross-phase modulation (XPM), FWM, stimulated Raman scattering (SRS), stimulated Brillouin scattering (SBS), pulse propagation, and optical solitons in optical fibers have been considered in detail [4]. Silicon photonics, i.e., integrated optics in silicon, also attracted a wide interest due to the highly developed silicon technology which permits the combination of the photonic and electronic devices on the same Si platform [5]. The nonlinear optical phenomena in Si nanostructures such as quantum dots (QD), quantum wells (QW), and superlattices had been discussed [6]. It has been shown that the second harmonic generation (SHG) in silicon nanostructures is possible despite the centrosymmetric structure of Si crystals [6].
Nonlinear dynamics in complex optical systems such as solid-state lasers, CO 2 lasers, and semiconductor lasers is caused by the light-matter interaction [7]. Under certain conditions, the nonlinear optical processes in such optical complex systems result in instabilities and transition to chaos [7].
In this chapter we briefly describe the basic nonlinear optical phenomena. The detailed analysis of these phenomena may be found in [1][2][3][4][5][6][7] and references therein. The chapter is constructed as follows. Maxwell's equations for a nonlinear medium and nonlinear optical susceptibilities are considered in Section 2. The mechanisms and peculiarities of the basic nonlinear effects mentioned above are discussed in Section 3. Conclusions are presented in Section 4.
Maxwell's equations for a nonlinear medium and nonlinear optical susceptibilities
Here ρ free is the free charge density consisting of all charges except the bound charges inside atoms and molecules; J Here c is the free space light velocity.
Here, χ 1 ð Þ r ! ; t is the linear susceptibility; χ n ð Þ r ! ; t , n . 1 is nth-order nonlinear susceptibility [1]. Suppose that the electric field is a group of monochromatic plane waves given by [ Then, the Fourier transform of the nonlinear polarization (1) yields [1] and The linear and nonlinear optical properties of a medium are described by the linear and nonlinear susceptibilities (12), and the nth-order nonlinear optical effects in such a medium can be obtained theoretically from Maxwell's Eqs. (3)-(6) with the polarization determined by Eq. (8) [1]. We do not present here the analytical properties of the nonlinear susceptibilities which are discussed in detail in Ref. [1].
In some simple cases, the nonlinear susceptibilities can be evaluated by using the anharmonic oscillator model [1,8]. It is assumed that a medium consists of N classical anharmonic oscillators per unit volume [1]. Such an oscillator may describe an electron bound to a core or an infrared-active molecular vibration [1]. The equation of motion of the oscillator in the presence of an applied electric field with the Fourier components at frequencies AEω 1 , AE ω 2 is given by [1] Here x is the oscillator displacement; Γ is the decay factor; ω 0 is the oscillator frequency; q, m are the oscillator charge and mass, respectively; and the anharmonic term ax 2 is small and can be considered as a perturbation in the successive approximation series given by [1,8] x The nonlinear terms become essential when the electromagnetic power is large enough in such a way that a medium response cannot be considered linear anymore [8]. We limit our analysis with quadratic and cubic nonlinearities proportional to x 2 and x 3 , respectively [1][2][3][4][5][6][7][8]. The induced electric polarization P can be expressed by using the solutions (13) and (14) as follows: P ¼ Nqx [1]. In general case, the microscopic expressions for nonlinear susceptibilities of a medium are calculated by using the quantum mechanical approach. In particular, the density matrix formalism is a powerful and convenient tool for such calculations [1,2,7,8].
Nonlinear optical effects
Electromagnetic waves in a medium interact through the nonlinear polarization (8) [1]. Typically, a nonlinear optical effect that occurs due to such an interaction is described by the coupled wave equations of the type (7) with the nonlinear susceptibilities (12) as the coupling coefficients [1]. In general case, the coupled wave method can also include waves other than electromagnetic [1]. For instance, in the case of SBS process, the acoustic waves are taken into account, and in the case of SRS process, the molecular vibrations are typically considered [1,2,4]. The coupled wave equations are usually solved by using SVEA (2) [1]. In this section, we discuss some important nonlinear optical phenomena caused by the quadratic and cubic susceptibilities χ 2 ð Þ and χ 3 ð Þ , respectively. It should be noted that χ 2 ð Þ ¼ 0 in the electric dipole approximation for a medium with inversion symmetry [1].
We start with the sum-frequency, difference-frequency, and second harmonic generation. These phenomena are based on the wave mixing by means of the quadratic susceptibility χ 2 ð Þ . The three coupled waves are E in the cases of sum-frequency [1]. The second-order nonlinear polarization with a sum-frequency ω 3 in such a case has the form [1] Similarly, in the case of the difference-frequency generation, we obtain [1] where the asterisk means the complex conjugation. Consider the particular case of equal frequencies ω 1 ¼ ω 2 ¼ ω. In such a case, the nonlinear polarization (15) has the form P 2 ð Þ j ω 3 ¼ 2ω ð Þ , and the second harmonic generation (SHG) takes place [1]. The efficient nonlinear wave mixing can occur only under the phase-matching conditions. The phase mismatch Δk between the coupled waves is caused by the refractive index dispersion n ω i ð Þ. The collinear phase matching Δk ¼ 0 can be realized in the medium with an anomalous dispersion or in the birefringent crystals [1]. The detailed analysis of the sum-frequency generation, difference-frequency generation, and SHG in different configurations may be found in [1,3,6]. It can be shown that the efficient sum-frequency generation can be realized under the following conditions [1]. The nonlinear optical crystal without the inversion symmetry or with the broken inversion symmetry should have low absorption at the interaction frequencies ω 1, 2, 3 and a sufficiently large quadratic susceptibility χ 2 ð Þ and should allow the collinear phase matching. The particular phase-matching direction and the coupled wave polarizations should be chosen in order to optimize the effective nonlinear susceptibility χ 2 ð Þ eff . The length of the nonlinear crystal must provide the required conversion efficiency. The efficient SHG can be realized with the single-mode laser beams focused into the nonlinear optical crystal [1].
Sum-frequency generation, difference-frequency generation, and SHG can be also carried out in the waveguide nonlinear optical devices [3]. Typically, a thin film of a nonlinear material such as ZnO and ZnS, ferroelectric materials LiNbO 3 and LiTaO 3 , and III-V semiconductor materials GaAs and AlAs can be used as a waveguiding layer [3]. The output power P 2ω ð Þ L ð Þ of the second harmonic (SH) mode under the no-pump depletion approximation is given by [3] 0 is the input pump power; k is the coupling constant; L is the device length; Δ is the phase mismatch; λ is the pump wavelength; β ω ð Þ , β 2ω ð Þ are the propagation constants of the pump and SH waves, respectively; and Λ is the period of the quasi-phase matching (QPM) grating. Waveguide SHG devices can be used in optical signal processing such as laser 5 Introductory Chapter: Nonlinear Optical Phenomena DOI: http://dx.doi.org/10.5772/intechopen.83718 printer, laser display, optical memory, short pulse, multicolor, and ultraviolet light generation [3].
Consider the nonlinear optical effects related to the cubic susceptibility χ 3 ð Þ . These phenomena are much weaker than the second-order ones. However, they can exist in centrosymmetric media where χ 2 ð Þ ¼ 0 and may be strongly pronounced under the high enough optical intensity pumping. We briefly discuss self-focusing, SPM, third harmonic generation (THG), SBS, SRS, and FWM.
Self-focusing is an induced lens effects caused by the self-induced wavefront distortion of the optical beam propagating in the nonlinear medium [1]. In such a medium, a refractive index n has the form [1] Here n 0 is the refractive index of the unperturbed medium, Δn E j j 2 is the optical field-induced refractive index change, and E is the optical beam electric field. Typically, the field-induced refractive index change can be described as Δn ¼ n 2 E j j 2 like in the case of the so-called Kerr nonlinearity [1,3]. If Δn . 0, the central part of the optical beam with a higher intensity has a larger refractive index than the beam edge. Consequently, the central part of the beam travels at a smaller velocity than the beam edge. As a result, the gradual distortion of the original plane wavefront of the beam occurs, and the beam appears to focus by itself [1]. The selffocusing results in the local increase of the optical power in the central part of the beam and possible optical damage of transparent materials limiting the high-power laser performance [1]. SPM is also caused by the positive refractive index change (18). It is the temporal analog of self-focusing which leads to the spectral broadening of optical pulses [4]. In optical fibers, for short pulses and sufficiently large fiber length L f , the combined effect of the group velocity dispersion (GVD) and SPM should be taken into account [4]. The GVD parameter β 2 is given by [4] In the normal-dispersion regime when β 2 . 0, the combined effect of the SPM and GVD leads to a pulse compression. In the opposite case of the anomalousdispersion regime β 2 , 0, SPM and GVD under certain conditions can be mutually compensated [4]. In such a case, the pulse propagates in the optical fiber as an optical soliton, i.e., a solitary wave which does not change after mutual collisions [4]. The solitons are described with the nonlinear Schrödinger equation (NLS) which can be solved with the inverse scattering method [4]. The fundamental soliton solution u ξ; τ ð Þ has the form [4] u ξ; τ Here η is the soliton amplitude; τ ¼ t À β 1 z ð Þ=T 0 ; ξ ¼ z=L D ; β 1 ¼ 1=v g ; v g is the light group velocity in the optical fiber; L D is the dispersion length; and T 0 is the initial width of the incident pulse. The optical solitons can propagate undistorted over long distances, and they can be applied in fiber-optic communications [4].
Consider now THG. Unlike SHG, it is always allowed [1]. The third harmonic E ! 3ω ð Þ is caused by the third-order nonlinear polarization given by [ The cubic susceptibility χ 3 ð Þ is usually small compared to the χ 2 ð Þ [1]. For this reason, the laser intensity required for the efficient THG is limited by the optical damage in crystals [1]. The phase matching for the THG is difficult to achieve which results in low efficiency of the THG process [1,4]. THG can be realized in highly nonlinear optical fibers where the phase matching can be accomplished [4]. SBS is a nonlinear optical effect related to parametric coupling between light and acoustic waves [1]. It is described by the coupled wave equation (7) for the coupled counterpropagating light waves E ! 1, 2 ω 1, 2 ð Þand the acoustic wave equation for the mass density variation Δρ ω a ¼ ω 1 À ω 2 ð Þ [1,2,4]. The nonlinear coupling between light and acoustic waves is caused by the electrostrictive pressure . 0, ω 1 ≫ ω a ¼ ω 1 À ω 2 . 0, and the optical gain is larger than the optical wave damping constant [1]. The pumping wave E 1 ! ω 1 ð Þ is decaying in the forward direction z [1]. SBS has been successfully demonstrated in optical fibers, and the SBS gain in a fiber can be used for the amplification of the weak signal with the frequency shift equal to the acoustic frequency ω a [4]. Brillouin fiber amplifiers may be used for applications where the selective amplification is needed [4].
Consider now the SRS process. SRS can be described in the framework of the quantum mechanics as a two-photon process where one photon with energy is absorbed by the system and another photon with energy ℏω 2 k 2 ! Þ is emitted [1]. The system itself makes a transition from the initial state with the energy E i to the final state with the energy E f , and the energy conservation takes place: ℏ ω 1 À ω 2 ð Þ¼E f À E i [1]. In the framework of the coupled wave description, SRS is a third-order parametric generation process where the optical pump wave E 1 ! ω 1 ð Þ generates a Stokes wave E 2 ! ω 2 ð Þ and a material excitation wave [1]. The nonlinear polarization Þrelated to SRS in such a case takes the form [1, 2] where χ 3 ð Þ R1, 2 are the third-order Raman susceptibilities coupling the optical waves and providing SRS process [1]. They can be evaluated by using the quantum mechanical methods [1]. Typically, the material excitation wave in the SRS process is considered as molecular vibrations or optical phonons [1,2,4]. The specific feature of SRS is the so-called Stokes-anti-Stokes coupling [1,2]. Indeed, the mixing of the pump wave with the frequency ω 1 and the Stokes wave with the frequency ω 2 results in the generation of the anti-Stokes wave E a ! ω a ¼ 2ω 1 À ω 2 ð Þ at the anti-Stokes frequency ω a ¼ 2ω 1 À ω 2 . ω 1 [1]. Consequently, the coupled wave analysis of SRS should include the equations for the pump wave, Stokes wave, anti-Stokes wave, and the material excitation wave [1,2]. The analysis of this problem can be found in Refs. [1,2]. Usually, the anti-Stokes wave is attenuated [2]. SRS in optical fibers can be used for the development of Raman fiber lasers and Raman fiber amplifiers [4].
FWM is the nonlinear process with four interacting electromagnetic waves [1]. FWM is a third-order process caused by the third-order nonlinear susceptibility χ 3 ð Þ . It can be easily observed by using the high-intensity lasers, and it has been demonstrated experimentally [1]. FWM is a complicated nonlinear phenomenon because it exhibits different nonlinear effects for different combinations of the coupled wave frequencies, wave vectors, and polarizations. The analysis of FWM is based on the general theory of optical wave mixing [1,2,4]. For three input pump waves with frequencies ω 1, 2, 3 , the singly resonant, doubly resonant, and triply resonant cases can occur [1]. They correspond to the situations when one, two, or three input frequencies or their algebraic sums approach medium transition frequencies [1]. In such cases the third-order susceptibility χ 3 ð Þ can be divided into a resonant part χ NR [1]. The FWM process has some important applications. Due to the wide range of the mixed frequencies, FWM can be used for the generation of the waves from the infrared up to ultraviolet range [1]. For instance, the parametric amplification can be realized when two strong pump waves amplify two counterpropagating weak waves [1]. The frequency degenerate FWM occurs when the frequencies of the four waves are the same. It is used for the creation of a phaseconjugated wave with respect to one of the coupled waves [2]. In such a case, the phase of the output wave is complex conjugate to the phase of the input wave [1,2]. FWM in optical fibers can be used for signal amplification, phase conjugation, wavelength conversion, pulse generation, and high-speed optical switching [4].
Conclusions
We briefly discussed the fundamentals of nonlinear optics. The nonlinear optical phenomena are caused by the interaction between light and matter. Generally, all media are nonlinear. However, optical nonlinearity is extremely weak, and the observation of the nonlinear optical effects became possible only after invention of lasers as the sources of the strong enough coherent optical radiation. The nonlinear optical processes are described by Maxwell's equations with the nonlinear polarization of the medium. The coupled equations for the interacting electromagnetic and material waves are usually solved by using SVEA. Typically, the secondand third-order polarizations are considered. The nonlinear polarization and the optical field in the medium are related by the nonlinear susceptibilities which in general case can be evaluated by the quantum mechanical methods. In some simple cases, the classical model of anharmonic oscillator also can be used. We briefly discussed the fundamental nonlinear phenomena related to the second-and thirdorder susceptibilities. The former exists only in the media without the inversion symmetry, while the latter exists in any medium.
The typical nonlinear optical phenomena related to the second-order susceptibility are the sum-frequency generation, difference-frequency generation, and SHG. The typical nonlinear optical phenomena related to the third-order susceptibility are self-focusing, SPM, optical soliton formation and propagation, different types of SLS such as SBS and SRS, and FWM. SBS involves the acoustic waves. SRS involves the material excitations such as molecular vibrations. We also discussed some peculiarities of nonlinear optical processes in optical fibers. The nonlinear optical effects are widely used in optical communications and optical signal processing. | 5,056.8 | 2019-01-04T00:00:00.000 | [
"Physics"
] |
Multi-Agent Robot System to Monitor and Enforce Physical Distancing Constraints in Large Areas to Combat COVID-19 and Future Pandemics
: Random outbreaks of infectious diseases in the past have left a persistent impact on societies. Currently, COVID-19 is spreading worldwide and consequently risking human lives. In this regard, maintaining physical distance has turned into an essential precautionary measure to curb the spread of the virus. In this paper, we propose an autonomous monitoring system that is able to enforce physical distancing rules in large areas round the clock without human intervention. We present a novel system to automatically detect groups of individuals who do not comply with physical distancing constraints, i.e., maintaining a distance of 1 m, by tracking them within large areas to re-identify them in case of repetitive non-compliance and enforcing physical distancing. We used a distributed network of multiple CCTV cameras mounted to the walls of buildings for the detection, tracking and re-identification of non-compliant groups. Furthermore, we used multiple self-docking autonomous robots with collision-free navigation to enforce physical distancing constraints by sending alert messages to those persons who are not adhering to physical distancing constraints. We conducted 28 experiments that included 15 participants in different scenarios to evaluate and highlight the performance and significance of the present system. The presented system is capable of re-identifying repetitive violations of physical distancing constraints by a non-compliant group, with high accuracy in terms of detection, tracking and localization through a set of coordinated CCTV cameras. Autonomous robots in the present system are capable of attending to non-compliant groups in multiple regions of a large area and encouraging them to comply with the constraints.
Introduction
In the past few years, infectious diseases have been found to be very challenging and difficult to control due to their transferring effect, resulting in a large impact on society. They can spread at different geographical levels due to their ability of human-to-human transmission. According to the national public health institute named the Centers for Disease Control and Prevention (CDC) in the United States, an infectious disease can be declared as a 'pandemic' when a sudden and rapid increase in its cases is seen in the global population. In recent history, various pandemics have been reported. In a report created by Nicholas [1], the history of pandemics over a period of time has been explained with their impact in terms of death toll, which is summarized in Table 1. Based on the history of pandemics presented in this study [1], it is likely that more such pandemics will arise in the future. Therefore, nations should be prepared for them. It is imperative to understand the reasons behind the transmission of such infectious diseases. A large number of epidemiological studies have pointed out that the main path of the spread of such infectious diseases has been human-to-human transmission [2]. This indicates that infectious diseases spread when people maintain direct physical contact with each other. Physical distancing has always been recommended as the most effective safety measure to avoid the spread of such pandemics [3] and is currently being implemented by governments worldwide to slow the spread of the COVID-19 virus [4]. Despite the implementation of such measures, the span of the virus spread has been increasing with time due to the violation of set constraints because of lack of knowledge and carelessness. In this case, continuous monitoring is required to enforce the constraints to control the spread of the virus. It is difficult to manually monitor all areas; therefore, intelligent and autonomous systems are required for efficient and persistent monitoring of set constraints such as the use of facemasks, physical distancing, body temperature checks, etc. Moreover, modern interactive technology platforms such as robots hold potential to be used for the enforcement of those constraints through social interaction with people. Such approaches can be beneficial in reducing human-to-human interaction to potentially curb the spread of the virus. Robotics has a huge potential to play a vital role in the current fight against the COVID-19 virus [5]. Robots can be deployed for various purposes to help curb the spread of the virus. For instance, they can be utilized for mobile surveillance, disinfection, delivery, interactive awareness systems, companion robots, vital signs detection, etc. In past research, a wide range of multi-agent robot systems (MARS) based on heterogeneous distributed sensor networks have been proposed for effective and efficient surveillance in multiple scenarios [6,7]. MARS is a system that comprises fixed agents, i.e., sensors fixed at some location, and single or multiple mobile agents, i.e., robots.
During the current pandemic situation, performing operational tasks, such as surveillance, digital interaction, help desks and medical service provision, using robots has gained huge popularity [8]. Fan et al. [9] presented an autonomous quadruped robot to ensure physical distancing to combat COVID-19. The designed robot was supposed to roam around the place for persistent surveillance to detect violations of physical distancing constraints. In case of any violation, the robot informed the people through verbal cues to maintain a safe distance. Moreover, social robots are playing a vital role in combating COVID-19 by minimizing person-to-person interactions, especially in healthcare services [10][11][12]. Recently, Sathyamoorthy et al. [13] presented a robot system to monitor physical distancing constraints in crowds and enforce them through robots by displaying alert messages on the robot's mounted display. In the case of persistent non-compliance by a group of persons wandering from one place to another, the robot pursued that group and kept displaying the message. One of the limitations of this system is that while pursuing that group, there is a high possibility that the robot would not be able to attend to other non-compliant groups. Moreover, this study is missing the mechanism to track groups that remained unattended by the robot. Furthermore, there was no long-term tracking of non-compliant groups to further monitor their behavior after receiving the alert message from the robot. Consequently, it was not possible to track repetitive violations using this system. Due to these issues, this system may not be able to effectively enforce physical distancing constraints in large areas such as shopping malls, airports, etc. In this regard, we present an autonomous and interactive monitoring system with large-scale area coverage for effective and efficient monitoring to combat COVID-19 and future pandemics.
In the present study, we present a cooperative MARS for monitoring and enforcing physical distancing constraints in large areas through human-robot interaction (HRI) to combat COVID-19 and future pandemics. In the present study, a group of persons who violate physical distancing constraints is referred to as a non-compliant group. The aims of the proposed system are as follows: (1) persistent monitoring of large indoor areas using multiple CCTV cameras to detect the violation of physical distancing constraints; (2) interactive encouragement of non-compliant groups to adhere to physical distancing constraints by giving them an alert message through speech-based HRI; (3) long-term tracking and re-identification of non-compliant groups through a multi-camera system to alert them on highest priority and report to the control room in the case of repetitive violations of physical distancing constraints. As shown in Figure 1, the design of the proposed system is based on two types of agents: (1) fixed agents, i.e., calibrated CCTV cameras, and (2) mobile agents, i.e., self-docking autonomous robots with collision-free navigation. Both agents work cooperatively by mutually sharing useful information between each other.
Appl. Sci. 2021, 11, x FOR PEER REVIEW 3 of 18 alert message from the robot. Consequently, it was not possible to track repetitive violations using this system. Due to these issues, this system may not be able to effectively enforce physical distancing constraints in large areas such as shopping malls, airports, etc.
In this regard, we present an autonomous and interactive monitoring system with largescale area coverage for effective and efficient monitoring to combat COVID-19 and future pandemics.
In the present study, we present a cooperative MARS for monitoring and enforcing physical distancing constraints in large areas through human-robot interaction (HRI) to combat COVID-19 and future pandemics. In the present study, a group of persons who violate physical distancing constraints is referred to as a non-compliant group. The aims of the proposed system are as follows: (1) persistent monitoring of large indoor areas using multiple CCTV cameras to detect the violation of physical distancing constraints; (2) interactive encouragement of non-compliant groups to adhere to physical distancing constraints by giving them an alert message through speech-based HRI; (3) long-term tracking and re-identification of non-compliant groups through a multi-camera system to alert them on highest priority and report to the control room in the case of repetitive violations of physical distancing constraints. As shown in Figure 1, the design of the proposed system is based on two types of agents: (1) fixed agents, i.e., calibrated CCTV cameras, and (2) mobile agents, i.e., self-docking autonomous robots with collision-free navigation. Both agents work cooperatively by mutually sharing useful information between each other. In the present system, the persistent monitoring of physical distancing constraints is performed based on the visual information received from the distributed network of multiple CCTV cameras mounted within the building. The team of robots stay at their docking stations until a violation of physical distancing constraints is detected by the cameras. In the case of a detected violation, the system shares the location of the non-compliant group with the robot that is located closest to the area of the building where the violation was detected. After receiving the location, the robot navigates to the given location to convey an alert message to the target non-compliant group. The system architecture with a detailed description of each functional module is presented in Section 3. The main contributions of the present study are summarized as follows: 1. We propose an intelligent and cooperative MARS for the efficient monitoring of physical distancing constraints and interactively enforcing them through HRI to combat COVID-19 and future pandemics. To the best of our knowledge, we are the first to propose such a monitoring system, which is based on a distributed network of multiple cameras and a multi-robot system (MRS) to combat ongoing and future pandemics. In the present system, the persistent monitoring of physical distancing constraints is performed based on the visual information received from the distributed network of multiple CCTV cameras mounted within the building. The team of robots stay at their docking stations until a violation of physical distancing constraints is detected by the cameras. In the case of a detected violation, the system shares the location of the noncompliant group with the robot that is located closest to the area of the building where the violation was detected. After receiving the location, the robot navigates to the given location to convey an alert message to the target non-compliant group. The system architecture with a detailed description of each functional module is presented in Section 3. The main contributions of the present study are summarized as follows:
1.
We propose an intelligent and cooperative MARS for the efficient monitoring of physical distancing constraints and interactively enforcing them through HRI to combat COVID-19 and future pandemics. To the best of our knowledge, we are the first to propose such a monitoring system, which is based on a distributed network of multiple cameras and a multi-robot system (MRS) to combat ongoing and future pandemics.
2.
We develop a pipeline for group re-identification through person re-identification using a deep learning-based technique to track and re-identify non-compliant groups through the multi-camera system. This method ensures the long-term tracking of non-compliant groups that are wandering from one place to another in large areas, attending to them at highest priority through a robot and notifying the security control room in a timely manner in case of repetitive violations.
3.
Based on our proposed system, we ensured that all non-compliant groups were inclusively tracked and received the alert message about a breach of physical distancing constraints through HRI.
The rest of the paper is organized as follows. Section 2 provides an overview of the existing work related to multi-agent systems (MAS) with regard to surveillance, the effectiveness of physical distancing and the potential of robotics to combat COVID-19 and future pandemics. The proposed system with detailed descriptions of its modules is presented in Section 3. Evaluation metrics and experimental results are described in Section 4. Finally, we conclude our proposed system in Section 5 with a discussion about limitations and future directions.
Related Work
In this section, we review the previous literature on robotic systems specifically categorized into MAS for surveillance, the potential of robotics to combat COVID-19, the effectiveness of physical distancing and emerging technologies to monitor physical distancing.
Multi-Agent Systems for Intelligent Surveillance
Intelligent surveillance includes various tasks such as detection, tracking and understanding different behaviors in various environments [14]. MAS has gained huge attention in recent years due to its broad range of applications such as cooperative surveillance, distributed tracking of objects and intrusion detection [15][16][17]. Various advancements and methods have been proposed and implemented based on MAS to increase efficiency in surveillance. Milella et al. [6] implemented a MARS that is based on fixed and mobile agents for the active surveillance of places such as museums, airports, warehouses, etc. The system was able to detect the intrusion of persons within forbidden areas and send the location to the mobile agent, i.e., a robot, for further exploration of that area. Furthermore, Pennisi et al. [7] proposed an MRS for surveillance through a network of distributed sensors to detect a person through fixed sensors and send a robot to the location of the detected person for inspection and stopping the target person in case of any anomaly by blocking their way. Du et al. [18] presented a strategy for MAS-based surveillance to track an evader through cooperation between mobile agents. In another work, Mostafa et al. [19] proposed an autonomy model based on fuzzy logic to manage the autonomy of a MAS in complex environments. The aim of this model was to assist the autonomy management of the agents by helping them in making competent autonomous decisions. The application of this model was presented in the monitoring of movements of elderly people. In another work, Kariotoglou et al. [20] developed a framework based on stochastic reachability and hierarchical task allocation to solve the dimensionality problem faced by state-space gridding solutions based on dynamic programming for Markov decision processes in autonomous surveillance with a collection of pan-tilt cameras. The authors conducted the experiment with the proposed framework on a setup targeting industrial pan-tilt cameras and mobile robots. A MAS that includes robot as mobile agents is referred to as a MARS. During persistent surveillance through MARS, robots sequentially visit regions of interest (ROIs) based on applied constraints known as temporal logic (TL). Aksaray et al. [21] presented a method to minimize the time between visits of robots to ROIs by sharing the times of visits among them while considering their TLs to enhance efficiency and reduce redundant visits to those regions. Wu et al. [22] proposed an optimal method to sense robots based on less energy consumption for efficiently adjusting the position of mobile relay for maintaining the quality of the wireless link while the robots are moving. In another work, Jahn et al. [23] proposed a distributed technique for a team of robots to plan deformation while they are moving around a region to create a fence for perimeter surveillance and need to take this fence to another region. Scherer et al. [24] introduced multiple heuristics with various planning perspectives for convex-grid graphs and combined them with the tree traversal approach for better communication in the MRS for persistent surveillance with connectivity constraints.
Role of Robotics during COVID-19
The outbreak of the COVID-19 pandemic has negatively impacted our society. This situation is inevitable and requires modern solutions. COVID-19 has interrupted our usual face-to-face interactions and frustrated us because of the possibility of spreading the virus through physical interaction. The fourth industrial revolution, known as Industry 4.0, should fulfil the requirements to effectively control and manage the COVID-19 pandemic [25]. The presence of intelligent robots surged in various fields, e.g., autonomous driving, medical, rehabilitation, education, companionship, surveillance, information guide, telepresence, etc., to minimize the potential spread of this virus [26]. Robot-assisted surgeries are also being taken into positive consideration in surgical environments [8,27]. Furthermore, Mahdi Tavakoli et al. [28] presented an analysis of the robotics and autonomous systems for healthcare during COVID-19. Based on their analysis, they recommended immediate investment in robotics technology as a good step toward making healthcare services safe for both patients and healthcare workers. Moreover, the ongoing pandemic is affecting the social well-being of people and triggering feelings of loneliness in them. Social and companion robots have been considered as a potential solution to mitigate these feelings of loneliness through continuous social interaction with less fear of spreading the infectious disease [29][30][31][32][33]. Rovenso recently developed a UV disinfectant robot targeting offices and commercial spaces [34]. Moreover, there is another autonomous robot named AIMBOT developed by UBTECH Robotics that performs disinfection tasks at Shenzhen Third Hospital [35].
Effectiveness of Physical Distancing
Multiple works have simulated the spread of the virus [36][37][38] to show the effectiveness of different social distancing measures. The ratio between the total cases of infections during the entire course of the outbreak is termed as the attack rate [39]. According to Mao [36], the attack rate can be decreased up to 82% if three consecutive days are eliminated from working days of a workplace setting. Within the same setting, the attack rate can be reduced up to 39.22% [37] or 11-20% by maintaining a physical distance of 6 feet between the individuals at the workplace depending upon the frequency of contact between them [38].
Emerging Technologies to Monitor Physical Distancing
Recently, different methods have been proposed to monitor the physical distancing between people. Workers in the warehouses of Amazon are monitored through CCTV cameras to detect physical distancing breaches [40]. Other techniques are based on the use of wearable devices [41,42]. These devices use the technologies of Bluetooth or ultra-wide band (UWB). Moreover, different companies such as Google and Apple are developing applications to trace the contacts of people so that alert messages can be delivered to the users if they come in close contact with an infected person [43]. In an extensive survey by Nguyen et al. [39], the technologies that can be used to track people to detect if they are following social distancing rules properly are discussed. The pros and cons of these technologies, such as WiFi, RFID, Bluetooth, artificial intelligence and computer vision, are also discussed in this comprehensive survey.
Proposed System
In this section, we first describe the hardware architecture used to build the proposed system. Then, we present our method with a detailed description about each functional module.
Hardware Architecture
We developed the robot on top of the smart base named 'EAIBOT SMART' [44], which is shown in Figure 2. The smart base consists of dual liDAR sensors of type YDLIDAR G4 to map the surrounding area. One of the liDAR sensors was mounted on top while the second was mounted beneath the smart base. Dual liDAR sensors mounted at two different heights made the collision avoidance and mapping system more robust. Collision avoidance was aided by a gyroscope and five ultrasonic sensors mounted in different directions to cover the world at 360 • . We used built-in collision avoidance, mapping and navigation in this robot base. Moreover, the smart base had a 10-h battery life, which was long enough for it to survive for longer durations. It had a docking station as well and was able to autonomously navigate to the docking station to automatically put itself on charge.
Proposed System
In this section, we first describe the hardware architecture used to build the proposed system. Then, we present our method with a detailed description about each functional module.
Hardware Architecture
We developed the robot on top of the smart base named 'EAIBOT SMART' [44], which is shown in Figure 2. The smart base consists of dual liDAR sensors of type YDLI-DAR G4 to map the surrounding area. One of the liDAR sensors was mounted on top while the second was mounted beneath the smart base. Dual liDAR sensors mounted at two different heights made the collision avoidance and mapping system more robust. Collision avoidance was aided by a gyroscope and five ultrasonic sensors mounted in different directions to cover the world at 360°. We used built-in collision avoidance, mapping and navigation in this robot base. Moreover, the smart base had a 10-h battery life, which was long enough for it to survive for longer durations. It had a docking station as well and was able to autonomously navigate to the docking station to automatically put itself on charge. We equipped our robot with a laptop on top of it to use its RGB camera, speaker and microphone for HRI. This laptop can be replaced with a tablet or any other setup having the sensors mentioned above for HRI. Furthermore, we set up the CCTV RGB cameras with the resolution of 1080, mounted at heights that provide angled views of different locations of the building so as to monitor different areas. We used a machine with the Intel i7 10th generation CPU and an Nvidia RTX 2070 Super GPU to process video streams received from the CCTV cameras.
Our Method
We used the Robot Operating System (ROS) [45] with the 'kinetic' distribution named to build our system. The ROS is very efficient in structuring and managing robot applications. Moreover, it ensures a modular and expandable system. The overall system architecture is shown in Figure 3.
In this section, we describe the main components of our method to detect the violation of physical distancing constraints and enforce them using HRI. The main components of our method are as follows: (1) Detection, localization and tracking of the persons; (2) Search for non-compliant groups based on the violation of physical distancing constraints; (3) Re-identification of the moving non-compliant groups through a multi-camera system; (4) Prioritization of the non-compliant groups; (5) Delivery of alert message to non-compliant groups through speech-based HRI. We equipped our robot with a laptop on top of it to use its RGB camera, speaker and microphone for HRI. This laptop can be replaced with a tablet or any other setup having the sensors mentioned above for HRI. Furthermore, we set up the CCTV RGB cameras with the resolution of 1080, mounted at heights that provide angled views of different locations of the building so as to monitor different areas. We used a machine with the Intel i7 10th generation CPU and an Nvidia RTX 2070 Super GPU to process video streams received from the CCTV cameras.
Our Method
We used the Robot Operating System (ROS) [45] with the 'kinetic' distribution named to build our system. The ROS is very efficient in structuring and managing robot applications. Moreover, it ensures a modular and expandable system. The overall system architecture is shown in Figure 3.
Monitoring Physical Distancing Constraints
The criterion used for detecting the violation of physical distancing constraints by non-compliant groups in our method was to detect the physical distance of less than 1 m. All CCTV cameras continuously monitored the environment within their respective fields of view (FoV) to detect non-compliant groups. The main components of this functional module are as follows. In this section, we describe the main components of our method to detect the violation of physical distancing constraints and enforce them using HRI. The main components of our method are as follows: (1) Detection, localization and tracking of the persons; (2) Search for non-compliant groups based on the violation of physical distancing constraints; (3) Re-identification of the moving non-compliant groups through a multi-camera system; (4) Prioritization of the non-compliant groups; (5) Delivery of alert message to non-compliant groups through speech-based HRI.
Monitoring Physical Distancing Constraints
The criterion used for detecting the violation of physical distancing constraints by non-compliant groups in our method was to detect the physical distance of less than 1 m. All CCTV cameras continuously monitored the environment within their respective fields of view (FoV) to detect non-compliant groups. The main components of this functional module are as follows.
Person Detection and Tracking
Object detection [46,47], localization and tracking have been active areas of research. For person detection and tracking, we used a pipeline based on the tracking algorithm proposed in [48] and the object detection method named 'You Only Look Once (YOLO)' version four, i.e., YOLOv4 [49]. According to the results presented in [49], it outperformed state-of-the-art object detection methods such as 'EfficientDet' [46] and its own previous version named 'YOLOv3' [47] in terms of the average precision and frames per second (FPS) speed. The experimental results showed that the pipeline achieved very good performance in terms of accuracy and speed. Input to this pipeline was RGB images received from the CCTV camera and output was the set of bounding boxes, i.e., top left corner coordinates, width, and height, for the detected persons in the given image. It also generated a unique identity for each detected person that remains same while the person remains in the current FoV of the camera. In order to consume this pipeline for video streams from each CCTV camera within the multi-camera system for monitoring, we used multi-threading and asynchronous calls. Each video stream was handled in an independent thread.
Localization of Detected Persons
All CCTV cameras were mounted in such a way that they provided angled views of the ground plane. In order to accurately calculate the distance between the persons, we preferred the top view of the ground plane, which represents the exact location of persons' feet on ground. For this purpose, we converted the angled view to the top view by applying the homography matrix to the four reference points on the angled view from CCTV. These reference points were manually selected during camera calibration so that they could cover the maximum area of the FoV of the CCTV camera, as shown in Figure 4. The conversions of these points were performed by using Equation (1).
x topView y topView = H * x angledView y angledView (1) In Equation (1), x angledView and y angledView indicate the pixel coordinates of one of the four reference points in the angled image view from the CCTV; x topView and y topView represent the same point after conversion to top view; and 'H' represents the scaled 3 × 3 homography translation-matrix. To transform the angled view to the top view of any detected person, as shown in Equation (2), we used the middle point of the bottom corners' points of the bounding box yielded by the detection and tracking pipeline (Section 'Person Detection and Tracking') against that detected person. This middle point represented the feet of the detected person.
Distance Estimation and Search for Non-Compliant Groups
After transforming the pixel coordinates of the detected persons into top view, we estimated the distance between each person. Here, we treated each transformed position as a node. The distance between two nodes was calculated using the formula of Euclidean distance. The pair-wise Euclidean distances between multiple persons are shown in Table 2. Then, a truth table was created based on this correlation matrix of Euclidean distances. Any connection or Euclidean distance less than one meter between the two nodes was denoted as True (Table 3). A modified depth-first search algorithm [50,51] was used to find all paths between the nodes in the environment, with no repeated nodes. In an environment with only one path, the algorithm can find this path in 'O(V + E)' time, where 'V' and 'E' represent the number of vertices and edges in the graph, respectively. However, there is a possibility of a very large number of paths in a graph, for instance, 'O(!n)' in a graph of order 'n'. In order to deal with this problem, the search on each path was terminated when it reached the threshold of group size 'x c ', where 'c' represents the number of nodes, i.e., persons, in the group. In our case, we considered 'x c = 10' as a threshold value to stop searching. For each returned path, the average position of all the nodes in a group was considered as the position of that group in the map. This information about the position of a non-compliant group was used to navigate the robot. Representation of the list of non-compliant groups of detected persons is shown in Equation (3).
where 'l' is the unique identity assigned to the group 'G l ', 'k' is the unique identity of the person 'P k ', 'c' is the total number of nodes and 't l ' is the duration of non-compliance by the lth non-compliant group 'G l '. 't l ' was computed by the tracking of non-compliant groups based on unique identities given to the tracked persons.
Re-Identification of Non-Compliant Groups
This module of the system ensured the persistent monitoring and tracking of the non-compliant groups that were moving within the environment through a multi-camera setup. It was performed using the motion tracking and the re-identification modules based on the information communication between multiple CCTV cameras through the server. Re-identification was performed in parallel to the detection, tracking and localization of the non-compliant groups, as shown in Figure 3. The duration of non-compliance (t l ) by a moving non-compliant group kept incrementing until that group was re-identified while again violating the physical distancing constraints. The flow diagram in Figure 5 shows the overall process of group re-identification.
path, the algorithm can find this path in 'O(V + E)' time, where 'V' and 'E' represent the number of vertices and edges in the graph, respectively. However, there is a possibility of a very large number of paths in a graph, for instance, 'O(!n)' in a graph of order 'n'. In order to deal with this problem, the search on each path was terminated when it reached the threshold of group size 'xc', where 'c' represents the number of nodes, i.e., persons, in the group. In our case, we considered 'xc = 10' as a threshold value to stop searching. For each returned path, the average position of all the nodes in a group was considered as the position of that group in the map. This information about the position of a non-compliant group was used to navigate the robot. Representation of the list of non-compliant groups of detected persons is shown in Equation (3).
where 'l' is the unique identity assigned to the group 'Gl', 'k' is the unique identity of the person 'Pk', 'c' is the total number of nodes and 'tl' is the duration of non-compliance by the l th non-compliant group 'Gl'. 'tl' was computed by the tracking of non-compliant groups based on unique identities given to the tracked persons.
Re-Identification of Non-Compliant Groups
This module of the system ensured the persistent monitoring and tracking of the noncompliant groups that were moving within the environment through a multi-camera setup. It was performed using the motion tracking and the re-identification modules based on the information communication between multiple CCTV cameras through the server. Re-identification was performed in parallel to the detection, tracking and localization of the non-compliant groups, as shown in Figure 3. The duration of non-compliance (tl) by a moving non-compliant group kept incrementing until that group was re-identified while again violating the physical distancing constraints. The flow diagram in Figure 5 shows the overall process of group re-identification.
Motion Tracking
In the process of the motion tracking of each non-compliant group, we first selected bounding boxes based on the identities of the detected persons belonging to a non-
Motion Tracking
In the process of the motion tracking of each non-compliant group, we first selected bounding boxes based on the identities of the detected persons belonging to a non-compliant group, yielded by the section 'Distance Estimation and Search for Non-Compliant Groups'. After selecting the bounding boxes, we found the center position of the non-compliant group in an image received from the CCTV camera by taking the average of the pixel coordinates representing the top leftmost and bottom rightmost corners among all corners of the bounding boxes selected in the previous step. We repeated the previous two steps with the skip of five frames in the video stream to find the position of that group in the next frame. After finding the positions of the group in two different frames, we calculated the Euclidean distance (Equation (3)) between pixel coordinates at both positions of the group. It was considered as a change in position if the calculated distance reached a set threshold value. We divided the FoV of the CCTV camera into boundary regions, which is shown in Figure 6. pliant Groups'. After selecting the bounding boxes, we found the center position of the non-compliant group in an image received from the CCTV camera by taking the average of the pixel coordinates representing the top leftmost and bottom rightmost corners among all corners of the bounding boxes selected in the previous step. We repeated the previous two steps with the skip of five frames in the video stream to find the position of that group in the next frame. After finding the positions of the group in two different frames, we calculated the Euclidean distance (Equation (3)) between pixel coordinates at both positions of the group. It was considered as a change in position if the calculated distance reached a set threshold value. We divided the FoV of the CCTV camera into boundary regions, which is shown in Figure 6. If the new position of the moving non-compliant group was located within the boundary regions, images of the persons of that group were extracted based on the bounding boxes yielded by the person detection pipeline, which is discussed in the section 'Person Detection and Tracking'. Later, these images were stored in the database to re-identify that non-compliant group in the case of being detected through other CCTV cameras while again violating the physical distancing constraints. Information kept in the name of each stored image of a person was person identity and group identity as 'Plk'.
Re-Identification
We developed a pipeline for group re-identification based on the existing algorithm for person re-identification. In order to perform the re-identification of non-compliant groups, we used a lightweight and state-of-the-art person re-identification deep learning model termed as Omni-Scale Network (OSNet) [52]. We used a pre-trained model, which was trained on six widely used person re-identification image datasets: Market1501 [53], CUHK03 [54], MSMT17 [55], DukeMTMC-reID (Duke) [56,57], GRID [58] and VIPeR [59]. In this person re-identification technique, the database of existing images of the persons, from which the model has to re-identify the target person, is called 'Gallery', and the image of the target person is called the 'Query' image. For implementation, we used a library for deep learning person re-identification named Torchreid [60], which is based on a wellknown machine learning library named PyTorch.
Group re-identification was performed whenever a non-compliant group was detected through any CCTV camera to re-identify if it was previously detected by another camera while violating the physical distancing constraints. We cropped and extracted images of the persons in non-compliant groups detected by the person detection and tracking pipeline (Section 'Person Detection and Tracking') through a current CCTV camera for using them as query images in the re-identification module. The Gallery was based on the images of the persons present in moving non-compliant groups, which were tracked and extracted by the motion tracking module (Section 'Motion Tracking') We used the term re-identification score 'Sl' to determine if the target non-compliant group was reidentified, where 'l' represents the unique identity of that group. It was based on how If the new position of the moving non-compliant group was located within the boundary regions, images of the persons of that group were extracted based on the bounding boxes yielded by the person detection pipeline, which is discussed in the section 'Person Detection and Tracking'. Later, these images were stored in the database to re-identify that non-compliant group in the case of being detected through other CCTV cameras while again violating the physical distancing constraints. Information kept in the name of each stored image of a person was person identity and group identity as 'P lk '.
Re-Identification
We developed a pipeline for group re-identification based on the existing algorithm for person re-identification. In order to perform the re-identification of non-compliant groups, we used a lightweight and state-of-the-art person re-identification deep learning model termed as Omni-Scale Network (OSNet) [52]. We used a pre-trained model, which was trained on six widely used person re-identification image datasets: Market1501 [53], CUHK03 [54], MSMT17 [55], DukeMTMC-reID (Duke) [56,57], GRID [58] and VIPeR [59]. In this person re-identification technique, the database of existing images of the persons, from which the model has to re-identify the target person, is called 'Gallery', and the image of the target person is called the 'Query' image. For implementation, we used a library for deep learning person re-identification named Torchreid [60], which is based on a well-known machine learning library named PyTorch.
Group re-identification was performed whenever a non-compliant group was detected through any CCTV camera to re-identify if it was previously detected by another camera while violating the physical distancing constraints. We cropped and extracted images of the persons in non-compliant groups detected by the person detection and tracking pipeline (Section 'Person Detection and Tracking') through a current CCTV camera for using them as query images in the re-identification module. The Gallery was based on the images of the persons present in moving non-compliant groups, which were tracked and extracted by the motion tracking module (Section 'Motion Tracking') We used the term re-identification score 'Sl' to determine if the target non-compliant group was re-identified, where 'l' represents the unique identity of that group. It was based on how many persons were re-identified from the group based on the query images. 'S l = 1' meant that one person from the query images was re-identified. The target non-compliant group was considered as re-identified in case of 'S l ≥ 2', which means that at least two persons were re-identified from query images. The steps included in the pipeline developed for performing group re-identification are shown in Algorithm 1. Where 'l' represents the identity of the group, 'k' represents the identity of the person and 'D' represents the Gallery database of the persons belonging to the moving noncompliant groups. Whenever a non-compliant group was re-identified, the corresponding identities of the persons from that group were updated in the database to keep track of them.
Prioritization of Non-Compliant Groups
Prioritization refers to which non-compliant group should be addressed first by the robot. Only those non-compliant groups were considered in this step who were locked at a location and not moving, which was decided based on the section 'Motion Tracking'. It was performed based on two factors, namely, the size of the non-compliant group (x c ) and the duration of non-compliance (t l ). Prioritization was performed in a hierarchical way based on these two factors. 't l ' was considered first while prioritizing the non-compliant groups because the continuous violation of physical distancing constraints over a longer period can be more dangerous. 't l ' was divided into different ranges with a constant difference. It could be '0 min ≤ t l ≤ 5 min', '5 min ≤ t l ≤ 10 min', etc. The non-compliant groups within the higher range of 't l ' were given high prioritization based on 'x c '. The group with a higher value of 'x c ' was given higher priority. On top of these two factors, a non-compliant group that was re-identified while again violating physical distancing constraints was given highest priority due to its continuity in violation.
Enforcement of Physical Distancing Constraints
After the prioritization of locked non-compliant groups, a prioritized list of locations of these groups was transmitted to the social robot for sending it to these locations one by one and giving alert messages to these non-compliant groups through speech-based HRI. The robot received an updated list of locations of non-compliant groups after a fixed interval of time (i.e., 5 s) after every search for non-compliant groups. Once the robot arrived in the vicinity of the location of a group, it played an audio message to alert the persons in the group about violation and encourage them to maintain a physical distance of at least one meter. Moreover, the robot explained to them the reason that they were approached by it was because they were not abiding by the rule of maintaining a safe physical distance.
Experimental Results
In this section, we explain the experimental setup and the metrics used to evaluate the proposed system and provide an analysis of the results obtained during experiments. We tested our system in the main building of our university (Norwegian University of Science and Technology (NTNU)) with a total of 15 participants divided into four different groups. All groups were given demonstrations of the test scenarios, which are discussed in Sections 4.1 and 4.2 Three cameras were mounted in different locations of the building to monitor three different areas. Overall, 28 experiments were conducted to test the proposed system. The number of violations detected and the successful enforcements performed under different configurations are shown in Table 4. Table 4. Overall performance of proposed system with respect to different configurations.
Configuration Total Experiments (Violations Made) Number of Violations Detected Number of Enforcements
Without Group Re-identification 19 Single Group at a Time Multiple Groups at a Time 17 11 6 With Group Re-identification 9 8 8 In overall experiments, the system was unable to properly detect the violation on three occasions. In two of them, the reason was that one individual was fully occluded by the other person standing in front them in-line with the camera's angled view in a two-person group. The third system failure was due to the failure in group re-identification, which occurred due to a large variation in lighting conditions. However, the violation was detected, and enforcement was performed precisely. Details of these experiments and metrics used to evaluate the proposed system and their results are discussed as follows:
Accuracy of Non-Compliant Group Localization
This evaluation metric was used to measure the accuracy of localization based on comparison between the ground truth location of the non-compliant group and the location estimated using our method. Prior to the experiments, ground truth locations were manually marked on the 2D map of the ground plane based on the Cartesian coordinate system used by the robot for navigation and localization. Participants were planted at those ground truth locations in order to test the accuracy of our system with respect to the localization of non-compliant groups. The plot of the ground truths as green circles and estimated locations of non-compliant groups as orange circles is shown in Figure 7.
Science and Technology (NTNU)) with a total of 15 participants divided into four different groups. All groups were given demonstrations of the test scenarios, which are discussed in Sections 4.1 and 4.2. Three cameras were mounted in different locations of the building to monitor three different areas. Overall, 28 experiments were conducted to test the proposed system. The number of violations detected and the successful enforcements performed under different configurations are shown in Table 4. Table 4. Overall performance of proposed system with respect to different configurations.
Configuration Total Experiments (Violations Made) Number of Violations Detected Number of Enforcements
Without Group Re-identification 19 Single Group at a Time Multiple Groups at a Time 17 11 6 With Group Reidentification 9 8 8 In overall experiments, the system was unable to properly detect the violation on three occasions. In two of them, the reason was that one individual was fully occluded by the other person standing in front them in-line with the camera's angled view in a twoperson group. The third system failure was due to the failure in group re-identification, which occurred due to a large variation in lighting conditions. However, the violation was detected, and enforcement was performed precisely. Details of these experiments and metrics used to evaluate the proposed system and their results are discussed as follows:
Accuracy of Non-Compliant Group Localization
This evaluation metric was used to measure the accuracy of localization based on comparison between the ground truth location of the non-compliant group and the location estimated using our method. Prior to the experiments, ground truth locations were manually marked on the 2D map of the ground plane based on the Cartesian coordinate system used by the robot for navigation and localization. Participants were planted at those ground truth locations in order to test the accuracy of our system with respect to the localization of non-compliant groups. The plot of the ground truths as green circles and estimated locations of non-compliant groups as orange circles is shown in Figure 7. The above plot shows the non-compliant groups being localized with respect to the Cartesian coordinate system fixed to the robot. The maximum error observed between the ground truth and the estimated location was 0.24 m and the average error observed between them was approximately equal to 0.12 m. The detection and localization of the non-compliant groups during two different experiments performed with the proposed system and navigation of the robot to these groups are shown in Figure 8.
The above plot shows the non-compliant groups being localized with respect to the Cartesian coordinate system fixed to the robot. The maximum error observed between the ground truth and the estimated location was 0.24 m and the average error observed between them was approximately equal to 0.12 m. The detection and localization of the noncompliant groups during two different experiments performed with the proposed system and navigation of the robot to these groups are shown in Figure 8.
Accuracy of Re-Identification of Non-Compliant Groups
We designed multiple scenarios in order to test the accuracy of the proposed system in the re-identification of non-compliant groups and the enforcement of physical distancing constraints. As shown in Table 4, eight experiments were based on group re-identification. Groups of participants were asked to violate physical distancing constraints within the FoV of one of the three mounted CCTV cameras and then walk to the other side and perform the same action within the FoV of another CCTV camera. Here, the purpose was to re-identify the non-compliant groups while repeating the violation of physical distancing constraints. We stored the video streams captured by all the mounted CCTV cameras while conducting the experiments for creating our own dataset and testing the overall accuracy of the re-identification deep learning model with respect to the environmental conditions in our designed test scenarios. We used YOLOv4 [49] to crop the images of the persons from all video frames and then annotated each person with a personal as well as camera identity. In this way, we created our own dataset, which consisted of 7687 images of 15 different persons captured through three different CCTV cameras. After annotation of the dataset, we divided it into two categories: Gallery and Query images. Ten percent of the total images were categorized as Query images and rest were used as Gallery images. The results of the re-identification module based on the deep learning model on our created dataset are shown in Table 5.
Accuracy of Re-Identification of Non-Compliant Groups
We designed multiple scenarios in order to test the accuracy of the proposed system in the re-identification of non-compliant groups and the enforcement of physical distancing constraints. As shown in Table 4, eight experiments were based on group re-identification. Groups of participants were asked to violate physical distancing constraints within the FoV of one of the three mounted CCTV cameras and then walk to the other side and perform the same action within the FoV of another CCTV camera. Here, the purpose was to re-identify the non-compliant groups while repeating the violation of physical distancing constraints. We stored the video streams captured by all the mounted CCTV cameras while conducting the experiments for creating our own dataset and testing the overall accuracy of the re-identification deep learning model with respect to the environmental conditions in our designed test scenarios. We used YOLOv4 [49] to crop the images of the persons from all video frames and then annotated each person with a personal as well as camera identity. In this way, we created our own dataset, which consisted of 7687 images of 15 different persons captured through three different CCTV cameras. After annotation of the dataset, we divided it into two categories: Gallery and Query images. Ten percent of the total images were categorized as Query images and rest were used as Gallery images. The results of the re-identification module based on the deep learning model on our created dataset are shown in Table 5. Figure 9 shows the enforcement of physical distancing constraints based on the reidentification of a non-compliant group, and Figure 10 presents some of the results that were generated while testing group re-identification. Figure 9 shows the enforcement of physical distancing constraints based on the reidentification of a non-compliant group, and Figure 10 presents some of the results that were generated while testing group re-identification.
(a) Camera I ( b) Camera II
Conclusions, Limitations and Future Work
A novel system for monitoring and enforcing physical distancing constraints in large areas is proposed in this paper, which consists of multiple CCTV cameras and an MRS. Monitoring was conducted using a multi-camera system to detect and track groups of Figure 10. Images of persons re-identified by re-identification module based on images stored in database (Gallery) and given query images.
Conclusions, Limitations and Future Work
A novel system for monitoring and enforcing physical distancing constraints in large areas is proposed in this paper, which consists of multiple CCTV cameras and an MRS. Monitoring was conducted using a multi-camera system to detect and track groups of persons who did not comply with physical distancing constraints. We proposed a pipeline for group re-identification to detect repetitive violations of physical distancing constraints by a non-compliant group of individuals. We used an autonomous, collision-free mobile robot for the enforcement of physical distancing constraints by attending to non-compliant groups through HRI and encouraging them to comply with the set constraints. The effectiveness and accuracy of our system were demonstrated in terms of the detection and localization of non-compliant groups, group re-identification in the case of repetitive non-compliance and the enforcement of physical distancing constraints through HRI. We conclude that the monitoring of physical distancing constraints with group re-identification is effective in the long-term tracking of non-compliant groups to detect repetitive violations and notify the security control room in a timely manner to stop them. We also considered the ethical concerns in our system through efficient and secure data gathering and data handling mechanisms.
A limitation of our system is that the re-identification of the non-compliant group is not deployed by the robot. The re-identification of the non-compliant group through the robot would increase the overall accuracy of the system with regard to the enforcement of physical distancing constraints. Due to COVID-19 restrictions, we could only test the system in controlled settings with a low crowd density. Moreover, due to the same reason, we could not evaluate the social impact of our system.
In the future, testing our system in environments with high crowd densities is required to make it more robust. Furthermore, usability tests with security officials need to be conducted to demonstrate the effectiveness of the proposed system. In future studies, we will develop a mechanism based on our MARS to predict the future location from the past motion trajectory of a non-compliant group that is wandering within the environment. This can help to attend to non-compliant groups that are wandering within the area in a timely manner. | 12,125.4 | 2021-08-04T00:00:00.000 | [
"Engineering",
"Computer Science",
"Medicine",
"Environmental Science"
] |
Universality of breath figures on two-dimensional surfaces: an experimental study
Droplet condensation on surfaces produces patterns, called breath figures. Their evolution into self-similar structures is a classical example of self-organization. It is described by a scaling theory with scaling functions whose universality has recently been challenged by numerical work. Here, we provide a thorough experimental testing, where we inspect substrates with vastly different chemical properties, stiffness, and condensation rates. We critically survey the size distributions, and the related time-asymptotic scaling of droplet number and surface coverage. In the time-asymptotic regime they admit a data collapse: the data for all substrates and condensation rates lie on universal scaling functions.
Breath figures are droplets patterns formed by a supersaturated vapour flux condensing on a substrate [1]. They appear in nature, for example when dew deposits on leaves, spider nets and vegetable fibers. They also have an appealing potential for technological purposes. Possible applications include dew harvesting devices for water collection [2-4], heat exchangers with increased efficiency [5][6][7][8], and patterned surfaces production [9][10][11][12][13]. Breath figure self-assembly has been exploited to fabricate porous bead-on-string fibers [13]. Droplets have been used as a template to produce ordered porous materials for membrane manufacturing [14], as well as to introduce desired materials inside textiles by means of three-dimensional porous microstructures [15]. Recent studies have shown that droplet patterns on surfaces can also give origin to structural colours [16]. In all these applications, understanding the droplet formation process and the evolution of the condensation patterns is a crucial step towards controllability and further technological development.
The theory of breath figures is based on scaling arguments [17][18][19][20][21][22]. The condensation process leading to the formation of a droplet pattern develops in several phases [23], corresponding to different time and length scales characterizing the phenomenon. A first nucleation of droplets (primary nucleation) is followed by their initial growth as a monodisperse population. After some time, the droplets start to merge, releasing space on the substrate, which is used for further nucleation (secondary nucleation). The distribution continues to evolve, becoming polydisperse and eventually self-similar [17,18,24,25]. Scaling concepts, closely related to fractals theory, can be used to describe the evolution of the droplet distribution [25,26]. Experiments [17,[26][27][28][29] and simulations [19,26,30,31] have shown that, in the late-time regime, the droplet size distribution is bimodal, with two well-separated parts. In particular, it features a bell-shaped peak, corresponding to the monodisperse population of the largest hence oldest droplets, and a power law distribution of smaller droplets, which is terminated by a cutoff function at the nucleation length scale. Scaling manifests itself in a data collapse of droplet distributions taken at different times, and in the time-dependence of the moments of the distribution. In the long-time regime they approach power laws with exponents that can be expressed in terms of a single non-trivial exponent, denoted as "polydispersity exponent τ ." In particular, the asymptotic time decay of the droplet number and the porosity (ratio between non-wetted area and total substrate area) are described by the same exponent. It was widely expected that the polydispersity exponent is a universal number, depending only on the dimensionality of the system [17-22, 32, 33]. Its value was calculated [33] by assuming universality [34]. However, the exponents found in recent numerical simulations [26,31] differ clearly from the prediction. This calls for experimental studies. So far experimental studies have mostly addressed the early stages of the droplet nucleation [23,27,35], and the initial phases of the polydisperse transient regime [36,37], with noticeable exceptions in [8,26]. The impact of surface properties and condensation rates has not been examined.
Here, we present extensive experimental data for breath figures on a range of substrates with different stiffness, surface chemistry, and temperature. The different regimes of surface coverage are discussed with an emphasis on the self-similar phase. We observe the predicted scaling and critically survey the predicted data collapse for all data of all experimental settings. The conclusion, that there are universal scaling functions, is substantiated by Kinetic Monte Carlo simulations.
Evolution of droplet patterns. -We induce the nucleation and growth of water droplets on substrates consisting of a glass cover slip either coated with a 30 µm layer of silicone (Dow Corning Sylgard 184) or silanized with tridecafluoro-1,1,2,2tetrahydrooctyl trichlorosilane (SiHCl 3 ) or hexametildisilazane (HMDS). The upper side of the substrates is exposed to a steady flux of air saturated with water at room temperature. On the bottom, they are in contact with a temperature-controlled plate at a set-point temperature T * p of 5 • C. We image the droplets from the top with a dissecting microscope (Nikon SMZ80N). The smallest measured droplets have a radius of about 15 µm, the largest about 2.5 mm. In each image, we identify the center and radius of each droplet. We acquire images of the droplets over logarithmically spaced time intervals between 0.1 sec and 10 h, from the moment the first visually resolvable droplets appear. Using silicone substrates allows us to reduce the stiffness, and hence the number of nucleation sites respect to glass [38]. The static contact angle θ c is measured via side imaging, with a CMOS camera (Thorlabs, DCC3240M) and LED back-illumination, with a precision of 2 • . On all substrates droplets can be Representative snapshots of droplet nucleation, growth, and coalescence on a silicone substrate with elastic modulus E = 2 MPa are shown in Fig. 1. After an initial burst of nucleation, (Fig. 1a), the droplets grow with roughly uniform size, (Fig. 1b). After about one minute, the droplets start to come into contact and coalesce (Fig. 1c). New droplets nucleate in the gaps between larger ones, and the range of droplet sizes grows ( Fig. 1d-f).
Four stages of the condensation process emerge clearly when we plot the total number of droplets per unit area as a function of time, N (t), as shown in Fig. 2a (black line, left vertical axis). The nucleation stage lasts for the first second. It is characterized by a rapid increase in the number of droplets and it is labelled (i) in Fig. 2a. In the uniform growth stage, labelled (ii) in Fig. 2a, the number of droplets remains essentially fixed and the mean and maximum droplet radii increase, as shown in Fig. 2b. Throughout the nucleation and growth stages, the droplet size distribution remains unimodal, as shown by the red histogram in Fig. 2c. In the early coalescence stage, labelled (iii), the number of droplets per unit area steadily decreases (Fig. 2a), the droplets growth accelerates, as shown in Fig. 2b, and the size distribution becomes bimodal, as shown by the yellow histogram in Fig. 2c. In the late coalescence stage, starting after about 10 3 s, labelled (iv), the number of droplets decays more slowly than before (Fig. 2a), while the spread between the maximum and mean droplet radii widens, as shown in Fig. 2b, reflecting the broadening of the underlying size distribution (Fig. 2c, blue histogram). As the droplets distribution grows, the free area on the substrate decreases. This decay is quantified by the porosity, where the index i labels the i th droplet, A i = πR 2 i is its wetted area, with R i its time-dependent radius, and A tot is the total substrate area [39]. Experimentally, the porosity is calculated from the droplet coordinates and radii. Prior to the nucleation of droplets, for an empty surface, the porosity is equal to one. As the area covered by droplets grows in time, the porosity decays, as shown in Fig. 2a (grey line, red online, right vertical axis).
Scaling of the droplet number density. -We now evaluate the droplet size distributions. The probability density function n(s, t) represents the number of droplets of size s per substrate unit area, per unit size. The "size", proportional to the droplets mass, is defined as s = r 3 , where r is the radius and n has units of m −5 . During the late-stage scaling regime, n should adopt a scaling form. In particular, it is expected [18,19] that the intermediate portion of the distribution, excluding the tails of the smallest and the largest droplets, scales as n(s, t) ∼ [s/Σ(t)] −τ Σ(t) −θ , where θ is a trivial exponent, depending on the dimensionality of the system, τ is the polydispersity exponent and Σ(t) is the maximum droplet size at time t. The exponent θ must take the value 5/3 for 3-dimensional droplets on a 2-dimensional substrate, such that the dimensions of n and the scaling expression match. The exponent τ is expected to take a value of 19/12 [33]. The resulting power laws are indicated by dashed lines in the plots of our data. Further discussion is provided in Sect. E of the Supplementary Material. We analyze n both for experiments and simulations. In particular, we perform Kinetic Monte Carlo simulations [40][41][42][43] on a 1200×1200 square lattice with periodic boundary conditions and a constant water flux impinging onto the surface. The simulations account for droplet nucleation and growth as well as for merging events. A full description is provided in Sect. I of the Supplementary Material.
To compare our findings to the theoretical predictions, we plot the rescaled droplet number density, n(s, t)Σ θ , in Fig. 3. In this plot, the droplet sizes are normalized by the maximum droplet size observed at that time point, Σ(t). Thus, large droplet peaks line up at s/Σ ≈ 1 for all times. As time progresses, the small droplet peak becomes broader and has a maximum value close to the smallest resolvable size.
At large times, the distribution presents three distinct features: an intermediate self-similar range where n ∼ (s/Σ) −τ , a monodisperse bump describing the large droplets, and a tail describing the small droplets. Such features clearly appear in both our numerical and experimental results (Fig. 3). For all data the large-droplet cutoff emerges for s/Σ 10 −2 . The simulations show a power law over around four decades, in the intermediate scaling range, for 10 −6 < s/Σ < 10 −2 . In contrast, for the experimental data, the scaling range is limited to at most two decades, 10 −4 < s/Σ < 10 −2 , even for our latest time t = 1.47 10 4 sec. The small-droplet cutoff is much broader for the experiments and it covers roughly three decades, with the cross-over at s/Σ ≈ 10 −4 . The experiments cannot be further extended in time, since gravitational effects impact the droplet shape when the large droplets approach the capillary length, γ/ρg ≈ 2.6 mm.
Porosity and Droplet Number. -The scaling form of the distribution entails that the porosity and the droplet number exhibit a power-law decay [18,19]. According to the theory, in the scaling regime, the radius of the largest droplets should scale as R ∼ t ν with exponent ν = 1, and both the number of droplet per unit surface N , and the porosity p should evolve as a power law N ∼ p ∼ t −k [18,19], with exponent (2) The derivations are provided in Sect. F, G and H of the Supplementary Material. For θ = 5/3 and Blackman's prediction τ = τ * = 19/12 [33], we expect that k = k * = 1/4, where the asterisks denote the specific theoretical values. The temporal growth of the maximum droplet radius is shown in Fig. 2b. In the last time decade of the experiment, it follows the predicted power law with exponent ν = 1. Fig. 4 shows the average of the porosity (grey, red online, right vertical axis) and number of droplets per unit area (black, left vertical axis) averaged over 6 experiments. The error bars represent the standard deviation at each instant. Within the error bars, the decay rate of both the porosity and droplets number are compatible with the theoretical exponent −1/4 (dashed line), but deviations up to 20% remain possible [31,33]. Note that the exponent k is extremely sensitive to any variation of the exponent τ . A deviation ∆τ from the theoretical prediction τ * , would be enhanced by a factor of three in the deviation from k * , thus resulting in a twentyfold increase of relative deviations, ∆k/k * = 3∆τ /(1/4) = 12τ * ∆τ /τ * = 19∆τ /τ * . Moreover, the integral quantities of porosity and droplet number are easier to evaluate than the full droplet distribution.
Condensation for different surfaces and water fluxes. -In view of its robust scaling in the late time regime, we now use the porosity time decay to compare condensation for different surfaces and condensation fluxes. For the porosity the theory predicts that ( Here, Φ is the constant water flux, i.e. the water volume deposited on the substrate per unit surface per unit time, α = (π/3)(2 + cos θ c )(1 − cos θ c ) 2 /(sin θ c ) 3 is a geometrical factor accounting for the contact angle θ c , measured through the liquid phase, is the width of the scale separation between smallest and largest droplets; the non-dimensional sizes s p /Σ(t) and x p , where s p and x p are constants, represent indeed the cutoffs for small and large droplets respectively. Full details are provided in Sect. G of the Supplementary Material. For this discussion we note that, due to the small exponent θ − τ = 1/12, the ratio (s p /x p ) θ−τ in Eq. 3 can change by at most 30%, even for s p /x p differing by two orders of magnitude for two vastly different materials. Hence, it remains practically unvaried, and the impact of Φ/α on the time evolution of Σ(t) is expected to be the only noticeable parameter influencing the asymptotic evolution of the porosity. Fig. 5a shows the time evolution of the porosity for three different surfaces with the same temperature of the cooling plate, T * p = 5 • C, and the same contact angle up to experimental precision. The light and dark grey curves (red and blue online) correspond to 1 and 2 MPa silicone surfaces, respectively, while the black curve shows the porosity for fluor-silane coated glass. The two silicone surfaces behave similarly, with a monotonic decrease in the porosity. However, the softer surface (E = 1 MPa) is populated at a slower rate. Hence the drop in porosity occurs later, as one can observe for times around 100 s. The silanized glass surface behaves very differently in the early stages of condensation. It has a very high nucleation rate and is covered by tiny droplets almost immediately, such that the porosity drops to a very small value. Initially, the droplets are so small and so densely packed that they cannot be individually resolved ( Fig. 10 in Supplementary Material). Between 10 s and 100 s the droplets start to merge such that the areas in between can be discerned, and the porosity rises towards the values observed for the silicone surfaces. In the late-time scaling regime the surface is entirely covered by water droplets, and the flux Φ onto the droplets is solely determined by the cooling plate temperature, such that Eq. 3 predicts the same power law for the three systems. Despite dramatic difference in initial droplet nucleation and growth, at late times, t 1000 s, all data fall exactly on top of each other. The three surfaces do not only share the same scaling exponent, but also the same prefactor of the power law.
In Fig. 5b we compare condensation on four different types of substrates with different stiffness and contact angle θ c , and we also vary the vapour flux Φ by changing the temperature of the cooling plate T * p . The porosity of all data falls on a single scaling function when plotted as function of the maximum droplet radius. We interpret this very good, parameter-free data collapse as strong evidence for universality.
Conclusion. -We present a series of experiments and simulations where a time-constant uniform water vapour flux condenses on rigid cold surfaces. The emerging droplets patterns undergo four stages on their way to organize into a self-similar arrangement whose number densities feature non-equilibrium scaling (see Fig. 2): (i) a first wave of nucleation of droplets; (ii) uniform growth of roughly equally spaced and monodisperse droplets; (iii) early coalescence, releasing surface area formerly occupied by the first generation of droplets; and (iv) re-population of the gaps between droplets and emergence of a self-similar droplet pattern. In the self-similar regime the droplet number densities at different times admit a data collapse, Fig. 3. The scaling of the number densities implies a power law decay of the droplet number and the porosity, i.e. the area not covered by droplets. We showed here that substrates with vastly different surface properties evolve towards identical power laws with matching exponents and pre-factors, where different surface fluxes are fully accounted for by adopting the maximum droplet radius as a time variable (cf. Eq. (3)). These findings provide compelling evidence for universal scaling of the asymptotic self-similar regime of breath figures.
ACKNOWLEDGMENTS
We would like to express our gratitude to Eric Dufresne, in whose lab the present experiments have been performed, for his input and valuable suggestions on how to improve earlier versions of the manuscript. We are thankful to Rob Style, for his help with the design of the experimental setup, and to Kathryn Rosowski and Qin Xu for experimental support. We acknowledge Nicolas Bain, Daniel Dernbach and Robert Haase for insightful discussions, and we thank Martin Callies for feedback on the manuscript.
LS designed, performed and analyzed the experiments. FG and EAM performed the simulations. GP and KSM contributed to the design of experiments and the realization of the samples. LS and JV developed the theory and wrote the manuscript. All the authors provided feedback on the manuscript. A sketch of the experimental setup is displayed in Fig. 6. We induce the nucleation and growth of water droplets on a rigid substrate (sample), inside a condensation chamber (6×5×4 cm). The sample is placed in direct contact with a 4-mm thick aluminium plate-fin 6×5 cm. The plate is positioned on top of a 12 V fan, and acts like a heat sink. The temperature control of the condensation substrate is realized by controlling the temperature of the aluminium plate with a computer-regulated feedback mechanism. To this aim, a thermistor (NTC 30 kΩ, Amphenol Advanced Sensors) measures the temperature of the plate and sends the signal to a PID controller, programmed on an Arduino microcontroller. Both the thermistor and the fan are connected to the PID controller. The PID controller compares the measured temperature of the plate, T p , with the desired temperature, T * p , and varies the speed of the fan, until T p = T * p ± 0.1 • C. To guarantee a constant water deposition rate, the sample is placed inside a humidity chamber. A constant flux of saturated vapour is introduced through two openings on opposite sides of the chamber. To generate such a flux, a pump (Tetra APS50 Air Pump) insufflates a constant air flux at the bottom of a deionized water column (5 cm diameter × 50 cm height). The air bubbles raise through the column collecting saturated vapour. From the top of the column, they are conveyed into a second column, repeating the process. The humidity of the vapour flux at the exit of the second column is measured with a humidity sensor (393 -AM2302 Digital Temperature and Humidity Sensor, 5V, Adafruit), connected to a second Arduino microcontroller. The vapour flux is turned on before lowering the temperature of the plate. Within few seconds, the humidity increases up to the point where it reaches full saturation (100%) and the measure (which has a precision higher than 0.01%) does not change anymore. All our experiments are done in full saturation conditions of the air flux (100% humidity), with a constant temperature difference between the controlled lab environment (24 • C) and the aluminium plate (5 • C or 10 • C). We choose to change the plate temperature to vary the deposited water flux, as this provides the best experimental controllability and reproducibility, respect to changing the flow or the thickness of the substrate. The samples are imaged from the top with a Nikon SMZ80N microscope. Recording is done with a ThorLabs USB 3.0 Digital Camera. To improve the imaging, a LED light (ThorLabs MCWHL5) is shed through one of the objectives of the microscope and a 100 µl water droplet is deposited between the sample and the aluminium plate.
B. Sample fabrication
The condensation experiment is performed with substrates made of two different materials: silicon gel and hydrophobic glass. The silicone gel substrates are fabricated by spin-coating a layer of polydimethylsiloxane (PDMS) on top of a glass cover slip (Menzel-Gläser 24x50 mm, N. 1.5). For the PDMS preparation, we use a commercial binary mixture of Sylgard™184 Silicone Elastomer (Dow Corning, Base and Curing Agent). The two components are mixed with different mass ratios (4:1 and 10:1) to achieve different stiffness of the condensation substrates. The mixture is degassed in vacuum and spin-coated on top of the cover-slip for 1 min at 1000 rpm. The samples are then cured at 40 • C, for 7 days. The stiffness of the cured substrate is measured through the Young's elastic modulus E, by means of a compression test performed with a texture analyzer (Stable Microsystem TA.XT plus). The 4:1 and 10:1 substrates have an elastic modulus of E = 2 ± 0.18 MPa and E = 1 ± 0.2 MPa respectively. The glass substrates are made hydrophobic by silane vapour deposition, using the following procedure: a glass cover slip (Menzel-Gläser 24x50 mm N. 1.5) is exposed to UV radiation for 20 minutes. Subsequently, it is positioned inside a vacuum desiccator together with a 1 ml droplet of tridecafluoro -1,1,2,2 -tetrahydrooctyl trichlorosilane 97%. We generate a 5 mbar depression inside the desiccator and we isolate it from the air-void line. After 24 h, we remove the glass from the desiccator, we wash it with toluene and we vacuum dry it. We measure the static contact angle of the droplets θ c on the different substrates by means of side imaging with a CMOS Camera (Thorlabs, DCC3240M) and back-illumination with a 3.5 × 6 white LED (Edmundoptics). We find a static contact angle of the order of 90 • for silicone and fluor-silanized glass (95.5 • ± 2 • for 4:1 silicone, 94.3 • ± 2 • for 10:1 silicone and 92.2 • ± 2 • for fluor-silanized glass), while HMDS-silanized glass has a contact angle of 67 • ± 2 • . Given the slow speed of the condensation process, the droplets can be considered in their equilibrium shape during growth by direct water deposition. Though such shape may change during droplets merging, the relaxation time of a droplet formed by coalescence is very fast (below few seconds for the largest droplets in the late regime), hence droplets with a non-circular footprint are rarely captured in the images and statistically irrelevant for the analysis.
C. Image processing
The image processing is carried out with a combination of Fiji and Matlab scripts. In order to eliminate artifacts due to possible uneven illumination, we first subtract the background from the original images, by using the Fiji plugin 'Pseudo flat field correction' of the Jan Borcher's Biovoxxel toolbox [45]. The images are then converted from grey scale to black and white by means of the 'Enhance local contrast (CLAHE)' [46] or with the standard 'Adjust/Threshold' Fiji plugin. The droplets appear as black areas on a white background. The binarized images are visually inspected to correct the imperfections due to reflections on the surface of the large droplets. We use the 'Binary/Fill holes' plugin, to correct the images where white areas appear in the middle of the large droplets, due to such reflections. The segmentation of connected droplets is done with Michael Schmid's Fiji 'Adjustable Watershed' plugin [47], with an appropriately selected tolerance parameter and subsequently manually corrected when required. The detection of the radius and the coordinates of the centre of the droplets is performed with a Matlab selfdeveloped code. The static contact angle for the different substrates are measured with Stalder's Fiji Low Bond Axisymmetric Drop Shape Analysis plugin [48].
D. Data collection and statistical analysis
For the droplet number density, n(s, t), we consider only the droplets whose centre of mass resides inside the field of view. For each time point, the corresponding droplets size distribution is calculated based on at least 3000 droplets. To this aim, at each time, we collect several images, in different points of the sample (at least 10) .
E. Scaling arguments for size distribution
We revisit here the main aspects of the theory of breath figures [18,19]. Specifically, we address predictions of the timeasymptotic scaling of the droplet number density n(s, t), and its bearing to the asymptotic power-law decay of the droplet number and the porosity. Let r be the radius of the circular area covered by the droplet and let the 'size' s = r 3 be a proxy for the droplet volume. The size is related to the volume by a constant factor that depends of the wetting angle of the droplets, and hence on the surface properties.
By applying the Buckingham-Pi theorem [49], one can write where f (x, y) is a non-dimensional scaling function. Such a function depends on the dimensionless ratios x = s/Σ(t) and y = s/s 0 , where s 0 is a constant characterizing the smallest droplets in the system and Σ(t) the maximum droplet size at time t. By purely dimensional considerations, the exponent θ must be θ = (D + d)/D [19], where D is the dimensionality of the droplets, and d is the dimensionality of the substrate. In our case, for three-dimensional droplets (D = 3), on a two-dimensional substrate (d = 2), we have θ = 5/3. The theory for breath figures [18,19] asserts that the droplet arrangement in the late-time regime is self-similar and features a scaling range. Moreover, the tails of the distribution, encompassing the smallest and the largest droplets, lie outside of the scaling range [19,50]. Hence, the droplet number density can be expressed as wheref (s/Σ(t)) andĝ(s/s 0 ) are the so-called 'cutoff functions', describing the large and small droplets respectively, i.e. the tails of the distribution, while K is a constant, chosen in such a way thatf (x) = 1 for small arguments x, andĝ(y) = 1 for large y. Here, the polydispersity exponent τ must take a value 0 < τ < θ to cope with a finite droplet volume and droplet number at all times [24]. The scaling range of n(s, t) amounts to the interval of s/Σ(t) where both cutoff functions,f (x) andĝ(y), are constant. It increases over time, since Σ(t) grows in time, while s 0 is a constant.
F. Scaling arguments for droplet growth exponent
The radius of the largest droplet in the system, R, is expected to grow in time as R ∼ t ν , where ν is a constant. Different values have been reported in the literature for the exponent ν [17,26,27,36,51,52], depending on the mechanism governing the individual droplets growth (e.g. surface or bulk diffusion of the vapour, heat dissipation, etc.). Typical values of ν range from 1/9 to 1/3, in the monodisperse non-coalescing growth phase [27,36,52], and from 1/3 to 1 in the polydisperse growth phase [26,52]. In line with previous observations, our data show the values ν = 1/3 in the initial nucleation phase (called (i) in Fig. 2), and ν = 1 in the late-time regime (called (iv) in Fig. 2). Such values can be explained by purely geometrical considerations, under the following assumptions: 1. All droplets take the shape of spherical caps, such that their volume V is proportional to the cube of the radius R of the wetted area. In particular, V = αΣ, where Σ = R 3 is the size of the droplet, conserved when two droplets merge and α = (π/3)(2 + cos θ c )(1 − cos θ c ) 2 /(sin θ c ) 3 , with θ c the contact angle, measured through the liquid phase. Hence for hemispherical droplets, θ c = π/2 and α = 2π/3. 2. The water flux Φ (water volume per unit area per unit time) is constant in time and impinges uniformly on the surface. 3. In growth regime (i) there is a constant number N 0 of droplets that are roughly monodisperse and equally spaced such that each droplet covers an area A 0 . The water flux impinging on the surface is entirely and uniformly distributed between the droplets, due to surface diffusion. Hence, in the initial phase each droplet collects the flux impinging on a constant area A 0 such that Here s 0 and t 0 denote the initial droplet size and the initial time, respectively. The droplet size grows linear in time, and its radius with a 1/3 power law. 4. In the late stage, self-similar growth regime (iv) the surface is densely covered by droplets such that direct water deposition on the droplets dominates and each droplet collects the flux impinging on an area proportional to the surface that it covers, The droplet's volume grows asymptotically as t 3 while the radius grows linearly in time with a velocity πΦ/(3α), equal to Φ/2 for hemispherical droplets. Hence, in the late-regime we have R ∼ t ν and Σ ∼ t 3ν , with ν = 1.
G. Time decay of the porosity
In order to determine the time dependence of the porosity, we express the fraction a of the surface area covered by droplets as Cs d/D n(s, t) ds (12) where C = π. We then employ Eq. (5) to derive where K = KC, x = s/Σ and the cutoff functions are taken into account by an informed choice of the constant values s p and x p in the integration bounds. In the long-time limit the lower integration bound, s p /Σ, will approach zero and the surface area will be fully covered such that a = 1. Consequently, from which we derive By making use of Eqs. 14 and 16 , the porosity can then be written as We substitute Eq. (11), inside Eq. (17) and we derive where from which Thus the porosity decays in time as p ∼ t −k . For Blackman's prediction τ * = 19/12 [33] in combination with the values ν = 1 from geometrical considerations and θ = 5/3 from dimensional analysis, we find that p(t) ∼ t −1/4 . Note that the observed decay of the porosity implies that τ < θ. We conclude the analysis by taking a closer look at the prefactor of the power law. The ratio in the brackets amounts to the width of the scaling range, with cutoffs at the non-dimensional sizes s p /Σ(t) and x p in Fig. 3. For a given size Σ(t) of the largest droplet, the width of the scaling range for different systems is expected to differ mostly due to differences in the lower cutoff function [33]. Due to the exponent θ − τ = 1/12, even a difference of two decades for vastly different materials, would imply that the prefactor of the power law varies by at most 30%.
H. Time decay of the number of droplets
In view of Eq. (5) one can write the number of droplets per unit area of substrate, N (t), as Here s N and x N in the integration bounds are constants chosen in such a way to keep into account the cutoff functions. For τ < 1 the second term in the square bracket in Eq. (24) is sub-dominant, and the total droplet number is obtained by dividing the total area A tot of the surface by the area Σ d/D covered by a typical large droplet. In view of Fig. 3, this is at variance with the finding that small droplets dominate the droplet size distribution in the late-regime. Hence, in the late regime, τ > 1 and With θ = 1 − d/D and Σ = R 3 max one obtains that Hence, in dominant order the droplet number N (t) shows the same power-law decay as the porosity, namely N ∼ t −k . Its dependence as function of R max is provided in Fig. 7. However, the data analysis is complicated in this case because the power law suffers from a cross over, even in the large-time asymptotic regime (see Eq. (24)). Therefore, whenever possible, the analysis of the power law time decay of the porosity should be preferred as a way to assess scaling in the late regime, respect to the analysis of the number of droplets.
I. Kinetic Monte Carlo simulations
The numerical simulations are performed with a lattice Kinetic Monte Carlo algorithm [40][41][42] implemented in Python using object-oriented programming. The code used in this work is a direct adaptation of the code developed in [43] for the description of atoms aggregating in 3D clusters over surfaces. The simulations are performed in a non-dimensional fashion. In order to present the results in dimensional form, the appropriate length and time units are chosen in such a way that the numerical late time evolution of the largest droplet matches the experimental one (Fig. 9).
The computational domain consists of a square lattice of N c = 1200 × 1200 cells (the 'sites') with periodic boundary conditions. Each site can be 'empty' or 'occupied', i.e. completely filled with water. Each droplet consists of a set of contiguous occupied sites, symmetrically arranged around the droplet's center. Therefore, each droplets is a circle approximated by its staggered version, in perfect analogy with the pixel resolution of the experimental optical measurements. Initially, no droplets are present and all sites are empty. As time progresses, water is deposited and the droplets start to nucleate, grow and merge with each other. Time progression happens in a discrete fashion, assuming that, at each time t j an instantaneous 'event' (deposition of a discrete amount of water) occurs. The event is then followed by a waiting time, i.e. a time step, ∆t j , when nothing happens. Thus, the next time instant will be t j+1 = t j + ∆t j . Note that the time intervals ∆t j are not constant, but randomly chosen from a Poisson distribution, since the events are considered as rare and independent from each other [40][41][42]. The average of such a distribution is 1/k tot , where k tot =ΦA tot with A tot the total area of the substrate, namely the total number of sites andΦ the number of events (falling droplets) per unit time per unit surface, hence k tot represents the total frequency at which the events take place on average, i.e. the number of events per unit time. The flux Φ, i.e. the water volume deposited per unit area, can be determined as Φ =ΦV 0 , where V 0 is a constant representing the volume deposited in a single event. However, changing Φ is equivalent to a mere time rescaling [43] and does not affect the critical exponents. Therefore one can take bothΦ and V 0 equal to unity and choose the time and length units to achieve dimensional matching with both the size of the smallest droplet detected in the experiments and the experimental late time evolution of the largest droplet.
Once the time step to advance the system has been decided, the algorithm chooses the type of event that takes place. In particular, two types of events can occur: the nucleation of a new droplet on an empty site or the growth of an existing droplet due to direct water deposition. The type of event is randomly determined, at each time, based on the probability that either growth or nucleation occurs. Such probabilities are N c,occ /N c for droplet growth, and (N c − N c,occ )/N c for droplet nucleation, where N c,occ is the total number of occupied sites. The volume of deposited water at each time step is constant, analogous to the deposition of a droplet. At this point, once the type of event (droplet growth or nucleation) has been decided, the exact position where the water volume will fall is determined with a third random number, based on the probabilities associated to each existing droplet and nucleation site. In particular, if the event is a droplet's growth, each existing droplet i will have a probability N i /N c,occ to grow, where N i is the number of sites occupied by the droplet i. If the event is a nucleation, each free site will entail a probability 1/(N c − N c,occ ) to host the new droplet. Whenever one of these events leads to overlapping between droplets, i.e. occupation of the same site by more than one droplet, all the involved droplets are merged. The merging of droplets is assumed to be instantaneous and preserves the droplets' volume. A droplet resulting from a merging event has the same volume as the sum of the merging droplets and the center located at the center of mass of the system formed by the two merging droplets. The process is sequentially repeated until no more overlapping between droplets is present and each site is occupied at most by a single droplet. | 8,779.4 | 2021-10-02T00:00:00.000 | [
"Physics"
] |
A Model-Based Tool for Assessing the Impact of Land Use Change Scenarios on Flood Risk in Small-Scale River Systems—Part 1: Pre-Processing of Scenario Based Flood Characteristics for the Current State of Land Use
: Land use changes influence the water balance and often increase surface runoff. The resulting impacts on river flow, water level, and flood should be identified beforehand in the phase of spatial planning. In two consecutive papers, we develop a model-based decision support system for quantifying the hydrological and stream hydraulic impacts of land use changes. Part 1 presents the semi-automatic set-up of physically based hydrological and hydraulic models on the basis of geodata analysis for the current state. Appropriate hydrological model parameters for ungauged catchments are derived by a transfer from a calibrated model. In the regarded lowland river basins, parameters of surface and groundwater inflow turned out to be particularly important. While the calibration delivers very good to good model results for flow (E vol =2.4%, R = 0.84, NSE = 0.84), the model performance is good to satisfactory (E vol = − 9.6%, R = 0.88, NSE = 0.59) in a different river system parametrized with the transfer procedure. After transferring the concept to a larger area with various small rivers, the current state is analyzed by running simulations based on statistical rainfall scenarios. Results include watercourse section-specific capacities and excess volumes in case of flooding. The developed approach can relatively quickly generate physically reliable and spatially high-resolution results. Part 2 builds on the data generated in part 1 and presents the subsequent approach to assess hydrologic/hydrodynamic impacts of potential land use changes. receive special treatment to a certain extent, as their processing is relatively complex. Once the transect corrections are completed, all categories of calculation points including their attributes can be listed together and sorted in order to create the list of junctions. Now the invert elevations at the culvert and intersection nodes can be interpolated using the open cross-sections upstream and downstream. From the list of junctions, the list of conduits is created. Each conduit is assigned an inlet and outlet node and a unique ID. A flow restriction may only be applied to the conduits connecting the storm water disposals with the stream. If the permitting authorities have specified a diameter for the lower end of the storm sewer channel, then the diameter limits the flow. Otherwise, the flow limitation is realized via the approved peak discharge. primarily on flood characteristics, high flows are of particular interest here. These occur in the catchment area of the Schmarler Bach mainly in the summer months. In particular, the months of June and July 2017 exhibited the highest flows in the observation period. The events of mid/late July even led to local flooding of streets and cellars in the inner city of Rostock [31,32].
Background
Flooding is a natural and recurring phenomenon. It ensures fertile floodplains and therefore favors agriculture in river valleys. Besides, the use of rivers as transport routes for trade promoted human settlement along the waterways. However, for both reasons, land cultivation and water transport, rivers and streams have often been straightened [1]. In parallel, population growth is inevitably accompanied by increasing land sealing, which in turn accelerates surface runoff [2][3][4] at the expense of evaporation and infiltration. In Italy, for example, an average increment of 8.4% in soil sealing induced an average increase in surface runoff equal to 3.5% and 2.7% respectively for 20-and 200-year return periods [5]. Increased surface runoff, flow course shortening, or deformation and loss of retention space are drivers for raising peak flows and increased flood probabilities. These factors are superimposed by changing hydro-meteorological conditions due to climate [6][7][8]. Accordingly, responsible development of land use should also take the resulting impact on river runoff and flood probability into account. This requires a sound understanding of the hydrological and hydrodynamic processes in the regarded catchment and the affected river basin.
The term flood risk is always related to the probability or recurrence interval of a certain runoff or water table. Generally, these values can be derived via three ways: (1) Statistical analysis of historic time series.
(2) Statistical regionalization of flood characteristics. (3) Hydrologic modeling (if a water table is required, supplemented by hydrodynamic models).
Time series analysis requires the availability of monitoring data of flow and/or water table over a sufficient long observation period (10 a minimum, 30 a or more is better [9]). Since monitoring stations are maintenance-intensive and costly, those data are only available for a very limited number of rivers or river sections. Smaller streams and tributaries tend not to be surveyed at all.
To close this data gap, various procedures for regionalizing flood parameters are in use. Most of them are based on observed discharges in similar regions. Simple methods are related solely on the size of the catchment and assume the same discharge per area at the location with measurement and at the location without measurement. Further development of this is the multiple regression, which links several relevant basin parameters (e.g., basin size, slope, flow length, basin shape, soil, and geology parameters) to peak discharge. There are a number of other procedures, yet these will not be considered further here. Statistical regionalization methods are relatively simple to apply and require comparatively little time, which is why they are justifiably utilized in practice for certain questions.
The third option is to employ (regionalized) hydrologic models to predict runoff from ungauged watersheds, with the objective of relating various model parameters to the physical characteristics of the watershed [10,11]. Models used in this context are usually conceptual, lumped models, describing the runoff process in a simplified way, based on comparatively few model parameters. Influencing factors and thus significant model parameters differ from region to region and depend on the dominant hydrological processes in the respective catchment region and on the desired result (e.g., peak flows vs. average monthly or annual flow values). A study conducted in the Ivory Coast marked that land use and rainfall distribution over the year are important model parameters in the context of regionalization [12]. The role of precipitation was also highlighted by a study from Australia using a monthly water balance model, where the mean annual precipitation at different locations ranged from 600 mm to 2400 mm [13]. However, this shows that model results are also sensitive to the precipitation characteristics, which is actually an input variable.
Astonishingly, deterministic models with a clear conceptual link between physical conditions in the catchment and the resulting hydrologic processes have been rarely applied in ungauged systems, probably mainly due parametrization questions. However, namely process-oriented semi or fully distributed models should be well suitable for those situations provided that the physical data can be derived from geodata or remote sensing. This would allow for physically funded parameter transfer from modeling studies with calibration data. A corresponding study dedicated to flash floods achieved only a small decrease of performance of 10% by transferring calibrated model parameters to a new validation site [14].
None of the studies mentioned above consider river or stream hydraulics, which is of particular importance in the formation of flood flows. In particular, small rivers in cultivated or urbanized areas are modified by man-made structures such as culverts and pipelines, which have a significant influence on flow dynamics and water level. Leading back to the initial problem of analyzing future land use changes on stream hydraulics, deterministic models should also provide a hydrodynamic functionality, requiring additional physical data, i.e., river profiles and infrastructural data.
Meanwhile, in many parts of the world, the availability and quality of hydrologically and hydrodynamically relevant geodata (soil type, land use, DEM, groundwater levels, etc.) is very good. Setup and parametrization of physically based models directly based on these data should therefore be more and more possible.
However, this concept has a clear constraint: The application of those models, even if well parametrized, is hardly applicable by regional planners who are typically not modeling experts. When providing those models for regional planning purposes, they must be tailored for the envisaged group of end-users. A promising way to do so is the combination model setup, parametrization and preprocessing for the status quo with a simplified GIS-based analysis for land-use change scenarios.
Objectives and Structure of the Study
Summing up the arguments above, the overall objective of this study is twofold: (4) To develop a concept to setup and parametrize a deterministic distributed model based on available geodata. (5) To develop a simplified algorithm for analyzing land-use change scenarios that is based on the models developed but can be used by regional planning practitioners.
Following these targets, the study is separated into two papers. Part 1 is dedicated to model setup and parametrization and the determination of flood characteristics for the current state and thus forms the basis for the second part. The innovative approach of the method presented lies in the automated transfer of physical model parameters based on geodata for the use of spatially and temporally highly resolved deterministic rainfall runoff and stream models. The desired results can be generated comparatively fast but are, at the same time, physically validated. Part 2 will describe the developed simplified procedure for the rapid calculation of land use change effects on flood characteristics and its embedding in a GIS-based decision support system (reference to part 2).
Study Area and Data Used
The study area is located in the northeast of Germany and covers approximately 530 km 2 . It comprises the city of Rostock and its neighbouring municipalities (see Figure 1) and contains more than 1500 km of small tributaries that drain into the Warnow or directly into the Baltic Sea. In order to achieve a high spatial resolution in the model setup and to maintain an overview in the process, the study area was divided into several smaller catchments.
Although the catchments are located close to each other and are subject to very similar climatic conditions, they differ in some characteristics. For example, the landscape in the south-east is relatively hilly, while the catchments near the Baltic Sea are rather flat. The catchments within the city have a high proportion of sealed surfaces, while agricultural land use dominates in the surrounding municipalities.
For model calibration, the Schmarler Bach catchment was used ( Figure 2) as continuous flow and water level measurement data had already been collected here [15]. The 23 km 2 area has little gradient and is therefore one of the flat representatives (−1 m-30 m above sea level). A pumping station keeps the water level in the lower reaches below the level of the Baltic Sea. Approximately 34% of the area is partially impervious, which is due to urban use (residential area, traffic area, and industry/trade) [16]. The second largest share is arable land with 29 %. With intensive urban use, the number of storm water disposals increases. At Schmarler Bach, there are a total of 91 points, which is the largest number compared to the other model sites. Although the catchments are located close to each other and are subject to very similar climatic conditions, they differ in some characteristics. For example, the landscape in the south-east is relatively hilly, while the catchments near the Baltic Sea are rather flat. The catchments within the city have a high proportion of sealed surfaces, while agricultural land use dominates in the surrounding municipalities.
For model calibration, the Schmarler Bach catchment was used ( Figure 2) as continuous flow and water level measurement data had already been collected here [15]. The 23 km 2 area has little gradient and is therefore one of the flat representatives (−1 m-30 m above sea level). A pumping station keeps the water level in the lower reaches below the level of the Baltic Sea. Approximately 34% of the area is partially impervious, which is due to urban use (residential area, traffic area, and industry/trade) [16]. The second largest share is arable land with 29 %. With intensive urban use, the number of storm water disposals increases. At Schmarler Bach, there are a total of 91 points, which is the largest number compared to the other model sites.
The monitoring station is located in the southern branch of the stream network. Its catchment is about 12 km 2 in size and is already significantly influenced by urban use. The catchment of the Carbäk stream was used for testing the concept of parameter transfer based on geodata without additional calibration ( Figure 3). With its 42 km 2 , it is about twice as large as the Schmarler Bach catchment. Due to the large east-west exten- The monitoring station is located in the southern branch of the stream network. Its catchment is about 12 km 2 in size and is already significantly influenced by urban use.
The catchment of the Carbäk stream was used for testing the concept of parameter transfer based on geodata without additional calibration ( Figure 3). With its 42 km 2 , it is about twice as large as the Schmarler Bach catchment. Due to the large east-west extension, the surface elevations span between 0 m to 65 m above sea level and thus show a comparatively larger range. Differences can be noted in land use patterns: While arable land takes up more than 50% of the area, partial sealed uses are only represented by 18 %. Accordingly, there are fewer storm water disposals to count (44 in total). The catchment of the monitoring station is 33 km 2 in size and thus almost 3 times larger than the catchment of the monitoring station in the Schmarler Bach. Figure 4 compares the measured flows of the two monitoring stations and illustrates the daily sums of rainfall of the rain gauge "Uni Rostock Hy" (the University of Rostock, department of hydrology and applied meteorology). Schmarler Bach flows show high peaks in the summer months, especially in wet June and July 2017, which is due to intensive rainfall and the large proportion of sealed areas that provoke a high amount of direct (and fast) runoff. In contrast, the Carbäk shows the highest flows generally in winter and spring and also in the extraordinary wet month June/July 2017. This suggests that the source of high flows in the Carbäk catchment are different from those in the Schmarler Bach. Since the Carbäk catchment is intensively farmed and drained, the high flows can be attributed to agricultural tile drainage interflows. These occur in the stream when the surrounding soil is saturated, which is usually the case when more rain falls than evapotranspirates. As these interflows have to pass through the soil to enter the drainage network, they require more time compared to surface runoff, which results in a stretched, flattened course of discharges. Figure 4 compares the measured flows of the two monitoring stations and illustrates the daily sums of rainfall of the rain gauge "Uni Rostock Hy" (the University of Rostock, department of hydrology and applied meteorology). Schmarler Bach flows show high peaks in the summer months, especially in wet June and July 2017, which is due to intensive rainfall and the large proportion of sealed areas that provoke a high amount of direct (and fast) runoff. In contrast, the Carbäk shows the highest flows generally in winter and spring and also in the extraordinary wet month June/July 2017. This suggests that the source of high flows in the Carbäk catchment are different from those in the Schmarler Bach. Since the Carbäk catchment is intensively farmed and drained, the high flows can be attributed to agricultural tile drainage interflows. These occur in the stream when the surrounding soil is saturated, which is usually the case when more rain falls than evapotranspirates. As these interflows have to pass through the soil to enter the drainage network, they require more time compared to surface runoff, which results in a stretched, flattened course of discharges.
Data Processing and Modelling Software
The basis for further work is the homogenization of geodata, which was carried out using QGIS (version 3.10.2) [17,18]. The attributes of the homogenized geodata are further processed with the help of a spreadsheet program. Here, Microsoft (MS) Excel was used together with its Visual Basic for Applications (VBA) interface [19]. VBA contributes to automation and enables faster processing of repetitive tasks. A free alternative to MS Excel is LibreOffice Calc [20], which also provides a VBA interface, but with limited macro support.
Prior to actual model development, a thorough review of available modelling software tools was performed. There is a wide range of hydrologic models with different pros and cons (cf. [21]). For the purposes of this study, the software should fulfill the following criteria: • Freeware for wide transferability and applicability.
•
Combined representation of rainfall-runoff and hydrodynamic streamflow processes to avoid external coupling of different models.
•
Physically based, parameters widely derivable from geodata. • Sufficient spatial distribution, capable to allocate distinct land use changes in the regarded river basin. • Easy and automatable setup and parametrization of the model.
Namely, the required hydrodynamic functionality is rarely available. After a first screening, the combination of HEC-HMS [22] with HEC-RAS [23] and SWMM-UrbanEVA [24], an extension of the widely used software SWMM, were the most promising candidates. The UrbanEVA upgrade involves the implementation of vegetation-specific evapotranspiration and its reduction by a shading factor in the case of urban shading. A detailed description can be found in [16,24]. In a following detailed comparison, the decision was made in favor of SWMM-UrbanEVA.
SWMM (storm water management model [25]) was originally developed for the simulation and evaluation of storm runoff and sewer hydraulics in urban areas [26]. However, with the extension for evapotranspiration calculation, SWMM is very well suited for the
Data Processing and Modelling Software
The basis for further work is the homogenization of geodata, which was carried out using QGIS (version 3.10.2) [17,18]. The attributes of the homogenized geodata are further processed with the help of a spreadsheet program. Here, Microsoft (MS) Excel was used together with its Visual Basic for Applications (VBA) interface [19]. VBA contributes to automation and enables faster processing of repetitive tasks. A free alternative to MS Excel is LibreOffice Calc [20], which also provides a VBA interface, but with limited macro support.
Prior to actual model development, a thorough review of available modelling software tools was performed. There is a wide range of hydrologic models with different pros and cons (cf. [21]). For the purposes of this study, the software should fulfill the following criteria: • Freeware for wide transferability and applicability.
•
Combined representation of rainfall-runoff and hydrodynamic streamflow processes to avoid external coupling of different models.
•
Physically based, parameters widely derivable from geodata. • Sufficient spatial distribution, capable to allocate distinct land use changes in the regarded river basin. • Easy and automatable setup and parametrization of the model.
Namely, the required hydrodynamic functionality is rarely available. After a first screening, the combination of HEC-HMS [22] with HEC-RAS [23] and SWMM-UrbanEVA [24], an extension of the widely used software SWMM, were the most promising candidates. The UrbanEVA upgrade involves the implementation of vegetation-specific evapotranspiration and its reduction by a shading factor in the case of urban shading. A detailed description can be found in [16,24]. In a following detailed comparison, the decision was made in favor of SWMM-UrbanEVA.
SWMM (storm water management model [25]) was originally developed for the simulation and evaluation of storm runoff and sewer hydraulics in urban areas [26]. However, with the extension for evapotranspiration calculation, SWMM is very well suited for the simulation of near-natural catchments outside urban areas [16]. The calculation of water balance variables and streamflow is largely physically based. The SWMM input file is a simple text file that can be opened, read, and modified in any text editor, which facilitates an automated model generation. One of the biggest advantages of SWMM is that it combines both a hydrological rainfall-runoff model and a hydrodynamic drainage model in one software, which makes the numerical calculation very effective and stable, since no external coupling is needed.
The General Concept
In order to obtain flood characteristics for the actual state of land use, a method was developed that consists of several steps, each involving the use of different software tools ( Figure 5). The individual steps are described in detail in the following subsections. simulation of near-natural catchments outside urban areas [16]. The calculation of water balance variables and streamflow is largely physically based. The SWMM input file is a simple text file that can be opened, read, and modified in any text editor, which facilitates an automated model generation. One of the biggest advantages of SWMM is that it combines both a hydrological rainfall-runoff model and a hydrodynamic drainage model in one software, which makes the numerical calculation very effective and stable, since no external coupling is needed.
The General Concept
In order to obtain flood characteristics for the actual state of land use, a method was developed that consists of several steps, each involving the use of different software tools ( Figure 5). The individual steps are described in detail in the following subsections.
Homogenization of Available Geodata
In the first step, geodata are homogenized so that uniform datasets without gaps are available for the entire study area. The necessary sub-steps for this were carried out with QGIS. Table 1 provides an overview of the data used and the attributes derived from it.
Homogenization of Available Geodata
In the first step, geodata are homogenized so that uniform datasets without gaps are available for the entire study area. The necessary sub-steps for this were carried out with QGIS. Table 1 provides an overview of the data used and the attributes derived from it. For the construction of the hydrodynamic stream model, mainly vector data in the form of lines are used. These include open channels, pipelines, and culverts ( Figure 6). Points are generated at certain positions on these lines, which later become the calculation nodes (or junctions) in SWMM. Cross sections (also called transects in SWMM) of the open channels were generated every 50 m on the basis of the DEM with a cell size of 20 cm. The high spatial resolution thus enables the recording of smaller streams with a width of less than 2 m. When deriving cross profiles using the DEM, it should be noted that the lowest point represents the water level and not the actual invert, if water is present. Since we are interested in flood forecast and thus in high water levels, and the deviation of the absolute water levels in the upper layer of the trapezoidal or parabolic cross-sections is small (<10 cm), this inaccuracy is negligible here.
In addition to the points of the watercourse network, the storm water disposal points are added, which represent the last point of the storm sewer network before the rainwater enters the stream. With the information on the diameter and/or the maximum permissible discharge, the direct runoff from the linked areas can be throttled during the simulation. In this way, the storm sewer network does not have to be included in detail. In the end, five categories of points are produced from which the hydrodynamic model is built: Cross section (open channel), pipe and culvert points, intersection points, and storm water disposal points. They are all assigned a unique ID composed of the hierarchical 12-digit watercourse identification number (WIN) in conjunction with the chainage (e.g., 492000000000_4847.0). In addition to the points of the watercourse network, the storm water disposal points are added, which represent the last point of the storm sewer network before the rainwater enters the stream. With the information on the diameter and/or the maximum permissible discharge, the direct runoff from the linked areas can be throttled during the simulation. In this way, the storm sewer network does not have to be included in detail. In the end, five categories of points are produced from which the hydrodynamic model is built: Cross section (open channel), pipe and culvert points, intersection points, and storm water disposal points. They are all assigned a unique ID composed of the hierarchical 12-digit watercourse identification number (WIN) in conjunction with the chainage (e.g., 492000000000_4847.0).
For the rainfall-runoff model, the subcatchments are generated on the basis of the surface subcatchments of the 50 m-stream segments. Since the spatial resolution is quite high, they have to be generalized to save computing time during the simulation. Therefore, the subcatchments of the 50 m-stream segments are accumulated in such a way that new subcatchments start whenever two streams meet or rainwater is discharged from the storm sewer network. Each generated subcatchment is assigned an outlet, which serves to exchange the simulated water volumes between the rainfall-runoff model and the stream. Information on the mean groundwater level is also required for the model construction, which is derived from the groundwater isohypses or from the corresponding interpolated raster map, respectively. The subcatchments are then subdivided according to 13 land use classes ( Table 2). For the rainfall-runoff model, the subcatchments are generated on the basis of the surface subcatchments of the 50 m-stream segments. Since the spatial resolution is quite high, they have to be generalized to save computing time during the simulation. Therefore, the subcatchments of the 50 m-stream segments are accumulated in such a way that new subcatchments start whenever two streams meet or rainwater is discharged from the storm sewer network. Each generated subcatchment is assigned an outlet, which serves to exchange the simulated water volumes between the rainfall-runoff model and the stream. Information on the mean groundwater level is also required for the model construction, which is derived from the groundwater isohypses or from the corresponding interpolated raster map, respectively. The subcatchments are then subdivided according to 13 land use classes ( Table 2). For the intersected subcatchments, mean values of ground height, slope, soil attributes), and degree of sealing are calculated. The point data for the hydrodynamic flow model and the area-based data for the rainfall-runoff model are processed further using Excel and VBA. receive special treatment to a certain extent, as their processing is relatively complex. Once the transect corrections are completed, all categories of calculation points including their attributes can be listed together and sorted in order to create the list of junctions. Now the invert elevations at the culvert and intersection nodes can be interpolated using the open cross-sections upstream and downstream. From the list of junctions, the list of conduits is created. Each conduit is assigned an inlet and outlet node and a unique ID. A flow restriction may only be applied to the conduits connecting the storm water disposals with the stream. If the permitting authorities have specified a diameter for the lower end of the storm sewer channel, then the diameter limits the flow. Otherwise, the flow limitation is realized via the approved peak discharge.
Runoff and a Hydrodynamic Stream Model
The computation points of the hydrodynamic stream system are loaded into an Excel table and sorted according to their WIN and chainage. After assigning the node properties ( Figure 7, attributes = white boxes), the cross sections of the open channels (transects) receive special treatment to a certain extent, as their processing is relatively complex. Once the transect corrections are completed, all categories of calculation points including their attributes can be listed together and sorted in order to create the list of junctions. Now the invert elevations at the culvert and intersection nodes can be interpolated using the open cross-sections upstream and downstream. From the list of junctions, the list of conduits is created. Each conduit is assigned an inlet and outlet node and a unique ID. A flow restriction may only be applied to the conduits connecting the storm water disposals with the stream. If the permitting authorities have specified a diameter for the lower end of the storm sewer channel, then the diameter limits the flow. Otherwise, the flow limitation is realized via the approved peak discharge. Pipeline routes are designed depending on a specified minimum gradient and a minimum cover with soil, beginning with a depth of 2 m below ground.
For the hydrological rainfall-runoff model, a large part of the work has already been done in QGIS. The output is a large attribute table in which the properties of each subcatchment (surface, land use, soil, and aquifer properties, inlet node of hydraulic network) are stored. The necessary VBA steps now consist of copying the values under the appropriate SWMM headings and formatting them in a software-readable format.
Model Setup, Calibration, Parameter Transfer and Validation
The catchment and stream model setup was developed and tested in a case study using the Schmarler Bach site. The combined model was consistently built up on the basis of homogenized geodata using VBA macros (VBA-Visual Basic for Applications) to automate the process. By splitting the river course in fairly short sections of about 50 to 100 m, spatially high-resolution flood characteristics (maximum flow, maximum head, maximum capacity, etc.) can be provided for a relatively large area.
After setup, a detailed calibration was performed on the basis of continuous monitoring data of flow and water level. Calibration methods and their fields of application are presented and discussed in [16]. Table 3 presents performance criteria applied to check the model accuracy with regard to the stream flow. It has been supplemented by the peak error E peak , which represents the relative deviation of the simulated from the observed maximum value of a specific peak flow event.
Mean absolute error Nash Sutcliffe efficiency Peak error E peak E Peak = Q calc Q obs ·100 Q calc = calculated flow, Q obs = observed flow; obs = measured value (observed); calc = calculated value; Indices: i = location, t = time, n = number of measurement data.
In the process, physical model parameters were derived from geodata and adjusted to obtain the best model fit regarding stream flow. In a subsequent step, the resulting correlations between geodata and model parameters were transferred to another monitored river basin, the Carbäk catchment, and validated with measured flow data. This way the general validity of the calibrated model parameters is checked, and it is simultaneously tested whether the transfer of largely physically based model parameters is fundamentally satisfactory-despite the different territorial characteristics. After assessing the applicability of this modelling concept, the method was transferred to other river basins in the area without monitoring data.
The parametrized models are finally applied to simulate precipitation scenarios of defined duration and return period in order to generate flood characteristics for the current state of land use. The flood-relevant return period is related to the predominant land uses in the study area and corresponds to the demanded protection level. For heterogeneous land use, this requires defining different return periods and running the model with the appropriate precipitation data.
Selection of Statistical Rainfall Events
When choosing statistical rainfall events, it is first necessary to consider which risk classes are present in the study area. The risk class in turn depends on the predominant land use. For the area of the Hanseatic City of Rostock, assignments of protection levels (return period) to land use classes have already been made (Table 4). Within the framework of this study, these were as well transferred to the surrounding rural district. To determine hydraulic parameters such as statistical flows and water levels as well as profile capacities, simulations were carried out on the basis of statistical precipitation events. Their return periods were selected according to Table 4, whereby a return period of 50 a was additionally taken into account. The duration of the decisive (worst) precipitation event depends primarily on the size of the subcatchment and the corresponding flow length, i.e., the smaller the catchment, the shorter (and at the same time more intense) the decisive rainfall event. Here, the duration categories 1 h, 3 h, 6 h, 9 h, and 12 h were applied and combined with the return periods to generate 18 precipitation scenarios (Table 5). The precipitation amounts were retrieved from the heavy rainfall regionalization (German abbreviation: KOSTRA atlas) of the German Weather Service [30]. The KOSTRA atlas provides raster data on precipitation amounts and intensities per area for Germany as a function of duration D and annuality T (return period). The data are available in an 8.5 km × 8.5 km grid. Each model site is uniformly over-rained, i.e., one representative cell is assigned to each catchment. If a catchment is covered by two or more cells in equal proportions, the cell with the highest precipitation amounts is used.
Since there is usually a clear intensity variation for short durations, the intensity course was statistically determined using the long-term rain data of the monitoring station in Warnemünde (central north of the study area). The data have a temporal resolution of 5 min and were recorded by the German Weather Service. The characteristic precipitation pattern for the respective rainfall duration is obtained by normalising the measured natural rain events of the same duration, which is achieved by temporal centring of the 5 min peak intervals [29].
The application of design rain events in scenario simulation, selected based on stipulated flood reoccurrence intervals, is a pragmatic choice, typically applied in urban hydrology. There is a tendency where the return period of the flood or peak flow is smaller than that of the initializing rainfall event. This way, the choice is "on the safe side".
Initial Condition
Before the scenario simulations of the different rainfall events can be started, some preliminary work is necessary. Here, the generation of a start condition on which the model rainfall is based is of particular importance. This refers to all reservoir levels in the catchment, i.e., the water level above the terrain, the proportion of the soil pores filled with water, the groundwater level, and the water level in the stream network. For this purpose, the model is run with monthly average evaporation and precipitation data in order to generate a so-called hot start file at the end of the simulation. The final condition of this pre-simulation then forms the start condition for the scenario simulation. In this case, a condition was chosen that leads to average flows in the watercourse. Table 6 presents important model parameters for the dominant processes in the study area. It was the intention to reduce the number of individually calibrated parameters to a minimum and to assign as much as possible parameters directly based on geodata information. In particular, the parameters derived from high spatial resolution geodata, such as the soil maps, were taken as given. For example, infiltration-relevant physical soil properties, such as conductivities, porosity, wilting point, and field capacity, were determined based on the soil type. The most important hydrological processes affecting streamflow are surface runoff, which is responsible for peak flows, and groundwater inflow, which is the base or starting point for peak flows. Therefore, the most effort was put into the calibration of these processes. Groundwater flow is designed to simulate near-surface agricultural tile drainage. Strictly speaking, it imitates interflow. The threshold water table elevation controls the extent to which water-level groundwater inflow to the stream occurs. It was assumed (or calibrated) that the drainage pipes are on average 1.2 m below ground level. In dealing with the material properties, borehole profiles were surveyed. Many of these contained a near-surface aquifer with an underlying impounding boulder clay layer, which is typical for the northeastern German lowlands. As part of a generalization, the material properties were assumed to be uniform for the entire study area and checked against literature values.
Parameterization
When calibrating surface runoff, the roughness, detention storage, and flow length (=area size/width) play a role. The latter was derived as a function of the area size to ensure parameter transfer. For the former, a distinction was made between sealed and pervious surface portions.
However, soil infiltration and evapotranspiration have an indirect influence on groundwater flow, since they control how much water reaches the groundwater and thus fill the reservoir. Further details can be found in [16]. Table 7 presents the corresponding error measures and performance criteria. The visual impression shows a good to very good match between the measured and simulated values. The volume error (E vol ) of 2.4% is very small, as is the mean absolute error (MAE) of 0.03 m 3 s −1 . The dynamics are well reproduced (correlation coefficient R = 0.84) and the coverage of simulated and measured flows (Nash-Sutcliffe efficiency NSE = 0.84) is overall in the very good range. However, since the focus of this work is primarily on flood characteristics, high flows are of particular interest here. These occur in the catchment area of the Schmarler Bach mainly in the summer months. In particular, the months of June and July 2017 exhibited the highest flows in the observation period. The events of mid/late July even led to local flooding of streets and cellars in the inner city of Rostock [31,32]. Table 7 presents the corresponding error measures and performance criteria. The visual impression shows a good to very good match between the measured and simulated values. The volume error (Evol) of 2.4% is very small, as is the mean absolute error (MAE) of 0.03 m 3 s −1 . The dynamics are well reproduced (correlation coefficient R = 0.84) and the coverage of simulated and measured flows (Nash-Sutcliffe efficiency NSE = 0.84) is overall in the very good range. However, since the focus of this work is primarily on flood characteristics, high flows are of particular interest here. These occur in the catchment area of the Schmarler Bach mainly in the summer months. In particular, the months of June and July 2017 exhibited the highest flows in the observation period. The events of mid/late July even led to local flooding of streets and cellars in the inner city of Rostock [31,32]. While the model apparently reproduces the more frequent, smaller rainfall events very well, there are nevertheless differences between the observed and simulated peak While the model apparently reproduces the more frequent, smaller rainfall events very well, there are nevertheless differences between the observed and simulated peak flows for the larger events (Figure 8, diagram b and Table 8). Events 1, 2, and 4 only deviate by a maximum of 10% from those measured, but event 3 (20 July 2017) shows a significant difference as it is more than three times as large as the observed maximum value. Here, it can be assumed that the precipitation centre was directly above the rain gauge (7 km south of the Schmarler Bach) and the catchment itself was located rather on the edge of the rain field at that time. In fact, heavy rainfall events are often short and very localised, especially in urban areas, which was also reflected in the data of different rain gauges in the city of Rostock (cf. Figure 10). The duration of the rain event is considered relatively short (Table 9) and reinforces the thesis. For this reason, the event of 20 July 2017 is classified as less relevant for the Schmarler Bach. Table 8. Peak error of the four largest flows in the observation period at the monitoring station in the Schmarler Bach. Figure 9 shows the simulated hydrographs with SWMM-UrbanEVA and the observed stream flows at the monitoring station of the validation site "Carbäk". Table 10 lists the corresponding error measures and performance criteria. While the cumulative flows in the Schmarler Bach are only slightly too low (E vol = 2.4%), they are 9.6% too high in the Carbäk. The MAE is also higher by 0.013 m 3 s −1 . However, the dynamics of the flows are reproduced well by the model (R = 0.88). Nevertheless, the individual observed values are less well-met overall compared to the Schmarler Bach. Thus, the Nash-Sutcliffe efficiency coefficient is at the upper (good) edge of the satisfactory range (NSE = 0.59). the corresponding error measures and performance criteria. While the cumulative flows in the Schmarler Bach are only slightly too low (Evol = 2.4%), they are 9.6% too high in the Carbäk. The MAE is also higher by 0.013 m 3 s −1 . However, the dynamics of the flows are reproduced well by the model (R = 0.88). Nevertheless, the individual observed values are less well-met overall compared to the Schmarler Bach. Thus, the Nash-Sutcliffe efficiency coefficient is at the upper (good) edge of the satisfactory range (NSE = 0.59). Looking more closely at the events of June/July 2017 (Figure 9b; Table 11), an overestimation of flows is noticeable here for the extreme events. Events 1 and 2 were overestimated by 26% and 27% respectively, whereby a data gap is to be found for event 2 during the increase in flow. The observed flows therefore probably do not represent the maximum value. As for the Schmarler Bach site, the rainfall event of 20.07.2017 (no. 3) is also classified as not relevant for the Carbäk site. Here, however, it leads to a significant increase in the base flow and thus influences the subsequent event of 25.07.2017 (no. 4). Table 11. Peak error of the four largest flows in the observation period at the monitoring station in the Carbäk stream.
Error Discussion
The quality of the results depends to a large extent on the input data. Therefore, important input variables and other possible sources of error are discussed here: The input precipitation is the crucial input variable and has a decisive influence on the model result. In both cases, Schmarler Bach and Carbäk, the input precipitation was measured outside the model sites, i.e., approximately 7 km south of the Schmarler Bach and 9 km west in the case of the Carbäk, respectively. As mentioned above, there are several precipitation gauges in the study area-but not all of them are set up professionally or they are at least positioned very differently (e.g., on top of a building, underneath a tree, next to a building). Due to the therefore very different systematic measurement error (especially wind error), the measured data are not directly comparable. The only measurement series that is available without gaps in a high temporal resolution (5 min) and could be corrected for the systematic measurement error is the rain gauge "Uni Rostock Hy" of the Department of Hydrology and Applied Meteorology. Therefore, the rain gauge series was applied to the entire study area. However, this does not mean that the measurement series is equally representative for every location in the study area. In particular, heavy rain cells appear in a very localized manner and intensify or weaken significantly along their path. This is exemplified by Figure 10, which illustrates the event on 20 July 2017 (event no. 3) for the different gauges. The gauge "Uni Rostock Hy", which was used for both model sites, shows the highest measured rainfall intensities, while others closer to the model sites measured less precipitation. Whether this is a consequence of the measurement error cannot be clarified. However, especially against the background of the observed flows, it is very likely that less precipitation actually fell in the two model domains on 20 July. • Storm water disposals. With regard to the maximum flows, the throttling via the discharge points plays a significant role. It is possible that not all existing storm water disposals were included in the model setup, but only the officially documented ones. In addition, the diameters of the discharge pipes within the city limits are less well known, so that the throttling was carried out almost exclusively via the approved maximum permissible discharge. In the case that the approved discharge is greater than the actual possible discharge due to existing diameters, it comes to an overestimation of peak flows. • Size of subcatchments. Furthermore, the size of the generated subcatchments might affect the resulting maximum flows. A comparison of the two model sites shows that the subcatchments of the Schmarler Bach are, on average, 0.036 km 2 in size, while the subcatchments of the Carbäk are a little larger (0.051 km 2 on average). In the case of the Carbäk, the overestimation of runoff peaks of the individual drainage units could be explained by the retention function of small-scale hydrological structures (runoff barriers, small inner basins), which cannot be sufficiently taken into account by the model in large subcatchments.
• Measured flows. The measured flows themselves can also be subject to errors. Particularly high flows often have to be extrapolated and are usually not verified by comparative multipoint • Storm water disposals.
With regard to the maximum flows, the throttling via the discharge points plays a significant role. It is possible that not all existing storm water disposals were included in the model setup, but only the officially documented ones. In addition, the diameters of the discharge pipes within the city limits are less well known, so that the throttling was carried out almost exclusively via the approved maximum permissible discharge. In the case that the approved discharge is greater than the actual possible discharge due to existing diameters, it comes to an overestimation of peak flows.
Furthermore, the size of the generated subcatchments might affect the resulting maximum flows. A comparison of the two model sites shows that the subcatchments of the Schmarler Bach are, on average, 0.036 km 2 in size, while the subcatchments of the Carbäk are a little larger (0.051 km 2 on average). In the case of the Carbäk, the overestimation of runoff peaks of the individual drainage units could be explained by the retention function of small-scale hydrological structures (runoff barriers, small inner basins), which cannot be sufficiently taken into account by the model in large subcatchments.
The measured flows themselves can also be subject to errors. Particularly high flows often have to be extrapolated and are usually not verified by comparative multipoint measurements. In the case of the Carbäk monitoring gauge, an ultrasonic doppler flow meter was used to continuously measure the water level and flow velocity in order to calculate the flow rates from the two parameters. Since the device only measures the flow velocity in the central lamella, a calibration function was set up based on regular comparative manual multi-point measurements to obtain the average flow velocity of the complete cross-section. This way, flow rates of up to 0.6 m 3 s −1 are confirmed by manual measurements. The highest flows recorded by the continuously measuring device in the timespan June/July 2017 are 0.8 m 3 s −1 and thus lie in the extrapolated range of flows. However, since the velocity recorded in the central lamella by the measuring device and the mean profile velocity manually measured have a very strong linear correlation (R = 0.99), the potential error caused by extrapolation is classified as rather small.
Initial Condition
As a starting condition, a state was chosen that leads to average flows in the watercourse. Under the given climatic conditions (highest mean flows in January/February, lowest mean flows in June), such a state arises in March/April, which is why 31 March was chosen as starting point. In order to ensure that a particularly wet or dry month is not picked at random, monthly mean values for the period 2007 to 2017 were calculated for the climate data evaporation and precipitation. The model was initialized with these average data ( Figure 11) until the annual course of the flows did not change anymore, which was the case after 2 years. In this way, a hot start file was created for the (average) 31 March, in which the status of all subcatchments, junctions, and conduits is stored. With the hot start file and the introduced model rain, the scenario simulation can now be started.
FOR PEER REVIEW 20 of 25
Initial Condition
As a starting condition, a state was chosen that leads to average flows in the watercourse. Under the given climatic conditions (highest mean flows in January/February, lowest mean flows in June), such a state arises in March/April, which is why 31 March was chosen as starting point. In order to ensure that a particularly wet or dry month is not picked at random, monthly mean values for the period 2007 to 2017 were calculated for the climate data evaporation and precipitation. The model was initialized with these average data (Figure 11) until the annual course of the flows did not change anymore, which was the case after 2 years. In this way, a hot start file was created for the (average) 31 March, in which the status of all subcatchments, junctions, and conduits is stored. With the hot start file and the introduced model rain, the scenario simulation can now be started. Figure 11. Comparison of the simulated flows (m 3 s −1 ) based on monthly average evaporation and precipitation data and the observed flows using the example of the Carbäk stream. Figure 12 shows the applied intensity course for the 1-h event. The centering of the maximum volume intervals results in a clear intensity course with the highest value during the 35 min interval, in which almost a quarter of the total rain falls. By multiplying the percentages with the total rainfall volume from the KOSTRA atlas, the amount for each interval will be attained. Figure 12 shows the applied intensity course for the 1-h event. The centering of the maximum volume intervals results in a clear intensity course with the highest value during the 35 min interval, in which almost a quarter of the total rain falls. By multiplying the percentages with the total rainfall volume from the KOSTRA atlas, the amount for each interval will be attained. Figure 11. Comparison of the simulated flows (m 3 s −1 ) based on monthly average evaporation precipitation data and the observed flows using the example of the Carbäk stream. Figure 12 shows the applied intensity course for the 1-h event. The centering o maximum volume intervals results in a clear intensity course with the highest value ing the 35 min interval, in which almost a quarter of the total rain falls. By multiplyin percentages with the total rainfall volume from the KOSTRA atlas, the amount for interval will be attained. Since the intensity variability is less prominent for longer durations, a block rain was assumed for the durations ≥ 3 h, i.e., the total amount of precipitation is distributed evenly over the 5-min intervals.
Flood Characteristics for the Current State of Land Use
Once the models are set up, they can be used to generate a wide range of results. Some of these are listed in Table 12. In the context of this work, the focus was primarily on determining the extent to which the watercourses are already at load during defined statistical rainfall events, or how much capacity is still available before flooding sets in. If flooding occurs, it is important to know how much volume will flow out (Table 12 max_Volume_stored_ponded), so that (decentralized) retention measures can be planned if necessary. For the planning of the development of new sites and the associated storm water discharges, these data and information must be available. Figure 13 illustrates the free capacities in m 3 s −1 of the 50 m segments of the Schmarler Bach system at the example of a 1 h rain event with a 100 a return period. Values smaller than zero (dark red) indicate that the segment is already overloaded and overflowing. This is particularly critical when it affects vulnerable land uses and their infrastructural facilities that should not be flooded during a 100-year event, as is the case for example in the northwest of the area. Measures should be introduced here to reduce peak flows. As well, redensification should only be approved if the proportions of the water balance variables are not shifted towards intensification of surface runoff at the expense of infiltration and evaporation.
R PEER REVIEW 22 of 25 Figure 13. Q_free-Flow rate (m 3 s −1 ) that would additionally fit into the cross profile at maximum flow rate regarding a rainfall event of 1 h duration and 100 a return period.
Conclusions and Outlook
The present study shows how robust hydrological/hydraulic models can be set up for small rivers relatively quickly on the basis of geodata and parameter transfer. These can be used to generate spatially highly resolved information of flow and water level for flood risk analysis based on statistical rainfall scenarios.
In the course of parameterization and calibration, the surface runoff and the groundwater interflow turned out to be the most influential processes regarding stream flow. Groundwater parameters, such as conductivities, porosities, etc., were adjusted in the process of calibration and then globally applied to the entire area. A spatially higher resolution would be conceivable, but this would require a very good knowledge of the subsur- Figure 13. Q_free-Flow rate (m 3 s −1 ) that would additionally fit into the cross profile at maximum flow rate regarding a rainfall event of 1 h duration and 100 a return period.
Conclusions and Outlook
The present study shows how robust hydrological/hydraulic models can be set up for small rivers relatively quickly on the basis of geodata and parameter transfer. These can be used to generate spatially highly resolved information of flow and water level for flood risk analysis based on statistical rainfall scenarios.
In the course of parameterization and calibration, the surface runoff and the groundwater interflow turned out to be the most influential processes regarding stream flow.
Groundwater parameters, such as conductivities, porosities, etc., were adjusted in the process of calibration and then globally applied to the entire area. A spatially higher resolution would be conceivable, but this would require a very good knowledge of the subsurface layers or involve a complex spatial interpolation. With respect to surface runoff, flow length, roughness, and detention storage in particular were subject to calibration. If the methods were transferred to differing areas, these parameters as well as groundwater parameters would have to be recalibrated. With respect to detention storage, it would also be possible to specify it individually for each subcatchment based on a DEM analysis. This way it would not have been necessary to calibrate the parameter. The average slope, the degree of sealing, and the soil parameters were also derived directly from geodata without calibration. In general, the higher the spatial resolution of the model parameters, the less sense it makes to calibrate individual values of them, since individual small subcatchments sometimes have hardly any visible effect on the results at the observation point. A spatially high model resolution is therefore only recommended with qualitatively good data.
In the study area, the parameter transfer to the validation site only led to a slight loss of model accuracy. Even better model results could have been expected regarding the impact of the suboptimal position of the rain gauge. Provided there is a comparably good geodata situation, the approach offers a good chance to set up fairly reliable models, including hydrodynamic processes, namely for the numerous small rivers without any monitoring.
The construction of river section profiles from laser scanning data introduces a certain error, since only the profile above the water level can be sampled. Still, for small rivers with small water depth, the method seems to be sufficiently exact, since the investigated statistical events create a multiple times higher flow than the flow filling the profile at the scanning date. For larger streams and significant water depth, error compensation strategies could be advisable, like assuming mean flow conditions at the scanning date and subtracting it from the simulated flows to achieve even better results regarding water levels.
For the purpose of flood risk analyses, the main advantages compared to a simple GIS-based flood regionalization are (i) the physically integrated and highly distributed land use and (ii) the inclusion of stream hydrodynamics. This way, even small-scale land-use changes can be directly incorporated and analyzed. The hydrodynamic functionality not only provides water level but can also be used in targeted development of the river system and its infrastructures.
With its extension UrbanEVA, SWMM also provides the functionality of a full water balance model. Accordingly, the model can be used as well to quantify alterations in the water balance for planned land use changes, particularly the surface runoff that potentially triggers flooding. Recently, the new mandatory German standard DWA-A 102-1 [33] requires that spatial planning must not fundamentally change the quantitative proportions of water balance variables. This will lead to a significant boost for low-impact design (LID) in urban areas. Initially, SWMM-UrbanEVA was exactly developed for the purpose of better describing LID structures in urban hydrology. In our study, detailed urban drainage infrastructure is purposely not included, but the model environment would allow for such refining.
Funding: This study was conducted within the framework of the Project PROSPER-RO, funded by BMBF, grant number 033L212. We acknowledge financial support by Deutsche Forschungsgemeinschaft and Universität Rostock within the funding program Open Access Publishing.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: Data sharing is not applicable. | 13,315.4 | 2021-07-08T00:00:00.000 | [
"Environmental Science",
"Engineering"
] |
A Comparison Principle for Some Types of Elliptic Equations
as proven in 1 for convex operator and in 2 for uniformly elliptic ones. Some years later, Jensen in 3 , using his known approximation functions, proved such a kind of principle between a viscosity subsolution and a viscosity supersolution, both in W1,∞ Ω , for operators which grow linearly in the gradient term and could be uniformly elliptic and nonincreasing in the t variable or degenerate elliptic and decreasing in t. In the same time, in 4 , Trudinger was able to compare solutions which are C Ω ∩ C0,1 Ω and
Introduction
The aim of this paper is to study some fully nonlinear uniformly elliptic equations, where the gradient term could be noncontinuous and growing like some BMO functions.Given an equation F X, p, t, x 0, in the case of classical solutions, the comparison principle states the following: i let u, v ∈ C 2 Ω be, respectively, a sub and a supersolution of the equation, if u ≤ v on ∂Ω, then u ≤ v in Ω, as proven in 1 for convex operator and in 2 for uniformly elliptic ones.Some years later, Jensen in 3 , using his known approximation functions, proved such a kind of principle between a viscosity subsolution and a viscosity supersolution, both in W 1,∞ Ω , for operators which grow linearly in the gradient term and could be uniformly elliptic and nonincreasing in the t variable or degenerate elliptic and decreasing in t.In the same time, in 4 , Trudinger was able to compare solutions which are C Ω ∩ C 0,1 Ω and C Ω ∩ C 0,α Ω .
Then Jensen et al. 5 extended these results considering a zero order term and sub-and supersolutions which are only BUC Ω .Soon after, Ishii in 6 and Jensen in 3 ISRN Mathematical Analysis independently proved a comparison principle for only continuous bounded functions, where in the first paper the author considers continuous degenerate operators of Isaacs type, which grow linearly in the p variable, while the second concerns uniformly elliptic operators which are Lipschitz in the gradient term and nonincreasing in t.
Then Ishii and Lions in 7 obtain this kind of result between bounded viscosity suband supersolution for strictly elliptic operators which grow quadratically in the p variable and are nonincreasing in t.In the same article, these two authors weakened the structure conditions on F and compared continuous bounded functions where at least one has to be locally Lipschitz; then, this result was sharped by Crandall in 8 see also 9 .
Crandall et al., in their pioneering paper 10 were able to prove such a kind of results between viscosity solutions for degenerate elliptic equations, nonincreasing in t, extending the results obtained before see also 11,12 .Then Koike and Takahashi in their work 13 compared L p -viscosity sub-and supersolutions, when at least one of them is L p -strong.
In the last years, Bardi and Mannucci in 14 prove a comparison principle for fully nonlinear degenerate elliptic equations that satisfy some conditions of partial nondegeneracy, with linear growth in the gradient term see also 15 and Sirakov,16 , has the same result for fully nonlinear equations of Hamilton-Jacobi-Bellman-Isaacs type with unbounded ingredients and the most quadratic growth in p.
Also, it is interesting to mention the series of papers by Birindelli and Demengel, 17-19 , where they investigate on singular fully nonlinear equations.
The paper is organized as follows: in the first section some auxiliary results are stated; the second one is characterized by an overview on inf and supconvex envelope; the proof of the main result is given in the third section; finally, in the last one, some examples which justify the interest on this kind of operators are listed.
Preliminaries and Auxiliary Results
First of all, it is useful to give some definitions.We say that P is a paraboloid of opening M when where M is a positive constant, l 0 is a constant, and l x is a linear function.P is a convex paraboloid if there is the sign in 2.1 , concave otherwise.Given two functions u and v on an open set A, v touches u from the above in x 0 ∈ A when In this case, one could also say that u touches v from below.
Consider the following: From 3 , we have the following.
Lemma 2.1.Assume that w ∈ C Ω ∩ W 1,∞ Ω and that D 2 λ w ≥ −K 0 (in the sense of distribution) for all direction λ.If w has an interior maximum then there exist two constants c 0 > 0 and δ 0 > 0 such that Then some lemmas from 3 are needed for the sequel.
Then there exists a function M ∈ L 1 Ω; S n and a matrix valued measure Γ ∈ M Ω; S n such that 2 Γ is singular with respect to Lebesgue measure, 3 Γ(S) is positive semidefinite for all Borel subsets S di Ω, If w has an interior maximum then there exists a constant δ 0 > 0 such that for D 2 w M Γ (as in the previous lemma) as
ISRN Mathematical Analysis
Finally, set Q x, y distance x, y , graph u , 2.9
Sup and Inf Convex Envelope
The aim of this paper is to consider equations of the following form: where F and H are such that the following hold: 2 there exist two constants c 0 and c 1 such that for all M ≥ N and p, q, t ∈ R n × R n × R; 3 F M, p, t ≤ F M, p, s for all t > s and M, p ∈ S n × R n ; 4 there exists a positive function g on R n × R n such that H p − H q ≤ γg p, q p − q 3.3 for γ > 0, all p, q ∈ R n , where g has to satisfy the following then there exists a constant δ 0 > 0 such that for 0 < δ < δ 0 and M, N are the functions defined in Lemma 2.2.
We say that the structure condition holds if and only if 2.1 -4.4 are fulfilled.Define, as in 20 , the convex envelope of a function.
Definition 3.1.Let Ω be a bounded domain of R n , A a subset of Ω such that A ⊂ Ω, u ∈ C Ω .We call, respectively, sup and inf convex envelope of u as in the following objects: x − y 2 .
3.5
Now it is possible to give some properties of the sup convex envelope, noting that similar ones hold for the inf convex envelope.
Theorem 3.3 see 20 .Let A be an open set such that A ⊂ Ω, we have 2 for all x 0 ∈ A there exists a concave paraboloid of opening 2/ε which touches u ε from below in x 0 ∈ A.
In particular u ε is pointwise differentiable to the second order for almost every x ∈ A.
Before going further, it is useful to give the definition of viscosity solution.
Definition 3.4.A viscosity subsolution of F D 2 u, Du, u, x H Du 0 is a function u ∈ USC Ω such that ∀x 0 ∈ Ω and ∀φ ∈ C 2 Ω .If u − φ has a local maximum in x 0 , then the following holds A continuous function u is a viscosity solution of F D 2 u, Du, u, x H Du 0 if and only if u is both a viscosity sub-and supersolution.
ISRN Mathematical Analysis
Remember the following.i USC Ω is the set of upper semicontinuous function u in Ω such that u < ∞.
ii LSC Ω is the set of lower semicontinuous function v in Ω such that v > −∞.Now, to complete this section, note that, as stated in the following theorem see 5, 20 , the convex envelope of a viscosity solution is a viscosity solution of the same equation.
Comparison Principle
Now it is possible to prove the comparison principle Proof.Suppose that the contrary holds true as u < v in Ω.
4.3
Define v v ε − ε and u u − ε ε.By Theorem 3.5, v and u are, respectively, viscosity sub-and supersolution of the previous equation.By the properties of sup and inf convex envelope we know that for all direction λ
4.9
Moreover, by the definition of ζ δ , we have Applying the definition of viscosity subsolution and supersolution, it is possible to write
Examples
It
Theorem 3 . 5 .
Let u, v ∈ C Ω be bounded functions which are, respectively, viscosity subsolution and supersolution of F M, p, t H p 0. If F is uniformly elliptic and nonincreasing then there exist two Lipschitz continuous and bounded functions u * e v * and an open set A 1 , with A 1 ⊂ A, such that u * is semiconvex, v * is semiconcave on A 1 which are, respectively, viscosity subsolution and supersolution of F M, p, t H p 0 in A 1 .
Theorem 4 . 1
Comparison Principle .Let u, v ∈ C Ω .Assume that u is a viscosity supersolution and v is a viscosity subsolution of F D 2 w x , Dw x , Suppose that u ≥ v on ∂Ω.If F and H satisfy the structure condition, then u ≥ v in Ω. 4.2
Cg D u x , D v x |D w x | for every x ∈ ζ δ \ E 2 , 4.16 where meas E 2 0.Then, for E E 1 ∪ E 2 , since meas E 0, we have Remark 4.2.Note that in the last line it is essential that D u L ∞ and D v L ∞ are finite.
− ≥ F M x , D v x , v x H D v x < F M − x , D u x , u x H D u x for almost every x ∈ ζ δ \ E ⊂ ζ δ ,4.17 which contradicts 4.3 .So u ≥ v in Ω. | 2,408 | 2012-12-02T00:00:00.000 | [
"Mathematics"
] |
Considerations on the Current Harmonics of Plate-Type Electrostatic Precipitators Power Supplies
Plate-type electrostatic precipitators are the main installations of separating particles from industries (especially for large gas flow) and must operate in its electromagnetic environment without interfering with the operation of other equipments. The power supplies have non-linear elements that cause distortions of the sources currents. The main objective of this work is to analyze measure and simulate currents, voltages and powers from the sources that are used (thyristor-controlled reactor type) taking into account a new electric model, more close to reality, of the ESP sections. It’s analyzed the modification of the current waves shape and THD in case of using passive filters. DOI: http://dx.doi.org/10.5755/j01.eee.19.5.2344
I. INTRODUCTION
In the last decades the use of power electronics equipments has quickly increased.The power electronics technology and its progress depend on the advantages of semiconductor technology [1], [2].
The plate-type electrostatic precipitators (ESPs) have been used to collection the dust, fume, and mist particles from following processes: electrical power plants, cement industry, iron steel works, glass industry, and other industries.
The ESPs are made from a number (usually, three or four in the most applications) of series sections.Each section is energized by its own thyristor-controlled reactor (TCR), high voltage transformer, rectifier set bridge, and has its own hopper.By mechanical shock, the dust particles are removed from the collecting plates into receiving hopers [3]- [5].
The most popular power sources used for supplying ESP sections are those which are supplied from two phases, with two thyristors connected as anti-parallel which, by controlling in the transformer's bootstrap primary and directly the voltage on the ESP's section.
The two thyristors are in heavy-duty regime and they must be properly dimensioned [6], [7].
An important problem in simulations' achievement is the modeling, as accurate possible, of the ESP's sections.
The current absorbed by the electronic power sources is non-sinusoidal and is more deformed as the ignition angle is higher [6]- [10].
A solution to reducing currents harmonic is to use passive filters, that have LC filters and high pass filter.Passive filter have some disadvantages: it is possible to fall into series or parallel resonance with the source and produces harmonic currents, and, also, the passive filter performances depends on source impedance (usually unknown) [8]- [10].
The power sources that supply the ESP sections, depending on the constructive solution, the operational diagram and the voltage's adjusting regime determine and increased safety and efficiency for collecting the dust from the gas result further different industrial processes [11]- [13].
Today, there is powerful software (PSCAD/EMTDC, MATLAB) and PCs that provide many useful low cost solutions in the design and analyze the power electronic sources.Usually, the electric power systems are complex.
PSCAD/EMTDC is a simulator of electric networks with the capability of modeling complex power electronics and controls of the non-linear networks.When run under the PSCAD graphical user interface, the PSCAD/EMTDC 4.2 combinations becomes a powerful means of visualizing the enormous complexity of portions of the electric power systems [14].
II. ANALYSIS OF A TRADITIONAL DC ENERGIZATION FOR ESP SECTIONS
The A plate-type ESP consists of emission electrodes (of different shapes) connected to a high negative voltage of tens of kV, installed between the collecting plates (collecting electrodes) which are connect to the ground.Supplying the electrodes with high voltage generates an electric field.The electric field is stronger in vicinity of the discharge electrodes (10-60 kV/cm) and has low values near the collecting plates (2-4 kV/cm).It is the area near the discharge electrodes where the electrostatic charging process develops.The shape of the discharge electrodes has an important role in electric charge carriers generating.
A. Mathematical model of the current
One of the main structure power electronic is TCR that is a group of anti-parallel thyristor controlled an inductor and the reactance is varied continuous by partial conduction control of the thyristor (Fig. 1).The TCR is supplied at A.C. voltage and the control principle is called phase control.In general, instead of reactance L is an impedance Z [6].In Fig. 2, α is firing angle and γ is conduction angle.The instantaneous current is [6], [7]: ( ) If α increase on the one hand the power on the impedance decrease and on the other hand the current waveforms through impedance becomes less sinusoidal.For the same α angle for both thyristors the even order harmonics and the D.C. current components are zero and are generated only the odd order harmonics.
The RMS current value for n th current harmonic is given by [7] ( ) ( ) ( ) ( ) In (2), n=3, 5, 7, ... The current conduction angle is maximum 180 0 in anti parallel thyristors configurations.The effective value for a periodic current is The effective value also may be compute with a D.C. (I DC ) and harmonic currents where I 1 is the effective current at 50 Hz, and I n is the effective current at n x 50 Hz.In practice, n is limited to 40.The total harmonic distortion (THD) for source current is:
B. Traditional DC energization of a section
The traditional D.C. energization is obtained with the electrical power supply presented in Fig. 3 [4].
The line voltage is regulated by a thyristor controller T 1 -T 2 through phase control by a pair of anti parallel thyristors.After that, the voltage is applied to the primary of the high voltage transformer VT 2 .The primary voltage is raised to the desired secondary level and the A.C. voltage is rectified by a high voltage bridge rectifier P. The rectified secondary voltage is applied to a section of electrostatic precipitators through current filter R 2 -L 2 .The high voltage rectifier is connected in such a way that the discharge electrodes have a negative polarity (negative Corona is generated in the precipitator to the discharge wires) and the collecting plates are earthed.
III. MEASUREMENTS RESULTS
The parameters were measured (with different devices) at a section of a large electrostatic precipitator (appendix) from a thermal power station [4].
Any periodical waveform can be composing into a sinusoidal waveform at the fundamental frequency (50 or 60 Hz, depending on the network area) plus a number of sinusoids at harmonics frequencies.For symmetrical waveforms (positive and negative half cycles are the same shape and magnitude), the even numbered harmonics are zero.In practice, may be occurs smalls even harmonics and D.C. component.
The measured primary source current and voltage, for a section of ESP are present in Fig. 4 (current through section was 0.902 A), the spectrum for primary source current and voltage in Fig. 5. Measurements form Fig. 6-Fig.8 were made with threephase power quality analyzer CA 8334 B [15].The spectrum of currents (Fig. 6), made along one minute (at different time), show that the dynamical operation of a section.The high order harmonic 2 to 11 exceed the maximum values accepted by standards and the total harmonic distortion (THD) of current has values between 40-77 % (Fig. 8).The spectrum of voltage it is constant and has low harmonics (Fig. 7).THD voltage is below 2.4% (Fig. 9).
The values from fig 8-9 were measure with one second resolution.The section supply has nonlinear characteristics and is responsible for injecting harmonic currents and voltages into the electrical network.The direct results of harmonics cause power system problems like heating, solidstate malfunctions, and communication interference, resonance condition (especially when are use capacitor in the network) errors at measurement devices and sometime catastrophic failure [15]- [19].
IV. SIMULATIONS OF A TRADITIONAL DC ENERGIZATION
FOR ESP SECTIONS WITHOUT PASSIVE FILTER The power supply of one ESP section was simulated with PSCAD/EMTDC using three-phase voltage source, TCR structure (Fig. 10).The firing angle α can be modify between 15...150 0 .Practically, the value of firing angle α is above 45 0 and bellow 145 0 .Were simulated the source voltage (V source ), source current (I source ), ESP voltage (V esp ), and current (I esp ) are simulated for different firing angle (α).With FFT block is compute the harmonic current (up to 15th harmonic) and with THD block is compute the THD current [14].
A better solution to modeling the ESP sections is to use a capacitor and measured I-V characteristic from industry ESP.The capacitor (C=0.3µF) is the electrical capacitance of the precipitator section depends on the geometrical dimensions of the sections and the dielectric proprieties of the process gases.I-V characteristic data were takes from industrial measurement of ESP with three sections (see appendix).
V. SIMULATIONS OF A CLASSICAL DC ENERGIZATION FOR ESP SECTIONS WITH PASSIVE FILTER
The power quality defects are divided into five categories: harmonic distortion; blackouts; under or over voltage; sags voltage and surges; transient phenomena.Each of these has a different cause.For example, harmonics occur into customer's own installation and may propagate into the network and affect other consumers.Harmonic can be avoid by a good design practice and well made diminish equipment.One possibility to compensate harmonic current is using the passive filters.
Passive filters have been used to diminish harmonics generated by large loads and have low cost and high efficiency.In general, passive filters have lower impedance Z F (for a tuned harmonic frequency) than source impedance Z S and harmonic currents flowing into the source are reduced (Fig. 14).The filter characteristics depends on impedance of the source (Z s ) and the impedance of the filters (Z F ) [12], [13].
The passive filter has the structure in Fig. 15, for 3 rd , 5 rd , 7 rd current harmonics and high pass filter.
The sizing of filters components for n th current harmonic are made with The values of the passive filter components are give in Table I [8].It was make simulations for electrical installation (Fig. 10) and passive filter (Fig. 15) in following situations: − Parallel passive filter (the filter was connected in parallel with source); − Series passive filter (the filter was connected in series with source).From Fig. 16 the THD has lower values for the parallel connection of passive filters and for the firing angle lower than 120 0 .The series connection of passive filter does not improve the waveforms of the currents like parallel connection of passive filter.
VI. CONCLUSIONS
Ideally, for a resistive load, the current and voltage waveforms are sinusoidal.In reality, the current in the industrial loads is not linear.An electrostatic section is a non-linear load and will produce lot of current harmonics that can be measure with modern devices and can be done rapid diagnosis and harmonic evaluation.The electrostatic sections have a dynamic operation and from this reason the current it is not constant.The THD of current modifies along of section operation.Using tuned passive filters, in parallel and series connections, the THD can be diminish bellow to 20%.Because of the dynamical operation, the active or hybrid filters may be use to achieve better waveforms of the supply current.
Fig. 9 .
Fig. 9. THD line voltage of ESP power supply for a few hours operation.
Fig. 10 .
Fig.10.Electrical installation of the ESP section using in simulations.
TABLE I .
THE VALUES OF THE PASSIVE FILTER (FIG.15) | 2,574.6 | 2013-03-13T00:00:00.000 | [
"Engineering",
"Physics"
] |
Splenic Arterial Pulsatility Index to Predict Hepatic Fibrosis in Hemodialysis Patients with Chronic Hepatitis C Virus Infection
The clinical utility of the splenic arterial pulsatility index (SAPI), a duplex Doppler ultrasonographic index, to predict the stage of hepatic fibrosis in hemodialysis patients with chronic hepatitis C virus (HCV) infection remains elusive. We conducted a retrospective, cross-sectional study to include 296 hemodialysis patients with HCV who underwent SAPI assessment and liver stiffness measurements (LSMs). The levels of SAPI were significantly associated with LSMs (Pearson correlation coefficient: 0.413, p < 0.001) and different stages of hepatic fibrosis as determined using LSMs (Spearman’s rank correlation coefficient: 0.529, p < 0.001). The areas under receiver operating characteristics (AUROCs) of SAPI to predict the severity of hepatic fibrosis were 0.730 (95% CI: 0.671–0.789) for ≥F1, 0.782 (95% CI: 0.730–0.834) for ≥F2, 0.838 (95% CI: 0.781–0.894) for ≥F3, and 0.851 (95% CI: 0.771–0.931) for F4. Furthermore, the AUROCs of SAPI were comparable to those of the fibrosis index based on four parameters (FIB-4) and superior to those of the aspartate transaminase (AST)-to-platelet ratio index (APRI). The positive predictive value (PPV) for ≥F1 was 79.5% when the Youden index was set at 1.04, and the negative predictive values (NPVs) for ≥F2, ≥F3, and F4 were 79.8%, 92,6%, and 96.9%, respectively, when the maximal Youden indices were set at 1.06, 1.19, and 1.30. The diagnostic accuracies of SAPI with the maximal Youden index for a fibrosis stage of ≥F1, ≥F2, ≥F3, and F4 were 69.6%, 67.2%, 75.0%, and 85.1%, respectively. In conclusion, SAPI can serve as a good noninvasive index in predicting the severity of hepatic fibrosis in hemodialysis patients with chronic HCV infection.
Introduction
Despite the adoption of universal precautions and blood safety, chronic hepatitis C virus (HCV) infection remains a significant health problem in hemodialysis patients. While the global prevalence of HCV infection is about 0.7%, the prevalence of HCV infection ranges from 4% to 20% in hemodialysis patients [1][2][3][4][5][6]. Hemodialysis patients with chronic HCV infection have higher hepatic and extrahepatic morbidity and mortality than those without chronic HCV infection [7][8][9]. In contrast, the long-term prognosis is improved once HCV is eradicated with effective antiviral treatment [10].
In the era of interferon (IFN), treatment uptake of HCV is low because the treatment response and tolerance are far from satisfactory [11][12][13]. The advent of IFN-free directacting antivirals (DAAs) after 2014 has made a paradigm shift in the care of hemodialysis patients with chronic HCV infection because the efficacy and safety are excellent with DAA treatment. Numerous clinical trials and real-world studies have indicated that more than 95% of hemodialysis patients with chronic HCV infection can achieve a sustained virologic response (SVR) with a short course of DAAs [14][15][16][17][18][19][20]. Although the stage of hepatic fibrosis does not significantly affect the overall response rates in hemodialysis patients with HCV receiving DAAs, an accurate diagnosis of the stage of hepatic fibrosis is still mandatory to assist in optimizing clinical decisions [21].
Percutaneous liver biopsy is the gold standard to assess the severity of hepatic fibrosis in patients with HCV infection. However, it is an invasive procedure that may cause deaths, major bleeding, biliary injuries, or pain [22]. The risk of bleeding in hemodialysis patients following percutaneous liver biopsy ranges from 1.3% to 5.9%, much higher than the risk of 0.16% in nonuremic patients [23][24][25]. Moreover, the biopsy specimens are prone to sampling and interpretation variations [26]. Therefore, using simple and easily accessible noninvasive indices to determine the therapeutic and surveillance plans is paramount for hemodialysis patients with chronic HCV infection [27].
Duplex Doppler ultrasonography (DDU) is an easily accessible noninvasive tool to evaluate the vascular dynamics in various organs. Clinically, physicians can perform DDU at a routine gray-scale ultrasonography screening. Prior studies have shown that the splenic arterial pulsatility index (SAPI), which measures the arterial resistance by placing the Doppler cursor within the main branches of the splenic artery at the splenic hilum, is highly correlated with the severity of hepatic fibrosis and portal hypertension in patients with chronic HCV infection, taking percutaneous liver biopsy and hepatic vein catheterization as the reference standards [28][29][30][31]. However, data regarding the value of SAPI to predict the stage of hepatic fibrosis in hemodialysis patients with chronic HCV infection are limited. We aimed to conduct a cross-sectional study to evaluate the clinical utility of SAPI to stage hepatic fibrosis in this special population, taking transient elastography (TE), which generates a shear wave in the liver tissue to directly determine the liver stiffness, to be the reference standard [32].
Patients
We conducted a retrospective, cross-sectional study to include hemodialysis patients with chronic HCV infection at the National Taiwan University Hospital (NTUH) and NTUH Yun-Lin Branch who underwent a liver stiffness measurement (LSM) with TE (FibroScan ® , Echosens, Paris, France) and SAPI with duplex Doppler ultrasonography (Aplio 500 ® , Canon Medical Systems Incorporation, Tokyo, Japan) between January 2010 and June 2022. Hemodialysis patients were defined as those who had an estimated glomerular filtration (eGFR) rate <15 mL/min/1.73 m 2 using the chronic kidney disease-epidemiology collaboration (CKD-EPI) equation and were on maintenance dialysis through vascular routes [33][34][35]. Chronic HCV infection was defined as patients who presented detectable HCV antibodies (anti-HCV; Abbott HCV EIA 2.0, Abbott Laboratories, Abbott Park, IL, USA) and quantifiable serum HCV RNA (Cobas TaqMan HCV Test v2.0, Roche Diagnostics GmbH, Mannheim, Germany, lower limit of quantification [LLOQ]: 15 IU/mL) for 6 months or more. Patients were excluded from the study if they had hepatitis B virus (HBV) or human immunodeficiency virus (HIV) coinfection, decompensated cirrhosis (Child-Pugh B or C), a history of hepatocellular carcinoma (HCC), a failed or unreliable LSM with TE, or a failed SAPI assessment due to splenectomy.
Study Design
We collected baseline demographic data, including age, sex, history of HCC, and body mass index (BMI). Blood tests, including hemogram, serum albumin, total bilirubin, aspartate transaminase (AST), alanine transaminase (ALT), creatinine, anti-HCV, anti-HIV (Abbott Architect HIV Ag/Ab Combo, Abbott Laboratories, Abbott Park, IL, USA), HBV surface antigen (Abbott Architect HBsAg qualitative assay, Abbott Laboratories, Abbott Park, IL, USA), HCV RNA, and HCV genotype (Abbott RealTime HCV Genotype II, Abbott Laboratories, Abbott Park, IL, USA) were assessed [36]. The upper limits of normal (ULN) AST and ALT levels were 30 U/L for men and 19 U/L for women [37]. We also calculated the AST-to-platelet ratio index (APRI) and fibrosis index based on four parameters (FIB-4) for all patients [38,39]. LSM was performed with the patients lying in a supine position with their right arms tucked behind the head. The probe was placed on the skin of the right intercostal space at the level of the right hepatic lobe. The results of LSM were expressed in kPa with a median value and interquartile range (IQR) of at least 10 valid measurements and a successful rate of more than 60%. LSM failure was defined as a zero valid measurement, and unreliable examinations were defined as less than 10 valid measurements, a successful rate of less than 60%, or the IQR of more than 30% of the median LSM value. Patients with an LSM of ≤6.0 kPa, 6.1-7.0 kPa, 7.1-9.4 kPa, 9.5-12.4 kPa, and ≥12.5 had a fibrosis stage of F0, F1, F2, F3, and F4, respectively [40]. SAPI was measured by placing the ultrasound probe on the skin of the left intercostal space and sampling the signals in the main branches of the intrasplenic arteries at the splenic hilum. The SAPI was calculated using the following formula: (peak systolic velocity-end-diastolic velocity)/mean velocity [35].
Statistical Analysis
All statistical analyses were performed using the Statistical Program for Social Sciences (SPSS Statistics Version 26.0, IBM Corp., Armonk, NY, USA). Baseline characteristics were shown as a median (range) and number (percentage) when appropriate. We analyzed the relationship between SAPI and LSM with the Pearson correlation. Furthermore, we analyzed the relationship between SAPI and different hepatic fibrosis stages (F0, F1, F2, F3, and F4) with Spearman's rank correlation. The receiver operating characteristic (ROC) curves to predict patients with a fibrosis stage of ≥F1, ≥F2, ≥F3, and F4 were constructed for SAPI, APRI, and FIB-4. The areas under the ROC curves (AUROCs) with a 95% confidence interval (CI) of SAPI, APRI, and FIB-4 were shown according to different fibrosis stages [41]. The Youden index with a maximal value (sensitivity + specificity − 1) was selected to distinguish different fibrosis stages. All statistics were two-tailed, and the results with a p-value < 0.05 were considered statistically significant.
Patient Characteristics
Of 335 hemodialysis patients with chronic HCV infection, 296 were eligible for the study after excluding 39 because of HBV coinfection (n = 17), decompensated cirrhosis (n = 2), a history of HCC (n = 3), a failed or unreliable LSM (n = 7), or a failed SAPI assessment due to splenectomy ( Figure 1).
index based on four parameters; eGFR, estimated glomerular filtration rate. a Data are shown as a median (range) unless otherwise indicated. b Assessed using TE. c The LSM cutoff values for a hepatic fibrosis stage of F0, F1, F2, F3, and F4 are ≤6.0 kPa, 6.1-7.0 kPa, 7.1-9.4 kPa, 9.5-12.4 kPa, and ≥12.5 kPa, respectively. d The upper limit of normal (ULN) is 30 U/L for men and 19 U/L for women. e Assessed using the chronic kidney disease epidemiology collaboration (CKD-EPI) equation.
AUROCs of SAPI to Predict the Severity of Hepatic Fibrosis
The ROC curves of SAPI, APRI, and FIB-4 according to different stages of hepatic fibrosis are shown in
Selective Cutoff Values for SAPI to Predict the Severity of Hepatic Fibrosis
The maximal Youden indices of SAPI to predict patients with a fibrosis stage of ≥F1, ≥F2, ≥F3, and F4 were 1. 04, 1.06, 1.19, and 1.30
Discussion
The noninvasive tools to stage hepatic fibrosis in hemodialysis patients with HCV infection are expected to prevail in clinical practice because liver biopsy is invasive and associated with various complications [27]. Our study demonstrated that in hemodialysis patients with chronic HCV infection, the levels of SAPI tended to increase with increasing LSMs and stage of hepatic fibrosis. By comparing the two commonly used biochemical indices, the APRI and FIB-4, we revealed that the overall diagnostic power of SAPI was similar to FIB-4 but was superior to APRI in staging hepatic fibrosis in our patients. In nonuremic patients with HCV, FIB-4 has been shown to perform better than APRI in distinguishing the severity of hepatic fibrosis [29,40]. We also observed a superior diagnostic power of FIB-4 over APRI to stage hepatic fibrosis in hemodialysis patients with chronic HCV infection.
In our study, the AUROCs of SAPI increased with more severe hepatic fibrosis. When we examined the maximal Youden index to diagnose a fibrosis stage of ≥F1, ≥F2, ≥F3, and F4, we found that the main power of SAPI was to diagnose patients with ≥F1 to reach a PPV of 79.5% at a cutoff value of 1.04, and patients with ≥F2, ≥F3, and F4 with NPVs of 79.8%, 92.6%, and 96.9%, respectively, at cutoff values of 1.06, 1.19, and 1.30. Using these cutoff values, physicians can correctly diagnose the stage of hepatic fibrosis in more than two-thirds of hemodialysis patients with chronic HCV infection [42]. Although SAPI was considered a valuable index to predict the severity of hepatic fibrosis in hemodialysis patients with chronic HCV infection, the diagnostic performance of SAPI seemed to be inferior to nonuremic patients with HCV, which revealed ARUOCs of 0.86 to 0.89 to predict a fibrosis stage of ≥F2, and 0.90 to 0.92 to predict a fibrosis stage of F4 [29][30][31]. Because body fluid status may significantly affect portal hemodynamics in hemodialysis patients, we speculated that the efficiency of hemodialysis might contribute to the lower diagnostic accuracies of SAPI in predicting the stage of hepatic fibrosis [43,44].
Because TE may misclassify the stage of hepatic fibrosis in a small proportion of hemodialysis patients with chronic HCV infection, prior studies have demonstrated that the diagnostic accuracies of APRI and FIB-4 to stage hepatic fibrosis were better if they took percutaneous liver biopsy rather than LSM to be the reference standard [40,[45][46][47]. Based on the similar AUROCs of APRI and FIB-4 between our and prior reports using TE as the reference standard, we confirmed that SAPI had comparable diagnostic accuracies to FIB-4 and was superior to APRI to assess the stage of hepatic fibrosis in hemodialysis patients with chronic HCV infection [47][48][49].
Compared to computed tomography (CT) or magnetic resonance imaging (MRI)based techniques to assess the hepatic fibrosis, most physicians can easily complete the measurement of SAPI within 5-10 min using ultrasonographic machines equipped with the automatic Doppler tracing function through placing the cursor on the main branches of the splenic artery [31]. Furthermore, DDU can be concomitantly performed at routine gray-scale ultrasonographic HCC surveillance without additional costs and concerns for radiation or contrast-related injuries with CT or MRI. While SAPI may play a role in improving the diagnostic yield of hepatic fibrosis in hemodialysis patients with chronic HCV infection, further studies should target cost-effectiveness and the cost-utility of combining SAPI, FIB-4, and LSM to optimize the care of these patients.
To our knowledge, this study was the first to assess the clinical utility of SAPI, an easily-performed DDU analysis, to diagnose the stage of hepatic fibrosis in hemodialysis patients with chronic HCV infection. The strengths of our study include (1) a sizable number of hemodialysis patients in this analysis; and (2) a homogeneous population excluding HBV or HIV infection, decompensated cirrhosis, or a history of HCC. However, our study has some limitations. First, this retrospective study could not standardize the patients' fluid status during the SAPI assessment. Second, we were unable to assess the intra-and inter-observer variations of SAPI assessment due to the retrospective nature of this study. However, all SAPI measurements in our study were performed by welltrained physicians, who demonstrated low intra-and inter-observer variations in previous reports [26,28]. Third, we did not adopt a percutaneous liver biopsy, an invasive procedure seldom performed for hemodialysis patients due to the concerns of bleeding events, as the reference standard.
In conclusion, our study demonstrates that SAPI is a useful noninvasive index to stage the severity of hepatic fibrosis in hemodialysis patients with chronic HCV infection. The diagnostic performance of SAPI is comparable to FIB-4 and superior to APRI. Using the maximal Youden indices for SAPI, the stage of hepatic fibrosis can be correctly diagnosed in more than two-thirds of hemodialysis patients with chronic HCV infection. Independent studies are needed to validate the value of SAPI to predict the stage of hepatic fibrosis in this special population. Informed Consent Statement: Informed consent was waived in the study. Data Availability Statement: Data for this study, though not available in a public repository, can be made available upon reasonable request. | 3,432.8 | 2023-03-01T00:00:00.000 | [
"Medicine",
"Biology"
] |
Communicative competence of students in technical specialties
This article discusses the teaching of foreign languages the future technology of foreign language specialties in the textile industry.
INTRODUCTION
An analysis of current trends and the development of national in systems of higher education and project in new state educational standards of higher professional education, regulations in the field of engineering education in the field of foreign languages training of Students in textile specialties shows actually lingvo-cultural orientation of training in line with the formation of intercultural competence and training specialties. Training future engineers lingvocultural competence facilitates integration of acquired knowledge into a coherent picture of the world, thus helping future professional to solve communication problems in the sphere of professional activity. This unit contributes to the application of the requirements of time to spend future professional to search and analyze information necessary for studying advanced foreign experience, as well as work with the technical literature, in particular the textile industry, and documentation in a foreign language. However, it should be noted that in the current practice in Uzbekistan, teaching discipline «Foreign language» in technical colleges are obvious gaps.
1. No divergent courses students toward specialties relevant department. 2. Not defined structure, content and conditions of foreign language professionally oriented communicative competence.
3. Not resolved the complex communicative professionally oriented tasks specific to a particular level of education and continuous multi-level structure of the course (in years of schooling).
All of this leads to the fact that graduates of technical colleges, even with a fairly high level of formation in foreign language communicative competence, experience difficulties in the process of professional dialogue with specialists from other countries, due to the aborted linguistic competence profile, insufficient multicultural outlook and insufficient personal behavioral characteristics which lead to a number of conflicts in addressing quality of training. Formation of professional foreign language training of future specialists in technical colleges requires a high level of professional and foreign language competence, which provides intercultural communication and build effective communication links with foreign partners, and a successful career in a foreign environment.
Therefore, in these conditions, the problem of improving foreign language training in technical specialties in universities is particularly important. The importance of linguistic-cultural competent in the process of intercultural communication is defined by special role in integrating its continent and ensuring the relationship of language and culture. If we consider that in terms of intercultural communication learner perceive it through comparison with their own culture, where foreign language acts as a mediator of culture, ect… linguistic-cultural competent, as a key element in the formation of intercultural competence and being characterized by multi-faced and integrative capacity emphasizes its holistic nature. Possession of linguistic-cultural competent of intercultural communication means possession of studying a foreign language and a system of knowledge about the culture of the target language country, as reflected in the national language, knowledge and skills to use them in intercultural communication, hence the ability to follow not only the rules of conduct of a linguistic society, but also language features, which include a terminological lexicon, terminology has been each language, in common brings it closer to other languages, something of their own specific, due not only to its internal laws of development, but also the nature of the phenomena of perception of reality.
Achieving the purpose of education is provided by the Students mastering the content of vocational and foreign language training, consisting of basic and vocational-oriented courses. This is due to the fact that the basic course provides the formation of a common foreign language competence, ect, foundation training underlying the state educational standarts of higher educational and is common for all technical fields.
Vocational and foreign language communicative competence is the «integrative quality of the future specialist with complex structure organization».
It includes three components of motivational value (interest in professional foreign language training and awareness of its importance for future activites), cognitive activity (combining) foreign language communicative competence in the field of professional and general (on the subject) competence is a professional a-important qualities and the ability to engineer, emotionally-volitional (associated with selfesteem and skills generation of feelings of responsibility for success in academic and future professional activity).
As a basis for the organization of the process in formation of professional competence of foreign language students in technical colleges is the technology of contextual learning that accelerate the formation on the basic of general and specific competencies and professional foreign language competence.
The term «competence» is understood intellectually and personally due to a persons ability to practice, and «competence» is defined as a substantial component of this ability in the form of knowledge, skills (I.A.Zimnyaya, M.Y.Yerdakinov, ect…). According I.A.Zimnyaya, competence always is actual manifestation of competences.
On the one hand, foreign language professional communicative competence based on the unity oriented with professional, informational, linguistic, communicative, social, behavioral, cultural, ethical, strategic, ideological and individual personality components. On the other hand, the future specialist should possess integrated system of categories, which includes the following components: a key qualification, professionalism, professor, skills, readiness, professional, competence, professional education.
Consequently, in the modernization of the system of higher education quality the aim of training future professionals is the formation of competence, ensuring effective interaction with the environment in a particular area. Competence is an integrative concept and of motivational , cognitive, behavioral, value semantic emotional and volitional components.
In developing the methodological technique should be clear: 1. Material (technical text), which is the subject of study.
2. The amount of material that is used for acquisition of terminological lexicon the scientific literature. A is important that the number of root words and their derivatives are often used in technical texts, do not overload closes.
3. The concept of "term". Terms defined in German as "Fachwort", it is essentially a word or phrase a special (scientific, technical of language produced (host borrowed etc.,) for exact expression of specific concepts and notations specific subjects.
Define the concept of the term means to describe the method of its construction from the constitutional equal: For example, it is difficult a word "Farbzusammenstellung" often used in the textile industry consists of two components: the first component which "Farb" is a truncated word. "Farbe" means:1.color, paint,complexion.
2. paint (colorant). 3. as a symbol of the identity of the party or organization. 4.quatry,color. 5.color Compound words "Zusammenstellung" consisting of two components zusammen +stellung and meaning "the selection of colors" is an indication that the amount of information of all the components the whole words are not all the information is a compound word because it is the first of "zusammen" means together, and indicates a convergence on the connection. Consequently, substantial aspect is the mastery of a foreign language and studying the system of background knowledge related to the culture and history of the people. At the same native language and culture are the "terms" for comparison (L.V.Sherba). As the practice of training terminological lexicon of the textile industry, a methodical approach to our students has always proved fruitful, so that when it is the systematic use of students began to understand the meaning of new words in a logical way. These requirements imply comprehensive training of future professionals in engineering activities in accordance with international standarts and national specifics of the language in vocational training, which plays a leading role terminological competence. These requirements should ensure that: -Understanding the nature and significance of specialist in a particular area.
-Ability to use different method to communicate effectively in professional environment and society (writing reports, presentation materials, delivery and reception of clear and accessible instructions) -At least professional competence speaking another language, terminology base necessary for communication when working with colleagues from other countries or in international teams. -Awareness of the practical issues of project activities. -Creative research within the profession. -The need for and ability to scientific growth and selfeducation. Given the lack of vocabulary for the textile industry and the importance of learning the vocabulary of terminology need a holistic approach to the problem of selection of terms based on the thematic division of scientific knowledge, which is impossible without applying the method of logicalsemantic analysis of the conceptual and terminological textile industry and the establishment of terminological lexicon in teaching logic, conceptual and lexical relationships between the terms and structuremorphemic analysis terms. This will facilitate their adequate perception and correct translation from German into Uzbek or Russian. In addition the exact vocabulary semantization terminology facilitates the perception and learning vocabulary , as well as contributes to the formation of terminology "baggage" of students etc….Their professional and foreign language competence, in particular the formation of reading and understanding the communicative sense of special technical text, thus the professional communication. In other words, the formation of professional and foreign language competence of students in the textile specialty involves such components as target, substantial, procedural and efficient competent.
The theoretical foundations of the process in formation. Terms line lexicalization nomination Action process. Literature. | 2,081.8 | 2020-11-05T00:00:00.000 | [
"Linguistics"
] |
Magnetocaloric effect in the spin-1/2 Ising-Heisenberg diamond chain with the four-spin interaction
The magnetocaloric effect in the symmetric spin-1/2 Ising-Heisenberg diamond chain with the Ising four-spin interaction is investigated using the generalized decoration-iteration mapping transformation and the transfer-matrix technique. The entropy and the Gruneisen parameter, which closely relate to the magnetocaloric effect, are exactly calculated to compare the capability of the system to cool in the vicinity of different field-induced ground-state phase transitions during the adiabatic demagnetization.
Introduction
The magnetocaloric effect (MCE), which is characterized by an adiabatic change in temperature (or an isothermal change in entropy) arising from the application of the external magnetic field, has been known for more than a hundred years [1]. This interesting phenomenon has also got a long history in the cooling applications at various temperature regimes. The first successful experiment of the adiabatic demagnetization, which was used to achieve the temperatures below 1K with the help of paramagnetic salts, was performed in 1933 [2]. Nowadays, the MCE is a standard technique for achieving the extremely low temperatures [3].
It should be noted that the MCE in quantum spin systems has again attracted much attention of researchers. Indeed, various one-and two-dimensional spin systems have recently been exactly numerically investigated in this context [4][5][6][7][8][9][10][11][12][13][14][15][16][17][18][19]. The main features of the MCE which have been observed during the examination of various spin models include: an enhancement of the MCE owing to the geometric frustration, an enhancement of the MCE in the vicinity of quantum critical points, the appearance of a sequence of cooling and heating stages during the adiabatic demagnetization in spin systems with several magnetically ordered ground states, as well as a possible application of the MCE data for the investigation of critical properties of the system at hand.
In this paper, we investigate the MCE in a symmetric spin-1/2 Ising-Heisenberg diamond chain with the Ising four-spin interaction, which is exactly solvable by combining the generalized decorationiteration mapping transformation [20][21][22] and the transfer-matrix technique [23,24]. As has been shown in our previous investigations [25], the considered diamond chain has a rather complex ground state, which predicts the appearance of a sequence of cooling and heating stages in the system during adiabatic demagnetization. The main aim of this work is to compare the adiabatic cooling rate of the system (an enhancement of the MCE) near different field-induced ground-state phase transitions. Bearing in mind this motivation, we investigate the entropy and the Grüneisen parameter during the adiabatic demagnetization process, as well as the isentropes in the H − T plane. The paper is organized as follows. In section 2, we first briefly present the basic steps of an exact analytical treatment of the symmetric spin-1/2 Ising-Heisenberg diamond chain with the Ising four-spin interaction. Exact calculations of the quantities related to the MCE, such as the entropy and the Grüneisen parameter, are also realized in this section. In section 3, we briefly recall the ground state of the system, and then the most interesting results for the entropy as a function of the external magnetic field, the isentropes in the H − T plane and the adiabatic cooling rate of the system versus the applied magnetic field are also presented here. Finally, some concluding remarks are drawn in section 4.
Model and its exact solution
Let us consider a one-dimensional lattice of N inter-connected diamonds in the external magnetic field, which is defined by the Hamiltonian (see figure 1) Here, the spin variablesŜ γ k (γ = x, y, z) andσ z k denote spatial components of the spin-1/2 operators, the parameter J H stands for the XXZ Heisenberg interaction between the nearest-neighbouring Heisenberg spins and ∆ is an exchange anisotropy in this interaction. The parameter J I denotes the Ising interaction between the Heisenberg spins and their nearest Ising neighbours, while the parameter K describes the Ising four-spin interaction between both Heisenberg spins and two Ising spins of the diamond-shaped unit. Finally, the last two terms determine the magnetostatic Zeeman's energy of the Ising and Heisenberg spins placed in an external magnetic field H oriented along the z-axis. It is worth mentioning that the considered quantum-classical model is exactly solvable within the framework of a generalized decoration-iteration mapping transformation [20][21][22] (for more computational details see our recent works [25] and [7]). As a result, one obtains a simple relation between the partition function Z of the investigated symmetric spin-1/2 Ising-Heisenberg diamond chain with the four-spin interaction and the partition function Z IC of the uniform spin-1/2 Ising linear chain with the nearest-neighbour coupling R and the effective magnetic field H IC The mapping parameters A, R and H IC emerging in (2.2) can be obtained from the "self-consistency" condition of the applied decoration-iteration transformation, and their explicit expressions are given by relations (4) in reference [7] with the modified G function, which is given by equation (6) of reference [25].
It should be mention that the relationship (2.2) completes our exact calculation of the partition function because the partition function of the uniform spin-1/2 Ising chain is well known [23,24].
At this stage, exact results for other thermodynamic quantities follow straightforwardly. Using the standard relations of thermodynamics and statistical physics, the Helmholtz free energy F of the symmetric spin-1/2 Ising-Heisenberg diamond chain with the four-spin interaction can be expressed through the Helmholtz free energy F IC of the uniform spin-1/2 Ising chain (we set the Boltzmann's constant k B = 1). Subsequently, the entropy of the investigated diamond chain can be calculated by differentiating the free energy (2.3) with respect to the temperature T . In our case, the resulting equation for the entropy behaves numerically better if the derivation is taken with respect to the inverse temperature β = 1/T Here, the functions ∂ β ln Z IC and ∂ β ln A satisfy in general the equations with s = sinh(βH IC /2), c = cosh(βH IC /2) and Q = sinh 2 (βH IC /2) + exp(−βR). For x = β the partial derivatives ∂ x lnG ∓ and ∂ x lnG 0 emerging in equations (2.5) and (2.6) read Next, let us calculate the quantity called Grüneisen parameter for the investigated model, which closely relates to the MCE. In general, the Grüneisen parameter Γ H can be coupled with the adiabatic cooling rate (∂T /∂H ) S by using basic thermodynamic relations [26,27]: where M is the total magnetization of the system and C H is the specific heat at a constant magnetic field H . In our case, a direct substitution of the entropy (2.4) into expression (2.9) yields to the following comprehensive form of the Grüneisen parameter Γ H for the symmetric spin-1/2 Ising-Heisenberg diamond chain with a four-spin interaction (2.1): (2.10) The first two functions ∂ H ln Z IC and ∂ H ln A occurring in the numerator of the fraction (2.10) satisfy the general equations (2.5) and (2.6), respectively, where the derivatives ∂ x lnG ∓ and ∂ x lnG 0 are given as
13001-3
follows for x = H : Other functions ∂ 2 βH ln Z IC , ∂ 2 βH ln A, ∂ 2 ββ ln Z IC and ∂ 2 ββ ln A that emerge in (2.10) can be obtained by differentiating (2.5) and (2.6) with respect to H and β, respectively, provided that x = β. However, the resulting expressions for these functions are too cumbersome to be written down here explicitly.
Results and discussion
In this section, we present the results for the entropy as a function of the external magnetic field, isentropes in the H − T plane and the cooling rate during the adiabatic demagnetization for the symmetric spin-1/2 Ising-Heisenberg diamond chain with the Ising four-spin interaction. We assume the Ising and Heisenberg pair interactions J I and J H to be antiferromagnetic (J I > 0, J H > 0), since it can be expected that the magnetic behaviour of the model with the antiferromagnetic interactions in the external longitudinal magnetic field should be more interesting compared to its ferromagnetic counterpart.
Ground state
In view of a further discussion, it is useful firstly to comment on possible spin arrangements of the investigated diamond chain at zero temperature. two other interesting phases QAF and FRI 2 with a perfect antiferromagnetic order in the Ising sublattice can also be found in the ground state depending on whether the four-spin interaction K is considered to be ferromagnetic (K < 0) or antiferromagnetic (K > 0), respectively. For more details on the magnetic order of relevant ground states see our recent work [25].
Entropy
Now, let us turn our attention to the entropy of the investigated diamond chain as a function of the external magnetic field. Figure 3 shows several isothermal dependencies of the entropy per one spin S/3N (recall that the system is composed of N Ising spins and 2N Heisenberg spins) versus the magnetic field H /J I , corresponding to the spin-1/2 Ising-Heisenberg diamond chain with the fixed interaction ratio J H /J I = 1.0 and the fixed ferromagnetic (antiferromagnetic) four-spin interaction K /J I = −0.5 (K /J I = 0.5). It should be mention that the values of the exchange anisotropy parameter ∆ are chosen so as to reflect all possible field-induced ground-state phase transitions. Evidently, the plotted entropy isotherms are almost unchanged down to temperature T /J I = 0.5 for any choice of the parameters ∆ and K . In the limit T /J I → ∞, the entropy per spin approaches its maximum value S max /3N = ln 2 ≈ 0.69315 for any finite value of the applied magnetic field H /J I , since the spin system is disordered at high temperatures, while it monotonously decreases upon an increase of H /J I when the temperature T /J I is finite. Below T /J I = 0.5, the entropy as a function of the magnetic field exhibits irregular dependencies that develop into pronounced peaks located around the transition fields as the temperature is lowered. Finally, almost all these peaks split into isolated lines at critical fields when the temperature reaches the zero value. The only exception is the low-temperature peak observed around the critical field H c /J I = 2.0, corresponding to the field-induced phase transition between the phases FRI 1 and SPP, which completely vanishes at T /J I = 0 [compare the lines for T /J I = 0.03 and 0 in figure 3 (a)]. The residual entropy takes the finite
13001-5
value S res = ln 2 at this critical point, because just one Ising spin is free to flip in the system and the spin arrangement of its nearest Ising neighbours (and consequently all others) is unambiguously given through the Ising four-spin interaction. Of course, this contribution vanishes in the thermodynamic limit N → ∞, and the residual entropy normalized per spin is S res /3N = 0, which implies that the mixed-spin system is not macroscopically degenerate at the phase transition FRI 1 -SPP. However, the macroscopic non-degeneracy of the investigated diamond chain found at H c /J I = 2.0 can be observed merely if the four-spin interaction K is ferromagnetic, since the ground-state phase transition FRI 1 -SPP occurs only for K < 0 according to the ground-state analysis (see figure 2 as well as figure 2 in the reference [25]).
By contrast, isolated lines appearing in zero-temperature entropy isotherms at other critical fields for K > 0 as well as K < 0, whose heights are given by the values of the residual entropy S res /3N = ln 2 1/3 ≈ 0.23105 and/or ln[(1 + 5)/2] 1/3 ≈ 0.16040, clearly point to the macroscopic ground-state degeneracy of the system at these points. The former residual entropy S res /3N = ln 2 1/3 found at the ground-state phase transition QFI-SPP is the result of breaking up (forming) the antisymmetric quantum superposition of updown states of the Heisenberg spins at each unit cell, whereas the latter one S res /3N = ln[(1 + 5)/2] 1/3 is closely associated with destroying (forming) a perfect antiferromagnetic order in the Ising sublattice at critical fields during the (de)magnetization process. zero-temperature phase transitions. It should be pointed out that this relatively fast cooling/heating of the system near critical points clearly indicates the existence of a large MCE. As can be also found from figure 4, the temperature of the system reaches the zero value at critical fields if the entropy is less than or equal to its residual value at these points (see also figure 3 showing the isothermal dependencies of the entropy versus the external magnetic field at various temperatures for better clarity). figure 3) and, therefore, we can say that it tracks the accumulation of the entropy due to the competition between neighbouring ground states. Moreover, it is also evident from figure 5 that high-field peaks of the T Γ H (H ) curves plotted for the values of ∆ = 1.2 in figure 5 (a) and ∆ = 1.1 in figure 5 (b), emerging at the fields H /J I ≈ 2.049 and 2.129, respectively, are significantly higher than the others. According to the ground-state phase diagrams shown in figure 2, these peaks, whose heights are T Γ
Conclusions
In the present paper, we have studied the MCE for the symmetric spin-1/2 Ising-Heisenberg diamond chain with the Ising four-spin interaction, which is exactly solvable by combining the generalized decoration-iteration transformation and the transfer-matrix technique. Within the framework of this approach, we have exactly derived the entropy and Grüneisen parameter, that closely relates to the MCE.
We have also obtained the isentropes in the H − T plane.
We have illustrated that the MCE in the low-entropy and/or low-temperature regimes indicate the field-induced phase transition lines seen in ground-state phase diagrams. More specifically, field-induced ground-state phase transitions perfectly manifest themselves in the form of maxima in low-temperature isothermal dependencies of the entropy versus the external magnetic field, or equivalently in the form of minima in low-entropy isentropes plotted in the H − T plane. This leads to a pronounced cooling of the system during the adiabatic demagnetization in close vicinity of quantum phase transitions when low temperatures are reached. As a consequence, we have found large positive values of the adiabatic cooling rate (the Grüneisen parameter multiplied by the temperature) for magnetic fields slightly above critical points. In addition, we have concluded that the MCE observed just around field-induced groundstate phase transitions is extremely sensitive to the nature of the degeneracy of the model at these points. The most rapid cooling (approximately twice as fast as others) has been observed just around the field-induced ground-state phase transition QFI-SPP, where strong thermal excitations of the decorated Heisenberg spins are present at low temperatures due to breaking up the antisymmetric quantum superpositions of their up-down states at zero temperature, regardless of the nature of the Ising four-spin interaction. By contrast, the effect of Ising four-spin interaction on the adiabatic cooling rate of the system is the same in the vicinity of all field-induced phase transitions. Namely, the increasing Ising four-spin interaction (ferromagnetic as well as antiferromagnetic) accelerates the cooling of the system around phase boundaries during the adiabatic demagnetization.
The considered spin-1/2 Ising-Heisenberg diamond chain with the Ising four-spin interaction, thanks to their simplicity, has enabled the exact analysis of the MCE. Although to our knowledge there is no particular compound which can be described by the model investigated, our results might be useful in comparing the effects of ground-state phase transitions of different origin on the enhancement of the MCE. On the other hand, the comparison between theory and experiment may be resolved in future in connection with further progress in the synthesis of new magnetic chain compounds. | 3,683.4 | 2013-10-01T00:00:00.000 | [
"Physics"
] |
Benchmarking virus concentration methods for quantification of SARS-CoV-2 in raw wastewater
Wastewater-based epidemiology offers a cost-effective alternative to testing large populations for SARS-CoV-2 virus, and may potentially be used as an early warning system for SARS-CoV-2 pandemic spread. However, viruses are highly diluted in wastewater, and a validated method for their concentration and further processing, and suitable reference viruses, are the main needs to be established for reliable SARS-CoV-2 municipal wastewater detection. For this purpose, we collected wastewater from two European cities during the Covid-19 pandemic and evaluated the sensitivity of RT-qPCR detection of viral RNA after four concentration methods (two variants of ultrafiltration-based method and two adsorption and extraction-based methods). Further, we evaluated one external (bovine corona virus) and one internal (pepper mild mottle virus) reference virus. We found a consistently higher recovery of spiked virus using the modified ultrafiltration-based method. This method also had a significantly higher efficiency (p-value <0.01) for wastewater SARS-CoV-2 detection. The ultracentrifugation method was the only method that detected SARS-CoV-2 in the wastewater of both cities. The pepper mild mottle virus was found to function as a potentially suitable internal reference standard.
Introduction
The first cases of the current global pandemic of severe acute respiratory syndrome corona virus 2 (SARS-CoV-2) infections were reported in December 2019, in China (WHO, 2020). Survival of other coronaviruses in water and wastewater has been previously confirmed
Contents lists available at ScienceDirect
Science of the Total Environment j o u r n a l h o m e p a g e : w w w . e l s e v i e r . c o m / l o c a t e / s c i t o t e n v (Gundy et al., 2009), making wastewater based epidemiology (WBE) a possible tool in developing an early warning or surveillance system for infections or rises of the SARS-CoV-2 virus. WBE has been previously used as a successful approach to grasp the severity and prevalence of pathogenic outbreaks in Sweden (Hellmér et al., 2014) and Israel (Kopel et al., 2014). Therefore, several studies have focused on identification of SARS-CoV-2 in wastewater during the current pandemic (La Rosa et al., 2020b;Medema et al., 2020;Randazzo et al., 2020).
One of the hurdles is the recovery methods, which are primarily developed for nonenveloped viruses. However, the novel SARS-CoV-2 belongs to the coronaviridae family (Gorbalenya et al., 2020) of enveloped viruses with single-stranded RNA. Different functional groups on the outer layer of enveloped and nonenveloped viruses impact wastewater recovery methods (Ye et al., 2016). Ahmed et al. (2020aAhmed et al. ( , 2020b have recently compared the efficiency of seven different concentrations methods for the recovery of SARS-CoV-2. Nevertheless; recovery methods of the enveloped viruses, their efficiencies, and internal and external surrogates (reference viruses) require further research (La Rosa et al., 2020a) as the existing information indicate different recovery efficiencies for each method (Ahmed et al., 2020b).
In the current study, the sensitivity of four virus concentration methods were assessed in the perspective of external and internal reference viruses for wastewater samples from two countries: Sweden and Italy. The regions of Stockholm and North of Italy were chosen, as they represented regions with high case numbers of SARS-CoV-2 infection. The four different virus concentration techniques examined in this study were; 1) ultrafiltration 1.A) modified ultrafiltration 2) adsorption-vacuum filtration and 2.A) centrifugation combined with adsorption-vacuum filtration. Bovine coronavirus (BCoV), from betacoronavirus genus, the same genus as SARS-CoV-2 is endemic in cattle. This single-stranded positive-sense enveloped RNA was used as an external reference virus. Pepper mild mottle virus (PMMoV), a nonenveloped single-stranded RNA virus, was assessed as internal reference virus. PMMoV, from the tobamovirus family, is an indicator of fecal contamination in wastewater as it is found abundantly in various aquatic environments (Kitajima et al., 2018). Following recovery of virus, RNA isolation and one-step reverse transcriptase quantitative polymerase chain reaction (RT-qPCR) were conducted to determine the efficiency of each virus concentration technique.
Sampling and sample preparation
Untreated municipal wastewaters were sampled from three different regions in Stockholm, Sweden, and one region from the North of Italy in May and June 2020. The Stockholm samples were kept at +4°C, and the experiments were performed within 24 h. The North of Italy samples were kept at −20°C and delivered to the KTH Lab (Sweden) on dry ice. Twenty μl of BCoV (stock prepared in human colorectal tumor cell line HRT-18G, ATCC CRL-11663 (Christensen and Myrmel, 2018)) were spiked to 50 ml of all wastewater samples as an external reference.
Concentration methods
In this study, four approaches were tested. Method 1-Ultrafiltration: wastewater samples were centrifuged at 4600 ×g for 30 min at 4°C in order to remove large and coarse particles, and the supernatant (approx. 40-50 ml) was filtered through 10 kDa cut off centrifugal ultrafilters (Sartorius) at 1500 ×g for 15 min (Megastar 1.6R benchtop centrifuge) (Medema et al., 2020). Method 1.A-Double Ultrafiltration (Method 1 modified): The obtained concentrate from Method 1 was centrifuged a second time (10 kDa cut off Sartorius centrifugal ultrafilters, 1500 ×g, 15 min, 4°C). The obtained concentrate of method 1 and 1.A varied between 3 and 5 ml and 0.5 to 1.5 ml, respectively.
Method 2-Adsorption-Extraction: MgCl 2 (final concentration 25 mM) was added to the wastewater samples, followed by filtration through 0.45-μm pore size electronegative membranes (Supor 450, plain) (Ahmed et al., 2020b). Method 2.A-Centrifugation combined with adsorption-extraction (Method 2 modified): Wastewater samples were centrifuged at 4600 ×g for 30 min at 4°C in order to remove the large and coarse particles before addition of MgCl 2 (final concentration 25 mM). The obtained concentrate was then passed through 0.45-μm pore size electronegative membranes (Supor 450, plain). All concentrated samples were stored at −80°C until further analysis.
RNA extraction
RNA from Method 1 and 1.A processed municipal wastewater was extracted by adding three volumes of Trizol LS reagent for liquid samples (Thermofisher Scientific) to one volume of concentrated wastewater. For each ml of Trizol-wastewater mixture 0.2 ml of chloroform (Sigma-Aldrich) was added, and the aquous phase purified by miRNeasy Mini Kit (Qiagen, Chatsworth, CA). RNA from filter papers obtained from Method 2 and 2.A were isolated using RNeasy PowerMicrobiome Kit (Qiagen, Chatsworth, CA). The RNA was eluated in 50 μl, and stored at −80°C.
Reverse transcriptase quantitative polymerase chain reaction (RT-qPCR)
Primers targeting the nucleocapsid (N) gene were used to detect the SARS-COV-2 gene. The specificity of N-gene primer set against human corona viruses and other viruses (respiratory) has been previously reported by Medema et al. (2020). For the internal municipal wastewater virus reference, primers targeting PMMoV were used, and for the external (spike) reference virus, primers targeting BCoV were used. All primers are listed in Table 1. Preliminary experiments of waste water samples showed that inhibition of the RT-qPCR reaction was reduced by addition of Bovine Serum Albumin (BSA) to the reaction mixture. Therefore, in all reaction 2 μl of 4 mg/ml BSA (Sigma-Aldrich) was added. For each reaction either 8 μl (N gene detection) or 2 μl (PMMoV and BCoV detection) of RNA template was used. This corresponds to 8 ml initial wastewater volume per SARS-CoV-2 RT-qPCR reaction, and 2 ml for PMMoV detection. Since the same initial volume of wastewater sample were used for all samples and methods (for the same virus), the results are directly comparable. SYBR Green chemistry was used to detect the expression of genes, and RNA from inactivated cultured human SARS-CoV-2 (gift from the Public Health Agency of Sweden) and BCoV were used as positive controls. Negative controls were included in each qPCR run. The reaction was performed according to the manufacturer's recommendations using iTaq universal SYBR Green one-step kit (Bio-Rad) and a final reaction volume of 20 μl (SARS-CoV-2) or 10 μl (PMMoV, BCoV). Thermal cycling (50°C 10 min, 95°C 30 s, followed by 40 cycles of 95°C 10 s, 60°C for 30 s) on a CFX96 Touch System (Bio-Rad) machine were performed. Melting Table 1 Primer sets and targeted genes. BCoV FW: 5′-TGGTGTCTATATTCATTTC TGCTG-3′ RV: 5′-GGCCACTGCCTAGGAT ACA-3′ (Christensen and Myrmel, 2018) curve detection (65°C to 95°C with increment of 0.5°C for 5 s) were analyzed for all included genes and compared to positive controls, to ensure specific amplification. Reactions were considered positive if the cycle threshold (Ct) was below 40 cycles with a single melting peak at correct temperature.
2.5. RT-qPCR amplification efficiency, limit of detection and inhibition RNA was extracted from 200 μl of cultured SARS-CoV-2 at 6 × 10 5 plaque-forming unit (PFU)/ml, and 80 μl of BCoV at 4.5 × 10 5 50% tissue-culture-infective dose (TCID50)/ml. Ten-fold serial dilutions of the RNAs were prepared and RT-qPCR performed as described above. Standard curves were generated from the log-linear regression of Ct values of replicates, and the amplification efficiencies for SARS-CoV-2 and BCoV were calculated (Nolan et al., 2013). The lowest number of diluted standards detected in duplicate assays was considered limit of detection (LOD) for the RT-qPCR assay. The presence of qPCR inhibitor in the concentrated municipal wastewater RNA sample was subsequently assessed using the PMMoV qPCR assay. RNA templates were added in series of 1 μl, 2 μl and 4 μl. The qPCR reaction was set up as described (Section 2.4). The expression of PMMoV gene was analyzed alongside non-template controls and corresponding amplification efficacy calculated (Nolan et al., 2013) and compared to that of RNA from cultured samples.
Statistical analysis
Average Ct value and standard deviation (SD) was calculated for each sampling points. Student's t-test was used for comparison, and pvalue <0.05 was considered statistically significant.
Detection of SARS-CoV-2
First, we investigated if SARS-CoV-2 were present in the collected samples. Four out of five samples tested positive for SARS-CoV-2 with N-gene primers ( Table 2). The N gene was detected in both replicates of the Stockholm 3 sample, however, for the three remaining positive samples (Stockholm 1, Stockholm 2 and North of Italy 2) it was only detected in one of two technical replicates. This suggests border limit detection in the assay, possibly due to low occurrence of SARS-CoV-2 in the municipal wastewater during the sample collection week or indicative of varying presence of inhibitors. To be noted, in quantitative gene expression analysis, SYBR Green and TaqMan are the two commonly used methods. The one used here, SYBR Green, is cheaper and does not require additional probes (Valasek and Repa, 2005). These are benefits during the pandemic, as there was less shortage of these reagents. This method can be less accurate, but by including melting curve analysis and positive controls, accuracy can be ensured.
Evaluation of the concentration methods
Next, we compared the detection sensitivity taken the different concentration methods into account. We evaluated three viruses, PMMoV, the spiked BCoV, and SARS-CoV-2. As presented in Table 3 and Fig. 1, the detection of virus was highly dependent on the concentration methods used. PMMoV, a well-known potential viral indicator in municipal wastewater (Rosario et al., 2009), was readily detected in all samples and all replicates using Method 1 or 1.A (Ct 20-29, Table 3, Fig. 1A). Method 2 and 2.A detected significantly lower levels of PMMoV in all samples (Fig. 1A). The average Ct values for PMMoV detection were 24.6 ± 2 by Method 1.A; 26.4 ± 2 by Method 1, 32.8 ± 3 by Method 2.A and 34.0 ± 2 by Method 2, as determined by five sampling points. Thus, Method 1.A was the most sensitive method for PMMoV detection. The external reference virus spiked into the municipal wastewater (BCoV) was positive in all replicates using Method 1 and 1.A (Ct 22-27, Table 3). Again, the adsorption and extraction methods 2 (33.5 ± 2) and 2.A (34.0 ± 2) presented lower detection efficiencies. The recovery rate of the spiked virus was further calculated, by comparing the RT-qPCR detection of RNA extracted from equivalent amount of spiked virus. The recovery rate was substantially higher for Method 1 and 1.A, compared to method 2 and 2.A (Fig. 1B). Of note is that the recovery was low (less than 10% in most samples) also by Method 1 and 1. A. The p values of the comparison of each method were calculated and the results showed that method 1 and 1.A have significantly higher efficiency (p-value <0.01) than Method 2 and 2.A for all wastewater viruses. SARS-CoV-2 (N gene) was detected by Method 1 in one sample (Ct: 36.4) and by Method 1.A in three samples (average Ct: 37.5 ± 0.8), whereas detection could not be obtained by Method 2 or 2.A Table 3 Mean amplification cycles of targeted genes for four different methods (Method 1: Ultrafiltration; Method 2-Double Ultrafiltration (Method 1 was modified); Method 3-Adsorption-Extraction; Method 4-Centrifugation combined with adsorption-extraction (Method 3 was modified). White boxes show not tested samples and ND: not detected. ( Fig. 1C). Thus, the centrifugal ultrafilter methods, Method 1 and 1.A, enabled higher recovery rate and virus detection ability.
Evaluation of RT-qPCR inhibitors
As we concluded that Method 1 and 1.A were more sensitive in regards to detection of all three viruses, but still did not detect SARS-CoV-2 in all replicates, we were interested in whether inhibitors affected the detection. We had already added BSA to counter inhibition, but in order to investigate remaining inhibition we calculated qPCR amplification efficiencies for the SARS-CoV-2 and BCoV virus primers used in this set up. We first calculated amplification efficiency using RNA purified from cultured viruses. With a theoretical optimal doubling of DNA molecules for each replication cycle, the amplification efficiency should be 100%. Desired amplification efficiencies range from 90% to 110%. Our calculations showed that amplification of pure SARS-CoV-2 and BCoV virus RNA (not from waste water) resulted in the desired range of 90% and 99.6%, respectively ( Fig. 2A). Further, the limit of detection (LOD) for SARS-CoV2 was found to be 0.12 PFU/reaction and for BCoV 0.045 TCID 50 /reaction. It has to be noted that PFU represent the number of infectious virus particle capable of lysing the host cell and TCID 50 represent the dose that infects 50% of the cells (Dulbecco and Vogt, 1954;Khatib et al., 1980), and they do not directly indicate virus copy number. Next, we calculated amplification efficiencies for PMMoV in two municipal wastewater RNA samples (Stockholm 3 and North of Italy 1) following three virus concentration methods (Method 1, 1.A and 2). This yielded amplification efficiencies of 67% to 116% (Fig. 2). An efficiency below 90% is indicative of non-optimal conditions, such as presence of inhibitors. Method 1 (single ultracentrifugation) or Method 1.A (double centrifugation) generated samples with the highest qPCR efficiencies, whereas the filter paper concentration (Method 2) had the lowest efficiency for both the Stockholm (67.3%) and Italian (85.9%) samples. We conclude that inhibitors appear to impact the amplification using the adsorption -extraction method, which likely contributes to the markedly higher Ct values and resulting lower sensitivity.
Internal or external calibrators
BCoV, which belongs to the Coronavirus genus in the Coronaviridae family, order Nidovirales, has similar genetic and serological properties as well as host range with other mammalian coronaviruses (Valarcher Fig. 3. Internal and external references for comparing virus levels between methods and samples. A. PMMoV detection normalized to input and recovery rate of spiked BCoV. B. SARS-CoV-2 N gene detection normalized to input and internal PMMoV reference. C. SARS-CoV-2 N gene detection normalized to input and recovery rate of spiked BCoV. Student's t-test were used to calculate statistical significance (*p < 0.05, **p > 0.01, ***p < 0.001). and Hägglund, 2010). Therefore, BCoV is classified in the same group of other mammalian coronaviruses, such as rat coronavirus, human enteric coronavirus, and human coronaviruses (Valarcher and Hägglund, 2010). In the current study, BCoV was selected as surrogate to calculate the recovery rate during sample processing and filtration, based on the similar properties. By adding a known amount of BCoV before filtration, we could estimate loss during filtration, RNA extraction, and RT-qPCR analysis. Furthermore, PMMoV is an indicator for fecal contamination in the water sources owing to its global distribution and its presence in various water sources in greater abundance than human pathogenic viruses, without substantial seasonal fluctuations (Kitajima et al., 2018). PMMoV has been used for the detection of pathogenic enteric viruses because of increased concentrations of PMMoV tend to be correlated with increased fecal contamination (Kitajima et al., 2018). In the current study, PMMoV was selected for normalization of external factors such as flow rate of the wastewater, which is changed based on wet and dry season/periods. Such fluctuation would affect the concentration of both PMMoV and SARS-CoV-2 in the samples. After concluding that the methods exhibited varying recovery rates for the spiked BCoV, and that they varied significantly in their detection of PMMoV and SARS-CoV-2 detection, we explored whether the internal or external reference viruses were suitable as calibrators in order to compare virus levels between samples. First, we plotted the level of PMMoV detection in relation to BCoV recovery (Fig. 3A). This showed that normalizing to the spike recovery rate can adjust the comparison to some extent, but not completely. A perfect callibration would result in equal levels of PMMoV in relation to the spike, but this was not achieved. Next, we normalized SARS-CoV-2 detection to PMMoV (Fig. 3B) or BCoV recovery rate (Fig. 3C). Both normalizations indicate that Stockholm 3 had higher levels of SARS-CoV-2 than Stockholm 1 and 2. However, the two normalization strategies rendered different relative values for the Italian sample (higher level in North of Italy 2 than the Stockholm 3 in relation to PMMoV, but lower when normalizing to spike recovery only).
Concluding remarks and recommendations
Since March 2020, there are many ongoing studies relating to the detection of SARS-CoV-2 in municipal wastewater (Ahmed et al., 2020a;Daughton, 2020;Nghiem et al., 2020). However, there is no standardized method. The most important need in wastewater-based epidemiology is to develop or improve and evaluate a standardized and sensitive method for the detection (Kitajima et al., 2020). Ahmed et al. (2020aAhmed et al. ( , 2020b) compared seven concentration methods by using murine hepatitis virus as external reference (surrogate) and found that the best recovery was obtained from MgCl 2 adsorption and vacuum filtration (Method 2 in the current study), in wastewater samples from Australia. This approach is cheaper and easier. However municipal wastewater characteristics might change across different countries, regions, etc. (Pons et al., 2004). The methods may also perform differently depending on type of virus (Lu et al., 2020).
In our study, municipal wastewater samples and SARS-CoV-2 from two different countries (Sweden and Italy) were tested using four different approaches in order to find out the best applicable and sensitive virus concentration method for these conditions. We also tested two reference viruses: 1) PMMoV which naturally exist in municipal wastewater, and 2) spiked animal pathogen BCoV which belongs to same genus as SARS-CoV-2. In the light of our findings: 1-Ultrafiltration and modified ultrafiltration (Method 1 and 1.A) had higher recovery efficiencies than adsorption and extraction methods (Method 2 and 2.A). The latter two appeared to accumulate more PCR inhibitors. Thus, in opposition to the results obtained by Ahmed et al. (2020aAhmed et al. ( , 2020b, the current study showed a better performance of the ultracentrifuge-based methods in terms of recovery efficiency and capacity for viral detection. 2-Double ultrafiltration (Method 1.A) provides a reduced volume of water as starting material for the RNA isolation step, making RNA extraction less laborious and time consuming. Furthermore, detection of both SARS-CoV-2 and PMMoV were generally better with this method. 3-Internal reference virus (PMMoV) was found to be a sensitive and representative standard, and adding external surrogate (BCoV) did not add meaningful additional information. We conclude that PMMoV internal standard is sufficient to inform of relative recovery and to normalize between samples.
CRediT authorship contribution statement
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. | 4,650.8 | 2020-10-14T00:00:00.000 | [
"Engineering"
] |
Research Online Research Online Impact of the MLC on the MRI field distortion of a prototype MRI-linac Impact of the MLC on the MRI field distortion of a prototype MRI-linac
Abstract Purpose: To cope with intrafraction tumor motion, integrated MRI-linac systems for real-time image guidance are currently under development. The multileaf collimator (MLC) is a key component in every state-of-the-art radiotherapy treatment system, allowing for accurate field shaping and tumor tracking. This work quantifies the magnetic impact of a widely used MLC on the MRI field homogeneity for such a modality. Methods: Results: Conclusions: This work shows that the MRI field distortions caused by the MLC cannot be ignored and must be thoroughly investigated for any MRI-linac system. The numeric distortion values obtained for our1.0T magnet may vary for other magnet designs with substantially different fringe fields, however the concept of modest increases in the SID to reduce the distortion to a shimmable level is generally applicable. Purpose: To cope with intrafraction tumor motion, integrated MRI-linac systems for real-time image guidance are currently under development. The multileaf collimator (MLC) is a key component in every state-of-the-art radiotherapy treatment system, allowing for accurate field shaping and tumor tracking. This work quantifies the magnetic impact of a widely used MLC on the MRI field homogeneity for such a modality. Methods: The finite element method was employed to model a MRI-linac assembly comprised of a 1 . 0 T split-bore MRI magnet and the key ferromagnetic components of a Varian Millennium 120 MLC, namely, the leaves and motors. Full 3D magnetic field maps of the system were gen-erated. From these field maps, the peak-to-peak distortion within the MRI imaging volume was evaluated over a 30 cm diameter sphere volume (DSV) around the isocenter and compared to a maximum preshim inhomogeneity of 300 μ T. Five parametric studies were performed: (1) The source-to-isocenter distance (SID) was varied from 100 to 200 cm, to span the range of a compact system to that with lower magnetic coupling. (2) The MLC model was changed from leaves only to leaves with motors, to determine the contribution to the total distortion caused by MLC leaves and motors separately. (3) The system was configured in the inline or perpendicular orientation, i.e., the linac treatment beam was oriented parallel or perpendicular to the magnetic field direction. (4) The treatment field size was varied from 0 × 0 to 20 × 20 cm 2 , to span the range of clinical treatment fields. (5) The coil currents were scaled linearly to produce magnetic field strengths B 0 of 0.5, 1.0, and 1 . 5 T, to estimate how the MLC impact changes with B 0 . Results: (1) The MLC-induced MRI field distortion fell continuously with increasing SID. (2) MLC leaves and motors were found to contribute to the distortion in approximately equal measure. (3) Due to faster falloff of the fringe field, the field distortion was generally smaller in the perpendicular beam orientation. The peak-to-peak DSV distortion was below 300 μ T at SID ≥ 130 cm (perpendicular) and SID ≥ 140 cm (inline) for the 1 . 0 T design. (4) The simulation of different treatment fields was identified to cause dynamic changes in the field distribution. However, the estimated residual distortion was below 1 . 2 mm geometric distortion at SID ≥ 120 cm (perpendicular) and SID ≥ 130 cm (inline) for a 10 mT/m frequency-encoding gradient. (5) Due to magnetic saturation of the MLC materials, the field distortion remained constant at B 0 > 1 . 0 T. Conclusions: This work shows that the MRI field distortions caused by the MLC cannot be ignored and must be thoroughly investigated for any MRI-linac system. The numeric distortion values obtained for our 1 . 0 T magnet may vary for other magnet designs with substantially different fringe fields, however the concept of modest increases in the SID to reduce the distortion to a shimmable level is generally applicable. © 2013 Author(s). All article content, except where otherwise noted, is licensed under a Creative Commons Attribution 3.0 Unported License.
INTRODUCTION
Intrafraction organ motion is one of the major challenges in current radiation therapy treatments. During treatment, both the tumor and organs at risk (OAR) may undergo translation, rotation, and deformation, as is well-established in the literature. [1][2][3][4] In recent years, considerable progress has been made in the field of image-guided radiation therapy (IGRT) to compensate for these effects. 5,6 The term IGRT is broadly defined and includes techniques which allow only pre-or posttreatment imaging as well as such techniques which can provide real-time image guidance during treatment. The focus of this work is solely on the latter techniques which will be referred to as real-time IGRT and their potential for addressing intrafraction organ motion. Despite the wide variety of different real-time IGRT methods, common shortcomings of all methods are the use of ionizing radiation for the imaging, thus contributing extra dose to the patient, and the reliance on internal and/or external surrogates for tracking the tumor motion. In the case of internal surrogates, the implantation of fiducial markers is necessary which is an invasive procedure and not suited for all tumor sites. Furthermore, only the target is tracked, whereas adjacent OARs may also undergo (uncorrelated) motion. The inadequacy of current real-time IGRT techniques to fully address the challenges of intrafraction organ motion has motivated the design of MRI-guided radiotherapy systems as the logical next step (Table I). At present, there are currently two second-generation MRIlinac prototypes being developed (UMC Utrecht 7 and University of Alberta 8 ), a MRI-guided 60 Co radiotherapy system (Viewray 9 ), and our groups own first-generation prototype MRI-linac is under construction (Australia). In these systems, the MRI-based image guidance has a number of advantages compared to existing tumor tracking techniques: MRI is noninvasive, nonionizing, and produces images of superior softtissue contrast. While these characteristics in theory make MRI an ideal modality for image guidance, the integration of MRI device and linear accelerator (linac) creates several technical challenges. These can be grouped into two categories: (1) the influence of the MRI on normal linac operation and (2) the influence of the linac on normal MRI operation. In the former case, several studies have investigated the influence of the MRI field on the electron gun, 10, 11 the waveguide, 12 and the multileaf collimator (MLC). 13 In essence, normal linac operation could be restored with appropriate magnetic shielding 14 or magnetic decoupling of the MRI and linac. 15 In the latter case, various studies exist which looked at the ability to take MRI images of phantoms while during linac irradiation 7,16 and investigated the effect of the radiation and RF noise from the linac on the gradient RF coils. 17,18 Recently, a proof-ofconcept study for tracking of a 1D pencil-beam navigator 19 and a study on tracking of phantom motion on 2D MRI images 20 have demonstrated that image acquisition is possible with MRI-linac prototypes incorporating a MLC.
However, to the best of our knowledge, the magnetic impact of a ferromagnetic MLC has not been studied and reported in the literature for its impact on the MRI field distortion inside a MRI-linac system. For our first-generation MRI-linac system being constructed at the Liverpool Hospital (Sydney, Australia), a Varian Millenium 120 leaf MLC will be used as the final beam collimation method. Although all ferromagnetic parts of the linac are expected to induce some kind of distortion in the MRI imaging volume, a number of reasons suggest to start with simulating the MLC impact. First, the MLC will be the closest ferromagnetic component to the MRI, and therefore experience the strongest magnetic field. Second, unlike for steel parts, the magnetic properties of the tungsten-alloy MLC leaves have not been investigated before and hence acquiring this information will be invaluable. Third, we consider replacing the MLC with a nonferromagnetic version a difficult task. The ferromagnetic binders in the tungsten alloy are essential in the manufacturing process to improve machinability of the leaves, whereas the function of other linac steel parts is mostly structural, i.e., replacing them is simpler and will be done regardless.
In this work, we characterize the impact of a widely used MLC on the field homogeneity of a split-bore MRI magnet suitable for MRI-Linac systems as a function of sourceto-isocenter distance, implemented MLC components, linac beam orientation, treatment field size, and magnetic field strength.
2.A.1. Magnet model
A 1.0 T split-bore MRI magnet was modeled in COMSOL Multiphysics TM (Version 4.2a). The magnet model used was that of the design for the Australian MRI-linac prototype being constructed by Agilent Technologies. The magnet is essentially comprised of an actively shielded superconducting 82 cm diameter bore magnet wound in a split-pair configuration. The bore aperture, which is the gap between the two halves of the split-bore magnet, is 50 cm. A key design aspect was to allow two possible linac beam orientations with respect to the MRI magnetic field. In the inline configuration, the treatment beam is oriented parallel to the magnetic field direction; in the perpendicular orientation, the beam is perpendicular to the magnetic field direction. The manufacturer specification for the imaging field of the shimmed magnet is a uniformity in B z of <1 μT and <10 μT over a 20 and 30 cm diameter sphere volume (DSV), respectively. The model is represented in COMSOL by its coil configuration and the values for the coil currents were defined in external current density nodes according to the manufacturer specifications. Nonferromagnetic hardware components such as the gradient coils and cryostat were not included in the model. A virtual cylindrical air enclosure with a diameter of 20 m and a length of 20 m along the z axis was used to surround the device for the definition of boundary conditions. At this distance, magnetic insulation n · B = 0 was enforced, i.e., the assumption that the component of the magnetic field normal to the boundary will have fallen to zero. To investigate the impact of the MLC for different magnetic field strengths, the 1.0 T coil currents were linearly scaled to achieve 0.5 and 1.5 T systems. Although this approach is unlikely to produce optimal fringe fields at these field strengths, it was employed in order to only change one variable at a time. The results at these field strengths should be considered as upper limits for the real MLC induced distortion; with a magnet design optimized by a magnet vendor to produce best possible fringe fields at 0.5 and 1.5 T, lower field distortions could potentially be obtained. However, as there are no readily available split-bore magnet designs at 0.5 and 1.5 T, the linear scaling approach gives a first-order estimate of how the magnetic impact of the MLC changes with field strength.
2.A.2. MLC model
A Varian Millennium 120 MLC (Varian Medical Systems, Palo Alto) was incorporated into the magnet model. Positioned as a tertiary system below the lower jaws, the centroid of the Varian MLC is at a distance of 50.8 cm from the radiation source. This distance varies across vendors, with typically 33.6 cm for Elekta MLCs (positioned as upper jaw replacement) and 33.2 cm for Siemens MLCs (positioned as lower jaw replacement). 21 Hence, note that for the same source-to-isocenter distance (SID) the Varian MLC is positioned around 17 cm closer to the isocenter than MLCs of the other two vendors. These other MLC devices will also possess different geometry, materials, and masses. In this work, the focus is on the Varian Millennium 120 MLC which will be used for our first-generation prototype setup. However, the details about geometry, materials, and masses given below will allow to roughly estimate the impact of other MLC devices.
Only the key ferromagnetic components of the MLC were modeled, namely, the MLC leaves and motors. The MLC leaves are made from a sintered heavy tungsten alloy, whereas the DC brushed MLC motors comprise of steel casings and drive screws as well as neodymium-iron-boron (Nd-FeB) rare-earth magnets. To keep simulation of the combined model of MRI magnet and MLC practical, a range of simplifications were made, as shown in Fig. 1. For instance, fine geometric details of the MLC leaves such as rounded leaf tips, steps, and rails were neglected. Interleaf air gaps between adjacent leaves were set to zero and the leaves fused to a single solid to facilitate the meshing of the MLC leaf banks. Fusing of the leaves was the last step in building the MLC model, thereby allowing individual positioning of the leaves beforehand which is needed to simulate the delivery of different treatment fields. Furthermore, the permanent magnets of the motors were not included in the model after a preliminary simulation study confirmed that their impact on the MRI field was of negligible order, i.e., their contribution to the total field inhomogeneity was <1 %. Due to their high complexity, the MLC motors and the drive screws were represented by two blocks of their equivalent ferromagnetic mass and the true distribution of ferromagnetic material in space was approximated with the help of inner air cavities [ Fig. 1 (b)]. The outer dimensions of the motor block were 6 × 20 × 6 cm 3 , whereas the drive-screw block was 14 × 17 × 3 cm 3 . The dimensions of the inner air cavities were scaled such that all walls of both steel blocks were 0.5 cm thick. In total, the model contained 68 kg of heavy tungsten alloy and 4 kg of steel. Compared to the results of manual measurements on a decommissioned MLC, the model intentionally overestimated the real mass of the MLC components by a safety margin of 7 %. To justify the usage of the mass-equivalent approach, the simplified model of MLC motors and drive screws was compared with a more realistic model of 60 individual motors and drive screws, comprising one half of the MLC (20 full-leaf and 40 halfleaf motors). The latter model implemented the single motors and drive screws as solid structures very similar to Fig. 1 (a) but with squared instead of circular cross sections, to achieve better meshing and faster convergence. The 20 full-leaf and 40 half-leaf motors had approximate ferromagnetic masses of 45 and 30 g, respectively, in the aggregate matching the steel mass of the mass-equivalent model. For this simulation, the two different models of MLC motors and drive screws were placed in a uniform background field of 1.0 T and the agreement of the resulting magnetic field distributions was assessed locally and in the far-field regime.
2.B. Magnetization (BH) curves
The BH curves used for the simulations are displayed in Fig. 2. The BH curve for the MLC steel parts was based on the curve for 1010 steel as reported in the literature, 22,23 whereas the BH curve for the heavy tungsten alloy was measured experimentally from a sample cut from a decommissioned MLC leaf. The exact elemental composition of this material is confidential, however it is expected to be similar to the various heavy tungsten alloy grades typically used for radiation shielding which contain <10% total of a combination of copper, nickel, and iron binders. These additions act to aid the sintering process and machinability.
A superconducting quantum interference detector (SQUID) magnetometer (Magnetic Property Measurement System 5XL, Quantum Design) was used to determine the BH curve of the heavy tungsten alloy. The measurements were carried out at a temperature of 300 K. Starting with a fully demagnetised sample, the magnetometer measured the magnetic moment induced in the sample of the heavy tungsten alloy as a function of applied external field H in the range of (0-8) × 10 5 A/m. The sensitivity of the magnetometer was 10 −5 A/m. Data points were acquired with a stepwidth of 4 × 10 3 A/m in the low-field range from 0 to 8 × 10 4 A/m; above 8 × 10 4 A/m, the external field H was increased in bigger steps of 36 × 10 3 A/m. The magnetic moment was normalized by the sample volume, yielding the volume-independent magnetisation M. For cross-calibration purposes, three samples of the dimensions 3 × 3 × 3, 3 × 3 × 4, and 3 × 3 × 6 mm 3 were measured. Then, the magnetic flux density B was derived according to the fundamental relation (1)
2.C. Simulations
COMSOL Multiphysics was used to implement the full 3D models of the MRI magnet and MLC components described in Sec. 2.A. Simulations were set up using the magnetic fields (mf) interface which is part of the AC/DC Physics module.
With the main MRI coil currents being steady over time, the problem could be solved as a magnetostatic problem. Hence, a stationary solver with the magnetic vector potential A as the solution variable was chosen. Using the iterative FGM-RES solver with the COMSOL default settings, the solution was numerically approximated on the basis of the applicable Maxwell's equations, namely, ∇ · B = 0 and ∇ × H = J , where J stands for the electric current densities in the MRI coils. A relative error below 0.001 was defined as the convergence criterion, at which the software terminated the computation and returned a solution.
The nonlinear magnetic permeability of the ferromagnetic materials was incorporated into the COMSOL solution via their respective BH curves (Fig. 2) added under the Material Properties node.
The primary quantity of interest for the data analysis is the magnetic field B which was automatically derived within COMSOL from the magnetic vector potential A according to the relation B = ∇ × A.
The finite element method (FEM) mesh used to discretize the geometry was gradually refined until mesh independence was reached for the computed solution. This point was defined by the criterion that further increases in the mesh resolution did not improve the accuracy of the MRI field uniformity evaluated in the 30 cm DSV imaging volume. The final mesh contained a total of 16 × 10 6 mesh elements, of which 12 × 10 6 elements were inside a volume of 3 × 3 m (diameter) symmetric cylinder surrounding the MRI coils. The maximum element size within the 30 cm DSV was set to 1.0 cm, giving rise to 2.5 × 10 6 elements inside the DSV. For the MLC components, the minimum and maximum element sizes were 0.01 and 1.0 cm, respectively; the 70 × 25 × 10 cm 3 block volume encompassing the MLC structures contained 0.5 × 10 6 elements.
When solved, a simulation of the bare MRI magnet took around 20 h on 12, 2.6 GHz AMD cores. Adding the MLC leaf banks to the model increased the solution time to around 30 h on the same number of cores; for the full model including the mass-equivalent MLC motors and drive screws, the solution time went further up to around 52 h. The steep increase in solution time is due to the nonlinearity of the solving process for ferromagnetic objects. The RAM required per simulation was in the range of 180-250 GB.
In both the inline and perpendicular configuration, simulations were performed for a range of SIDs as the principal parameter of investigation. Starting from a SID of 100 cm, which is typically used in modern radiotherapy treatment systems, the SID was gradually increased in steps of 5 cm up to a maximum value of 200 cm, thus moving the MLC further away from the MRI magnet.
At each SID, only the MLC leaves were implemented in a first simulation, before the simulation was solved again for the model incorporating both MLC leaves and motors (including the drive screws).
The simulations were repeated at three different magnetic field strengths B 0 of 0.5, 1.0, and 1.5 T. Furthermore, to investigate whether or not active shimming techniques would be necessary, variations in the MRI field homogeneity for different treatment field sizes were studied by simulating field sizes of 0 × 0, 5 × 5, 10 × 10, 15 × 15, and 20 × 20 cm 2 . Note that the MLC aperture required to achieve a given field size at the isocenter decreases as the MLC and linac are positioned at larger SID. Therefore, the MLC aperture was scaled inversely with increasing SID to keep the field size constant.
2.D. Data analysis
The magnetic field inhomogeneity is typically stated as peak-to-peak distortion over the MRI imaging volume. The shimming for the Australian MRI-linac will be performed by the University of Queensland. Based on recent work, the criterion of 300 μT distortion over a 30 cm DSV has been adopted as the maximum preshim inhomogeneity in the 1.0 T system. 24 For each magnetic field simulation, the inhomogeneity in the resultant magnetic flux density B was hence analyzed on the surface of the 30 cm DSV around the isocenter in a spherical coordinate system. 28 322 data points were taken on the DSV surface with an angular resolution of φ = 1.5 • and θ = 1.5 • . The magnetic field vectors were dominated by the B z component as the static magnetic field was applied along the z axis. The concomitant B x and B y components were close to zero within the DSV, i.e., below 10 −6 T, and are generally not considered in shimming. Therefore, the field inhomogeneity was quantified for the B z component as peak-to-peak distortion, i.e., as the absolute difference (in μT) between the maximum and minimum value of B z on the DSV surface according to
3.A.1. Model of MRI magnet
Figure 3(a) shows a magnetic field magnitude (| B|) plot through the magnet center for the 1.0 T MRI system as obtained by the manufacturer (fill plot). Overlaid on this image is a contour line plot from our COMSOL model. Two lowfield regions are also clearly identified (dashed boxes) and are where the linac and MLC will reside in either the inline or perpendicular configuration. In this plot, regions with a magnitude below 0.06 or above 2.0 T are shown as white. An excellent agreement is seen between the contour and fill plots at selected values between 0.06 and 2.0 T. Only the 1.0 T contour line does not exactly match the Agilent field at the center of the magnet because the mean B z value within the DSV is 0.999782 T, which is 218 μT lower than 1.0 T. The fact that this is not exactly 1.0 T is not important as all coil currents can be scaled accordingly to get exactly 1.0 T. This procedure is essentially what is done after installation of a MRI system inside a building to correct for magnetization of the surrounding ferromagnetic objects once operational. In our modeling results, we have not scaled the coil currents to get a mean B z of 1.0 T in the DSV, but decided to keep the coil currents identical to the manufacturer specifications. In Fig. 3(b), the spectrum of B z values within the MRI imaging volume obtained for the COMSOL model are displayed. For a 30 cm DSV, the field distortion is 6.8 μT; for the 20 cm DSV, the spread is 1.5 μT. This matches the manufacturer specification for the shimmed magnet of 10 μT over the 30 cm DSV, however is slightly off at 20 cm DSV compared with the specification of 1 μT. Note that a match at the 20 cm DSV can be achieved by further refining the mesh. However, as the distortion is exclusively evaluated over the larger 30 cm DSV throughout this work and the inhomogeneity is higher from 20 to 30 cm than from 0 to 20 cm, increasing the number of mesh elements was not pursued.
It is clear from Fig. 3 that an accurate model of our MRI system has been developed inside COMSOL which matches the manufacturer specifications. Together with the accurate measurement of the BH curve as described in Sec. 2.B, this gives us the ability to predict the impact of the ferromagnetic MLC components on the DSV field homogeneity with high confidence.
3.A.2. Model of MLC motors
In a uniform background field of 1.0 T (in B z direction), the simplified motor model of mass-equivalent blocks was compared with a model of 60 square steel motors. Comparing both models, Fig. 4(a) shows the resultant magnetic field obtained when the MLC motors and their drive screws were placed in this background field. The local field in the proximity of the MLC motors is clearly different for the two models since the motor geometry and the distribution of steel strongly influence the field characteristics in this region. However, with increasing distance from the MLC motors, the field distributions become gradually more similar and are in good agreement in the region in which the field distortion is determined for SIDs in the range of 100-200 cm. Evaluated over a virtual 30 cm DSV, Fig. 4 (b) compares the field distortions for both models as a function of distance. At all distances, the simplified model of mass-equivalent blocks with inner air cavities produces a higher inhomogeneity than the model of 60 square motors, with a maximum difference of 8% at 100 cm SID. Hence, our simplifications can be considered as giving an upper limit for the real field distortion; the difference is below 3% for SID ≥ 120 cm. The mass-equivalent approach was applied throughout the remainder of this work, keeping solving of the simulations feasible.
3.B. MLC bank and motors
For qualitative assessment, Fig. 5 z direction (y direction). The MLC is implemented in a zero-treatment-field configuration, i.e., the MLC banks are completely closed. This scenario is of particular interest as it represents the default MLC configuration before and after treatment. Clearly, the MLC distorts the field homogeneity within the DSV, gradually lifting B z across the DSV. Qualitatively, the impact of the MLC banks on the MRI field drops continuously with increasing SID. Figure 6 extends on these observations showing the quantitative results for both treatment beam orientations. MLC leaves and motors contribute similar orders of magnitude to the total field distortion: The 4 kg of 1010 steel exert approximately the same but slightly higher impact on the DSV field inhomogeneity than the 68 kg of the heavy tungsten alloy comprising the MLC leaves. The total B z caused by MLC leaves and motors (dotted curves in Fig. 6) drops below the preshim threshold of 300 μT at 140 cm SID and at 130 cm SID in the inline and perpendicular orientation, respectively. This means that, with respect to the typically used SID of 100 cm, the entire radiotherapy treatment unit must be moved further away from the isocenter by at least 40 cm (inline) or 30 cm (perpendicular). Comparing the two orientations, the distortion is generally smaller in the perpendicular orientation up to 160 cm SID due to the faster falloff of the fringe field along the y axis [see Fig. 3(a)]. As a consequence of the faster field falloff, B z drops further below zero in the perpendicular orientation. This higher magnetic fringe field (relative to the inline orientation) gives rise to a slightly higher distortion at SID larger than 160 cm. However, in this SID region, the B z values are well below 300 μT and therefore uncritical from a shimming perspective in both beam orientations. Worth noting is the particularly low B z at 145 cm SID in the perpendicular orientation due to the positioning of the MLC in the low-field region around the zero crossing at 95 cm from the isocenter in y direction for this SID [see Fig. 3(a)]. Figure 7 displays the effect of varying field sizes on the peak-to-peak distortion at a magnetic field strength B 0 of 1.0 T. The plots illustrate that different treatment fields change the distortion patterns during treatment to some extent. The shown changes are solely caused by repositioning of the MLC leaves. MLC motors were neglected in this scenario as they remain stationary during treatment, meaning their unchanged contribution can be shimmed by appropriate passive shimming.
3.C. Treatment field size
At any given SID, B z is maximum for the 0 × 0 cm 2 field and continuously decreases with increasing field size. As a general trend, the difference in distortion with field size becomes less pronounced for larger SID. This shows that the geometric details of the distribution of ferromagnetic material, such as the exact MLC leaf positions, lose importance with larger distance from the isocenter.
In the following, an optimized passive shim set is assumed for the 10 × 10 cm 2 field, being in the middle of the spectrum of simulated field sizes. We evaluated in which scenarios this passive shim set produced a sufficiently uniform DSV at the other field sizes, i.e., lead to negligible residual distortions. In the cases where the residual distortion introduced by other field sizes is of an order that cannot be neglected, active shimming is needed to address this extra distortion. In practice, the optimized passive shim set will not completely null the field inhomogeneity but realistically result in a remaining distortion of about 10 μT for the 10 × 10 cm 2 field. This estimate is in conformity with the manufacturer specification for the DSV field uniformity of the shimmed 1.0 T magnet (Sec. 2.A.1) and has to be considered together with the residual distortion at other field sizes.
To determine the residual distortion, full 3D analysis of the distortion pattern is required rather than looking at the peakto-peak distortion. This is necessary as any specific value for the peak-to-peak distortion can be produced by innumerable different 3D field distributions, each of which needs to be shimmed with a different passive shim set. Thus, for a 3D analysis, the spatial difference in the magnetic field component B z (x, y, z) is calculated for each field at all SIDs with respect to the 10 × 10 cm 2 reference field and the maximum value within the DSV determined (Table II).
In general, the results for the maximum spatial difference in Table II display the same trends with regards to SID and field size as the peak-to-peak distortion in Fig. 7. The corresponding geometric distortion depends on the strength of the applied frequency-encoding gradient. The proposed 1.0 T system will operate with a gradient strength on the order of 10 mT/m. This means that the 10 μT distortion after passive shimming together with a maximum spatial difference of 2 μT (marked in italics in Table II) gives rise to 1.2 mm geometric distortion over the 30 cm DSV, which is considered acceptable for our purposes. However, under the assumption of statistical independence, summing the two contributions in quadrature would allow a maximum spatial difference of 6 μT for the same total geometric distortion (bold). Differences >6 μT (underlined) would require the implementation of active shimming to restore MRI image quality.
Note that the gradient strength may vary in practice. Depending on which particular MRI acquisition sequence is used, the effective gradient strength could be lower. The geometric distortion is proportional to the inverse of the gradient strength and would hence be higher for smaller gradients.
For field sizes up to 20 × 20 cm 2 , the 6 μT criterion is met for SID ≥ 120 cm (perpendicular) and SID ≥ 130 cm (inline). In Sec. 3.B, the closest realizable SIDs meeting the 300 μT criterion were found to be 130 cm (perpendicular) and 140 cm (inline). For these SIDs, the sole use of passive TABLE II. Maximum spatial difference in B z (μT) within 30 cm DSV for various field sizes with respect to 10 × 10 cm 2 field and B 0 of 1.0 T in (a) inline and (b) perpendicular orientation. Differences are classified as ≤ 2 μT (italics), ≤ 6 μT (bold), and > 6 μT (underlined). Based on the use of a 10 mT/m frequencyencoding gradient, these limits together with 10 μT remaining distortion after passive shimming correspond to geometric distortions of ≤ 1.2 mm, when summed linearly (italics) or in quadrature (bold), and > 1.2 mm (underlined). Geometric distortions up to 1.2 mm can be tolerated for our purposes. Thus, passive shimming may be sufficient for SID ≥ 120 cm (perpendicular) and SID ≥ 130 cm (inline). shimming is sufficient according to Table II. Thus, the implementation of active shimming techniques can be avoided. Figure 8 displays the peak-to-peak distortion versus the SID for magnetic field strengths B 0 of 0.5, 1.0, and 1.5 T. The model contained all MLC components, with the MLC bank in a completely closed configuration (0 × 0 cm 2 field size). Other field sizes were not considered here; the distortion values are absolute and not normalized to a 10 × 10 cm 2 reference field size as in Sec. 3.C. The plots clearly show that higher magnetic field strength B 0 generally increases the field distortion introduced by the MLC. For example, the distortion is the lowest for 0.5 T, with the 300 μT criterion being met for a 5 cm smaller SID compared with strengths of 1.0 T and 1.5 T in both orientations. However, more interesting here is the onset of magnetic saturation becoming obvious in the data at 1.0 and 1.5 T. The saturation can be explained by a closer examination of the magnetic properties of the im-plemented ferromagnetic materials (Fig. 2). The heavy tungsten alloy comprising the MLC leaves is saturated well below 0.5 T and hence contributes a similar absolute distortion for all three examined field strengths. The BH curve of the 1010 steel changes from positive to negative curvature between 0.5 and 1.0 T. In consequence of operating in a region of negative curvature, approximately the same absolute distortion is produced above 1.0 T.
3.D. Magnetic field strength
From an image guidance point of view, higher magnetic field strength is desirable due to improved signal-to-noise ratio. Therefore, higher B 0 could provide better image quality. With this in mind, image quality could be gained without rendering the shimming of the MRI magnet in the presence of the MLC more difficult at B 0 > 1.0 T. However, a variety of problems associated with higher B 0 such as a failure of the MLC motors, 13 electron gun operation, 10 or a more severe electron return effect 25 would have to be overcome. Furthermore, the presented results were derived from scaled coil currents (see Sec. 2.A.1). The applicability of this to a technically feasible model of 1.5 T or higher field strength would have to be investigated in future work.
CONCLUSION
In this work, the finite element method was used to predict the magnetic impact of the Varian Millennium 120 MLC on the DSV field homogeneity for a prototype MRI-linac system. The presented studies showed that the MRI field distortion caused by the MLC cannot be ignored and must be thoroughly investigated for any MRI-linac system. In cases where the field distortion is found to be problematic, increases in the SID can be used to reduce the distortion to an acceptable level, meeting the preshim inhomogeneity threshold of 300 μT and limiting the geometric distortion to <1.2 mm after passive shimming. For our particular 1.0 T magnet design, this was achieved at a SID of 130 cm (perpendicular) or 140 cm (inline). Although the numeric results may vary for other magnet designs due to very different magnetic fringe fields, the concept of modest increases in the SID to reduce the distortion to a shimmable level is generally applicable. | 8,292.2 | 2013-12-01T00:00:00.000 | [
"Engineering",
"Medicine",
"Physics"
] |
A simple macro-scale artificial lateral line sensor for the detection of shed vortices
Underwater robot sensing is challenging due to the complex and noisy nature of the environment. The lateral line system in fish allows them to robustly sense their surroundings, even in turbid and turbulent environments, allowing them to perform tasks such as shoaling or foraging. Taking inspiration from the lateral line system in fish to design robot sensors could help to power underwater robots in inspection, exploration, or environmental monitoring tasks. Previous studies have designed systems that mimic both the design and the configuration of the lateral line and neuromasts, but at high cost or using complex procedures. Here, we present a simple, low cost, bio-inspired sensor, that can detect passing vortices shed from surrounding obstacles or upstream fish or robots. We demonstrate the importance of the design elements used, and show a minimum 20% reduction in residual error over sensors lacking these elements. Results were validated in reality using a prototype of the artificial lateral line sensor. These results mark an important step in providing alternate methods of control in underwater vehicles that are simultaneously inexpensive and simple to manufacture.
Introduction
Complex fluid dynamics in underwater environments make it difficult to design sensors that are effective at detecting and interpreting the surroundings of underwater robots. As a result, many of the underwater robotic platforms in use at the moment require the use of tethers and an operator to complete the sensory tasks required of them [1,2]. Other platforms are able to use visual processing techniques to effectively navigate autonomously, but in areas with low light or high turbidity this is more difficult [3]. Additional challenges exist when attempting to form swarms of underwater robots as they must also be able to sense and communicate with each other [4][5][6][7]. However, the benefits that swarms can bring are numerous, including increases in fault tolerance, highly beneficial in the extreme ocean environment, and parallel processing capacity, useful given the ocean's vast area [8]. Applications could then include underwater inspection, search and rescue, exploration, or environmental monitoring [9,10].
To help deal with these problems, we turn to nature for inspiration. Many of the creatures found in the sea have adapted their senses for better use underwater and even developed entirely new ones. An adaptation of the auditory systems in many cetaceans allows them the use of sonar to aid in navigation [11], while fish like the Peters' elephantnose fish [12] and the black ghost knifefish [13] are able to actively generate an electric field for the same purpose. Another sense that teleost fishes posses is the lateral line [14][15][16][17]. The lateral line is comprised of two types of sensory units, superficial neuromasts and canal neuromasts [15,16,18], that detect the surrounding flow velocities and accelerations [18,19]. Neuromasts are small hair-like structures that either exist on the skin (superficial neuromast) or in a system of canals that sit beneath the skin (canal neuromast); these sub-dermal canals have small openings, pores, that allow the canal neuromasts to gather information from their surroundings [14,20]. It has been shown that some fish are able to navigate, hunt and shoal by relying solely on the lateral line [20][21][22][23][24][25][26][27]. The lack of light available to cave dwelling fish and the issues of scattering and absorption that autonomous underwater vehicles (AUVs) at depth or in highly turbid water experience [28,29] share some overlap, and the cave dwellers rely heavily on their lateral lines to overcome this [25,26]. As such, a new type of sensory suite inspired by the lateral line seems a prudent way to overcome these issues in AUVs too.
A number of biologically inspired artificial lateral line sensors do exist and have been shown to be effective [30][31][32][33][34][35][36][37][38], but they often use micro-mechanicalelectro systems (MEMS) which can be difficult to manufacture [39][40][41]. While many MEMS processes are batch processes, and as such they are able to produce large numbers of sensors for relatively low cost, the complexity of the manufacturing processes involved can require specialist equipment. Given that the ultimate aim of this project is to bring the artificial lateral line to underwater robotic swarms, a complex design or manufacturing process, with its complex manufacturing equipment, will increase both the time and the cost required to reach the final product. The large scale of the ocean environment dictates that many agents are required for any swarm operating there to be effective and as such all efforts must be made to reduce these, otherwise the swarm will require too many resources to ever be feasible. A number of other systems exist that are able to combine off-the-shelf sensing units, particularly pressure sensors, into an effective artificial lateral line that is less complex than the systems above [42][43][44][45]. However, even these systems require the use and subsequent coordination of multiple pressure sensors for effective flow sensing, which can again drive up cost. Further work is needed to create a novel design of very simple sensor that can operate effectively without requiring multiple instances of itself.
Here, we propose a new design for a simple and low-cost artificial lateral line system composed of a neuromast and bio-inspired canal structure. Our design differs from other more conventional canal neuromast designs due to its u-bend shaped front and rear facing pores; this helps filter background flow more effectively. These design changes are introduced as a result of study by the authors looking at how variations in the shape of the lateral line affect functionality [46]. Additionally, it is macro scale in size which makes it significantly easier to manufacture. In fluid dynamics, varying scales can drastically alter flow properties, and as a result much of the work that has come before this has stayed true to the expected biological scales. However, this project wanted to test if macroscopic artificial lateral lines could also function effectively. A potential risk of this design choice was that the larger sensor body could alter the flow field, but as we demonstrate here, the differences, if any, are not enough to prevent the sensor from functioning properly. We also deem the benefits of being able to manufacture this sensor using mostly 3D printing and without the need for special equipment to outweigh the downsides. The design also differs from previous work by using a highly flexible membrane as a fulcrum about which a stiff element will rotate, transferring force from the fluid within the sensor to the other side of the membrane. The sensor is designed to detect shed vortices, which are often formed by the swimming motions of upstream fish (natural or robotic), or obstacles. Being able to detect the vortices shed behind obstacles would give an AUV improved environmental awareness, while detecting swimming neighbours can aid in navigation and coordination. We highlight the ability of this novel canal lateral line sensor to filter background flow and its sensitivity to the shed vortices left in the wake of swimming neighbours, building on some preliminary work. We demonstrate in simulation that the sensor can detect shedding vortex patterns emanating from a cylinder used to mimic an upstream fish or an obstacle [47][48][49]. We then justify the design specifications by comparing our sensor to variations reflecting each design decision. Results show the importance of all design decisions in improving similarity between the signal detected by the sensor and the original signal. The end result showed a 20% improvement over the previous one [50]. Based on these results, we produced a physical sensor using off-the-shelf low cost parts and 3D printing and demonstrated its ability to sense shed vortices in water.
Optimising the lateral line
Our previous work has detailed an initial investigation into the use of a canal structure to filter background flow and into the effects of varying pore size on sensitivity [50]. The lateral line structure in fish resembles a long tube with a number of openings along a roughly 1D line on one side. The pores allow information from the external fluid flow to be transferred to the fluid within the canal where it can be interpreted by the fish. The sensor designed in the previous work is meant to represent a closed section of this canal, and used a cuboid with two circular holes on the same face (to represent pores), and a hair cell within the sensor on the internal face opposite these holes. Flow velocities were measured within the sensor and also within the flow field in the absence of a sensor. Effectiveness was measured by finding the detected (the flow velocities within the sensor) and expected (flow velocities in open flow) time series' residuals through Euclidean distance measures. Data from this showed that the canal structure was effective at filtering background flow and that increasing pore size increases internal flow speed, in turn resulting in increased sensitivity. However, signal integrity was not well maintained.
From the work by Scott et al, we hypothesised that three key design changes were needed to reduce residual error, namely the use of circular channels, the use of rear-facing pores, and the use of increased spacing between pores [50]. These changes stemmed from observations made in simulations with two major issues being noted: that passing vortices were not affecting pores in the expected way, and that turbulence was being established within the sensor and sometimes persisting after the vortex was gone. The change from a square canal design to a circular canal design was to help reduce the internal turbulence and was inspired by both biology, the canals seen in the lateral line not being square in shape [51], and aerodynamics, where flow around corners in square ducts are associated with high turbulence and vorticity [52,53]. The expected mechanism of the original sensor was that the passing vortex would move flow in the downstream pore and then back out of the upstream pore, but this was not always the case, as vortices often affected only the upstream pore or both pores at the same time. The use of a rear facing pore was also inspired by biology, with the channels that lead from pore to sub-dermal canal tending to face backwards and with scales as well often occluding the pore's upstream side [51], and while there is limited data on how the angle of the canal affects how flow and turbulence is interpreted, we hypothesise it will help to reduce interference from background flow. The increased distance between pores was introduced to help prevent passing vortices affecting both pores at the same time and ensure that any vortex was being detected only by the desired pore.
To verify that the design changes improved the sensor, additional designs, with one of these elements missing or altered, were created and compared to the optimised sensor. AutoDesk fusion was used to create all designs.
Artificial lateral line design
Inspired by its natural counterpart, our optimised artificial lateral line sensor has a canal, two pores, and one neuromast embedded in the canal. The sensor is formed of a long curved tube with the bottom shorter part dedicated to detecting the vortex and hosting the artificial neuromast hair, while the longer part acts as the canal and neutral pressure reservoir ( figure 1(a)). This neutral pressure space then allows a pressure differential to be created when the negative pressure at the centre of a vortex passes the mouth, resulting in fluid accelerating out of the canal and deflecting the artificial neuromast (figure 1(c)). The design hinges around the idea of using a stiff rod to act as the neuromast, with half of this rod inside the canal and half outside, and a thin elastic membrane, positioned at the midpoint of the rod, to act as a fulcrum ( figure 1(b)). In this way, fluid motion within the sensor causes the external end of the rod to exhibit an equal and opposite motion. The highly elastic membrane offers almost no resistance, allowing the neuromast to swing freely, and removing the need to simulate a neuromast; fluid velocity is taken as a proxy for neuromast deflection as with minimal resistance the two will be very closely related. Dimensions were chosen to be easy to produce using 3D printing and off-the-shelf components: 140 mm long, 39 mm wide and 32 mm in height ( figure 1(b)). The sensor is meant to align with the fluid direction, with pores opening in the opposite direction. This prevents interference on the neuromast due to water displacement (e.g. river flow) or robot motion from impacting the sensory readout (figure 1(c)).
It is worth noting the difference in scale between this design and that of the biological lateral line: with changing scales, the associated Reynolds number will also change and with it, the flow properties of the system as a whole. The Reynolds number are calculated using: Within the sensor, ρ = 1 for the density of water, μ = 1 for the dynamic viscosity of water at 18 degrees, V = 0.0025 m s −1 as the maximum speed of water flow within the sensor, taken from data extracted from simulations, and D as 15 mm from channel diameter. As flow velocity within the sensor varies with time, this gives: 0 < Re < 375. Externally: 10 000 < Re < 70 000.
With water density and dynamic viscosity being maintained, but using V = 0.5 m s −1 and with D being either 20 mm or 140 mm, referring to the lengths of the short and long canals respectively.
Simulation-based experiments
To predict the expected sensory readout from the artificial lateral line and optimise its design, we simulate an environment with flow (due to robot motion or water motion), and a cylinder to generate vortices in lieu of an upstream fish, robot or obstacle. Simulations were created with OpenFOAM at 0.5 m s −1 . The simulated area was either 400 or 500 mm (depending on the length of the sensor in the simulation) by 200 mm by 250 mm, using 800 103 cells. A cylinder of 100 mm diameter was included as a vortex generator. Our sensor was placed 200 mm behind the cylinder and 40 mm to the side of the cylinder's centre line, as analysis of flow behind the cylinder indicated that this point has the maximum flow velocity variation, and hence is the point from which the most information can be extracted [50]. Simulations used the semi-implicit method for pressure linked equations algorithm coupled with a Reynolds-averaged Navier-Stokes equations to get a steady-state approximation of the Kármán vortex street. The SST k-omega model is used for turbulence calculations. Data was extracted along a line in the vertical plane that extended between the two internal walls of the sensor. The data was extracted in the form of x velocities along the length of the line, which was then averaged. No neuromast is simulated as in the experiment the combination of a stiff hair coupled with a highly flexible membrane as a base makes the neuromast extremely sensitive to velocities, and as such the measured velocity can be considered a proxy for neuromast deflection. Experiments are performed using the computational fluid dynamics simulator OpenFOAM. Data was visualised using ParaFOAM, the post-processing element of the Open-FOAM software, but additional analysis was done in MATLAB: velocity data from each point in the mesh was exported to an excel file, which was then imported into MATLAB.
Signal analysis
Signal processing analysis was done to determine which of the sensor designs was most effective at both detecting vortices and retaining signal integrity. For each design, Euclidean distance measures are used to compare the residual error between the time series of detected velocities inside the sensor and the time series of expected velocities in the absence of the where v p and v q are the timeseries p and q, and d is the total residual between them. Time series are normalised (with mu = 0 and std = 1) before residuals are calculated to make comparison between residuals easier: where v p is the time series,v p is the normalised time series, v p is the mean of the time series and σ is the standard deviation of the time series.
The design with the lowest residual is then said to be the most optimised. Identical signals will have a residual of zero. For reference, comparison with the inverse of the expected time series (v * p (−1)) gives a residual of 0.0282. Stl files were exported from AutoDesk and into OpenFOAM using the snappyhexmesh capability. A water flow of 0.5 m s −1 was used as a realistic speed for an AUV [54], and the simulation was allowed to run for 400 s to allow it to reach a steady state solution.
Artificial lateral line production and experiments
Our optimised sensor (figure 1(c)) was 3D printed using a Form 2 printer and Clear V4 resin. A 7 × 7 mm window was cut into the short side of the sensor to add the artificial neuromast. Our neuromast consists of an unpowered LED with a layer of cloth adhesive-tape attached between the pins to increase the surface area affected by the flows; an LED was used for convenience, but a 3D printed structure with a high contrast head for tracking and a plate to receive fluid force could have been used. The LED bulb was coloured with black to increase contrast, and is henceforth referred to as the visual tracker. A 10 × 10 mm square of elastic material was used to cover the window, and the pins of the LED were pushed through this, so that the visual tracker remains outside the sensor; the fabric was added after this stage. The taped pins of the LED are exposed to the flow within the sensor and deflect in response to changing velocities. The 10 × 10 mm elastic square represents the highly elastic membrane discussed earlier and acts as a fulcrum, allowing the neuromast hair to swing freely. The movement of the visual tracker indicates deflections of the pins inside the sensor as a response to shed vortices. A camera is used to record the motion of the visual tracker to produce the sensory output. Several of the design decisions discussed here were due to the Covid-19 pandemic limiting access to certain resources.
Preliminary experiments were undertaken to demonstrate the physical sensor's ability to filter the laminar background flow and to detect shed vortices. To demonstrate the former, the sensor was moved through a static container of water at a steady speed, in approximation of having a steady laminar flow being passed around the sensor. This was done to demonstrate that the laminar flow, typically background flow, would be filtered out and not result in immediate saturation of our neuromast; saturation occurs when the visual tracker reaches its position of maximum deflection. This is undesirable as after this point no additional information can be extracted from the surroundings, such as from turbulent flows. To demonstrate the latter, the sensor was held stationary in a large container of water and a cylinder measuring 100 mm in diameter was pulled through the water alongside the sensor. Speeds and distances were varied to test the conditions over which the sensor was able to function without information saturation and to detect a vortex. Each pass was recorded. Any motion of the visual tracker was tracked as a way of indicating that the sensor had been successful.
Further experiments were also conducted in a custom-built flow tank, consisting of a test area and a water reservoir (figure 2). The reservoir was filled using an external water source, before being allowed to flow through the test area. A 100 mm diameter plastic cylinder was placed into the centre of the test area to generate the necessary vortices. Windows were cut into the top surface of the test area to allow the sensor to be positioned in different locations behind the cylinder (figure 2). A camera was positioned 150 mm away from the top of the arena and set to record in 1080p at 30 fps. Footage was analysed frame by frame in the image processing software GIMP, where each video was given a global coordinate system centred on the bottom of the sensor opening, and movement of a fixed point on the visual tracker was recorded against this system; the fixed point varied between videos so all graphs were adjusted to be centred around 0.
Results
Simulations were run to verify if the design changes discussed above, namely changing from square channels to fixed diameter channels, using pores that face backwards instead of sideways, and increasing the distance between the pores. Across all of these results we saw a mean improvement (measured by percentage difference) of 25%, and a maximum improvement of 39% (fixed diameter channel vs square channel). We then demonstrate experimentally that the optimised design is able to filter background flow and detect shed vortices, both when stationary and in flow.
Fixed diameter internal channels prevent internal turbulence
We aimed to minimise internal turbulence by setting a fixed circular diameter for the internal channels. Our observation was that the internal shape of the sensor, i.e. using a square canal vs a circular canal, could cause this. This issue was particularly pronounced in structures with corners, as internal turbulence had a tendency to linger between vortices, adding significant noise to multiple cycles of the waveform. We theorised that removing the empty corners and instead having just a channel of fixed radius would reduce or even remove the internal turbulence levels. Figure 3 shows the improvement seen when using a circular canal design (a), as opposed to a square canal design (b). While there are parts of the square canal's time-series that appear to overlap well with that of our design's, the majority of it is different to both ours and the original signal. Comparison between the residuals reveal that the square canal has a residual error of 0.0231, almost 40% more error than what is detected by the square channel compared to our final optimised design.
Rear facing pores reduce noise
The orientation of the sensor was chosen to reduce the amount of background flow (from sensor motion or laminar water flow) entering the sensor and disrupting the vortex measurement. To demonstrate Figure 2. The physical experiment used to test the 3D printed sensor. A flow tank, consisting of a large water reservoir and an observation area, was designed and built to test the response of the 3D printed sensor to vortices shed by a cylinder in flow. Responses are measured through observation of the motion of the visual tracker (an LED bulb covered with black tape) that deflects in response to a passing vortex. The cylinder was placed in the observation area, and the 3D printed sensor was affixed in a number of locations (marked by the lettered circles) downstream. A chimney structure was used to allow the sensor to be placed fully into the flow without having the flow interfere with the tracker response, and also to affix the sensor in place; this was done by fixing the top of the chimney to the underside of a clear perspex sheet, which was then placed into the observation window and fixed in place. The long side of the sensor is shorter here than in the simulations, as it was found that this length did not matter, as long as a neutral pressure reservoir was maintained. Images are not to scale. the effect of orientation, we compare the optimised sensor design against a sensor designed with a single side-facing pore and a single forward-facing pore (figure 4). Comparing errors reveals that the optimised design is approximately 20% better than either (b) or (c). The design in figure 4(c) also experiences significantly higher velocity on the neuromast which effectively removes the filtering property of the sensor. The side-facing pore (figure 4(b)) offers significant improvement over this but is, again, not as effective as the rear-facing pore.
Further spaced pores result in less interference
Increasing the spacing between pores was done to reduce the ability of a vortex to affect both pores at the same time, leading to interference. We observed that when pores were in close proximity to each other, vortices affected both simultaneously. In real terms, this meant that both pores were experiencing an in-flow (or out-flow) at the same time, particularly during the early (or late) stages of a passing vortex being detected, which resulted in either no velocity on the neuromast as the flows from the two pores cancelled each other out, or in a noisy signal. We theorised that having the two pores further apart would make it less likely that both pores could be affected by the same signal at the same time. Figure 5 shows that the design with the pores in close proximity (b) displays approximately 20% greater error than our design. Increasing the spacing between pores in a direction perpendicular to flow appears to reduce accuracy, with the widely spaced pore design (c) having over 25% more error than our optimised design. Both of these designs do retain their filtering properties though.
Our design is inspired by the structure of the lateral line, particularly the subdermal canal part of the lateral line, with the u-bend and opening imitating a single pore. The initial work by Scott et al was more focussed on including two pores, which were then connected to a limited section of the subdermal canal [50]. Based on the results above, it appears that the longer section of canal has a greater effect on allowing the sensor to extract information properly from the surroundings. This represents a shift from trying to capture bulk water movement to capturing changing pressures: the previous sensor attempted to sense water moving into the downstream pore of the sensor, deflecting the neuromast, and then exiting from the upstream pore. This mechanism was very susceptible to noise due to the sensor sometimes having flow enter the upstream pore instead of the downstream pore or enter both pores together. Both pores were also being affected by the negative pressure of the passing vortex too. Neglecting water motion allowed us to eliminate much of the noise in the original signal. The increased length of canal section then gives a more stable neutral pressure reservoir with which to create a pressure differential.
It also appears that the neuromast needs to be close to the canal for its benefits to be properly felt. A possible further design to experiment with would be a long canal section that was also widely spaced from the neuromast, so as to explore the effect that separation between the canal and the neuromast has on signal integrity.
An important part of this work was to determine if larger scale artificial lateral lines would still be effective at sensing flow information. As established, Reynolds numbers on the outside of the sensor are between 10 000 and 70 000; given that most fish swim with Reynolds number between ∼100 < Re < ∼100 000, this means that our sensor operates in a similar regime. Within the sensor, Re = 375. Reynolds numbers within the canals of a fish's lateral line system can be expected to be very low given the small diameter. Previous work has measured these canals at ∼100 μm [55], which would give Re = 2.5 at V = 0.0025 m s −1 . Despite the two orders of magnitude difference between Reynolds numbers seen in this system versus in a biological system, our sensor appears to still be effective at detecting shed vortices. This is likely because while Re < 2000, flow is considered fully laminar, so the systems are at least comparable.
Sensor operational envelope
Additional simulations were run to test the effectiveness of the sensor at different speeds and with different channel diameters ( figure 6). For the optimised sensor, the associated total residuals for 0.1 m s −1 and 1 m s −1 are 0.0243 and 0.0178, respectively. The result for our upper bound of 1 m s −1 shows only a slightly worse result than our standard speed of 0.5 m s −1 (0.0155). This is a positive result as it shows that the sensor is still effective for a faster underwater vehicle. The absolute upper bound at which a 10 cm diameter cylinder in water will produce a Kármán vortex street is at 3 m s −1 , so while it is difficult to say with certainty how the sensor will behave as velocities increase, the minimal increase here suggests that further increases in velocity may result in minimal increases again. More concerning is the quite significant increase in residuals that is seen at 0.1 m s −1 (almost 45% more error). However, looking closely at figure 6(b), it seems that the main source of this increase actually occurs due to a phase shift. At around 300 s, it is particularly evident that the two signals are 180 degrees out of phase, i.e. they are mirrored about the x-axis. Shifting the phase accordingly gives a new error of 0.0143, which is now better than our standard speed, and implies that the sensor will remain effective as velocities decrease, potentially even improving further. The shift in phase in an interesting result: it seems that for velocities above 0.5 m s −1 , vortices that are shed from the same side as the sensor result in an increase in velocity for the neuromast, while vortices shed from the other side of the cylinder result in a decrease in velocity; the same is true when no sensor is present. For velocities below 0.1 m s −1 , the opposite is true. At higher velocities, it seems that the primary mechanism is as a result of frictional forces within the water acting to drag the water within the sensor forward to cause the velocity increase, whereas at lower velocities, pressure differentials become the primary source of flow accelerations. It is also likely that as the energy within the flow decreases, the sensor itself acts to disrupt the flow more.
It has been noted that the pore size has an effect on sensitivity, with larger pore sizes giving greater responses to flow features, and so further simulations were run using the same design as the optimised sensor but with 10 and 20 mm diameter channels. The larger channel allows a larger volume of flow to move through it, resulting in a greater response to external stimuli. We theorise that at high velocities, the larger volume of water in the sensor will become more susceptible to noise due to the higher energy in the system, while the opposite will be true at lower velocities. A narrow channel is predicted to show a smaller response to flow stimuli, but, in contrast to the wide channel, will be less susceptible to noise in high velocity background flow. In the slow background flow, the response is expected to be minimal, to the point that it might not be possible to correctly capture the signal from the flow.
It is first of all interesting to note that there are no cases where either larger or smaller diameter sensor display better signal similarity ( figure 6). This seems to indicate that a 15 mm canal offers an optimal balance between reducing noise and increasing sensitivity. In the 1 m s −1 velocity flow, it seems to be the case that the narrow channel (g) is more similar to the desired result than the wide channel (i), as predicted; this is most evident towards the end of the series. If we maintain the observation above that the out of phase response is as a result of pressure differentials while the in phase response is due to flow velocity and associated frictional forces, we can note that the narrow canal is better at low velocity, likely due again to the smaller volume of water that need move to illicit a neuromast response. The wide canal appears to be less effective at low velocity for the same reasoning, as at lower background flow velocity, the detected signal amplitude is markedly less than for the narrow canal. As flow velocities increase, in the narrow canal there seems to be a definite switch between pressure differential and friction forces as the driving mechanism, while the wider canal seems to be subject to both, leading to the increased noise in the detected signal.
Physical sensor proof of concept
A prototype of the sensor was demonstrated in a static container of water. Figures 7(a)-(c) shows the sensor The sensor displays 0.8 mm of displacement over the course of the motion, but this is due to parallax, as the sensor moves into the water, and does not display the pulsed motion that would be expected if a vortex was detected. (D)-(G) Still images of a cylinder being pulled past the sensor. The sensor shows a strong deflection in (F) before returning back towards the neutral position. This indicates a vortex being detected. The vortex can also be seen in the water at the mouth of the sensor. Due to Covid-19 restrictions, these experiments were carried out in a home environment and under non-optimal conditions. moving through the water, using a ruler to highlight the distance covered, without displaying the signal on the neuromast. This result is as expected, as our rearfacing pores act to filter out the background flow or in this case, filter out the inertial effect of the static water.
Figures 7(d)-(g) shows the sensor immobile with the cylinder moving through the container and generating a vortex as it passes; this is seen by the recorded oscillation. Figure 7(f) shows the most deflection; it is also possible to see the vortex forming close to the mouth of the sensor in the ripples of the water. This panel is most similar to the setup used in the simulation, so the correlating results lend support to one another. Figure 8 shows the results of the sensor when tested within the flow tank. The sensor was placed at six different locations and the displacement of the visual tracker was measured during each run. All locations were chosen to be on one side of the flow tank, with two positions at the midline of the cylinder, 2 at 80% of the cylinder diameter (predicted to be the position of maximum variance [50]) and 2 at twice this distance in a region where vortices are predicted to have minimal effect, for these three positions, one is taken at 10 cm from the cylinder and the other at 20 cm to investigate the effectiveness of the sensor at detecting vortices as they degrade with distance from source. Three trials were done at each location to increase validity of results and these are shown in the different lines on the graphs at each location. It can be seen that the sensor shows deflection in response to the formation and shedding of vortices in all of the sampled area.
We see the best consistency between trials in the centre of the row closer to the cylinder (X1Y1, X1Y2 & X1Y3), which is the position that the simulations have predicted to be the best spot for the sensor to operate in. We also see the least deflection in the first row at the furthest y position (X1Y3) from the cylinder, which is also predicted by simulation, due to the vortex forming in the region behind the cylinder and then being shed downstream from there. X1Y3 In each graph, a single vortex is being detected, with the starting time occurring when water levels reach a given level within the tank; this is done to make comparisons easier. Prior simulations have suggested that (B) and (E) should give the greatest variation, and across the three results, we see the best consistency here. However, (D) actually shows the greatest deflection in one of the trials. Due to the complex nature of fluid dynamics and the variation in starting conditions present in the experimental set-up, it is unsurprising that the maximum deflection is not always in the position of greatest variation.
is further from the vortex than the other positions, hence the reduced deflection.
An interesting result to note is the difference in time when the deflection occurs that can be seen in multiple positions. This is most likely to be as a result of the vortex forming and being shed from different sides of the cylinder in different trials. This is corroborated by the fact that the later deflections tend to be reduced. The vortices shed on the far side of the cylinder are further from the sensor and also must travel further downstream before they can be detected, explaining the slight delay and indicating that the earlier troughs are likely from vortices shed on the sensor side of the cylinder, while the later troughs are from the opposite side. It is also possible that the body of the sensor is interfering, as some of it does sit in the path of the shed vortices and this could also cause both a delay and a reduced deflection.
Overall, these results indicate that the sensor is able to detect shed vortices in flow, and over a wider area than initially thought.
Conclusion
Our new optimised artificial line sensor design (figure 1) has been shown to be effective at detecting vortices shed by potential obstacles or upstream robots. We also explain and evidence our rationale behind the design choices, all of which can be rooted in biology, with a series of simulations where the particular traits we want to emphasise are removed to demonstrate an associated decrease in performance. In every case, we see significant increases in residuals and loss of similarity in signal shape. To validate simulated results in reality, we produce and test the proposed sensor and show that it is able to filter water flow caused by the sensor motion in water, and sense vortices shed by both a passing cylinder in static water and vortices shed behind a cylinder in flow. This sensor offers an alternative sensory suite that is both inexpensive and simple to manufacture that could be easily integrated onto pre-existing underwater robotic platform used to inspect underwater structures, explore canals, or perform environmental monitoring. It also offers the potential for an artificial lateral line system that is effective using only a single sensor, which could result in significant reductions in complexity and expense. Such a reduction could be exploited for swarming, to create large numbers of simple and inexpensive robots that use this artificial lateral line system to navigate and interact with each other, and in turn display complex emergent behaviours. We have already began to work towards this goal on both software and hardware fronts for example training a neural to read the flow fields behind a cylinder in simulation, and from this navigating said flow field to a given location by sampling only a single point per unit time. This is now being further developed into a multi agent model. A simple bio-inspired robotic fish with a compliant caudal fin has also been designed to host our optimised artificial lateral line sensor, that once completed will mark the beginnings of the development of the large scale swarm just mentioned. | 9,160.2 | 2022-07-27T00:00:00.000 | [
"Biology"
] |
Stem Cell-Based Neuroprotective and Neurorestorative Strategies
Stem cells, a special subset of cells derived from embryo or adult tissues, are known to present the characteristics of self-renewal, multiple lineages of differentiation, high plastic capability, and long-term maintenance. Recent reports have further suggested that neural stem cells (NSCs) derived from the adult hippocampal and subventricular regions possess the utilizing potential to develop the transplantation strategies and to screen the candidate agents for neurogenesis, neuroprotection, and neuroplasticity in neurodegenerative diseases. In this article, we review the roles of NSCs and other stem cells in neuroprotective and neurorestorative therapies for neurological and psychiatric diseases. We show the evidences that NSCs play the key roles involved in the pathogenesis of several neurodegenerative disorders, including depression, stroke and Parkinson’s disease. Moreover, the potential and possible utilities of induced pluripotent stem cells (iPS), reprogramming from adult fibroblasts with ectopic expression of four embryonic genes, are also reviewed and further discussed. An understanding of the biophysiology of stem cells could help us elucidate the pathogenicity and develop new treatments for neurodegenerative disorders. In contrast to cell transplantation therapies, the application of stem cells can further provide a platform for drug discovery and small molecular testing, including Chinese herbal medicines. In addition, the high-throughput stem cell-based systems can be used to elucidate the mechanisms of neuroprotective candidates in translation medical research for neurodegenerative diseases.
Introduction
Stem cells are classified into three types according to their abilities to differentiate. The first type is totipotent stem cells, which can be implanted in the uterus of a living animal and give rise to a full organism. The second type is pluripotent stem cells such as embryonic stem (ES) cells and induced pluripotent stem (iPS) cells. They can give rise to every cell of an organism except extraembryonic tissues, such as placenta. This limitation restricts pluripotent stem cells from developing into a full organism. The third type is multipotent stem cells. They are adult stem cells which only generate specific lineages of cells [1]. Neural stem cells (NSCs) are multipotent stem cells which are derived from neural tissues, either from the central nervous system or peripheral nervous systems [1]. These cells are self-renewing and can give rise to all cell types (neurons, astrocytes and oligodendrocyes) of the nervous system through asymmetric cell division [1].
In the adult brain, NSCs are primarily located in the subventricular zone (SVZ) of the lateral ventricle and the subgranular zone (SGZ) of the hippocampal dentate gyrus (Figure 1). In general, the quiescent or dormant NSCs might be present and can be derived from multiple areas of the adult brain [2][3][4]. The SVZ and SGZ niches have common cellular niche components which include astroglia, ependymal cells, vascular cells, NSC progeny and mature neurons, and common extracellular niche signals which include Wnt, Sonic Hedgehog, bone morphogenic protein antagonists, membrane-associated Notch signaling, leukemia inhibitory factor, transforming growth factor-alpha, fibroblast growth factors, neurotrophins and extracellular matrix. [3]. These cellular and extracellular components regulate the behaviors of NSCs in a region-specific manner [3]. For example, SVZ NSCs give rise to Dlx2 + Mash1 + intermediate progenitor cells which subsequently give rise to PSA-NCAM + doublecortin + (DCX + ) neuroblasts and migrate towards the olfactory bulb (OB). In contrast, SGZ NSCs do not differentiate into interneuron-lineage cells like those in the OB, but give rise to local glutamatergic excitatory dentate granule cells [3]. The region-specific development of these NSCs is not only due to intrinsic characteristics of the NSCs themselves, but also due to the dictation of local microenvironment (i.e., the niche). A detail summary of the neurogenic niche can be found in a recent review by Ma et al. [3]. NSCs give rise to local glutamatergic excitatory dentate granule cells. RMS: rostro-migratory stream; GL: granular layer. Adapted from Ma et al. [3] and Taupin and Gage [5].
Neurogenesis derived from adult NSCs is critical for a plethora of central nervous functions, such as spatial learning and memory, mood regulation and motor controls. Growing evidence also suggests the significant contribution of adult NSCs to pathological conditions like seizures, brain tumors, mood disorders or neurodegenerative diseases [3]. If the biopathological role of adult NSCs can be better understood, therefore, the therapeutic strategies that assist neuroprotection and neurorestoration can be framed and tested through collaborative efforts of both basic and translational research. In the following sessions, we will introduce the roles of NSCs in the pathogenesis in some psychiatric and neurological diseases, and the application of stem cell-based therapies.
Depression and Neurogenesis: Evidence from Neural Stem Cells
Depression is one of the most common psychiatric disorders, with 10-20% lifetime prevalence [6,7].
However, the etiology and pathophysiology of depression still remain unclear. Preclinical and clinical studies suggest the involvement of hippocampus in the pathogenesis of depression. Hippocampus plays an important role in learning, memory and emotionality [8,9]. It is also one of the primary niches of NSCs. Reduction of hippocampal volume was found in patients with posttraumatic stress disorders [10]. Magnetic resonance imaging studies also showed a consistent reduction in hippocampal volume in patients with depression [11]. Two meta-analyses have demonstrated a reduction in hippocampal volume in patients with recurrent depression in comparison to age-and sex-matched controls [12,13].
In addition, most antidepressants and environmental interventions that confer antidepressant-like behavioral effects stimulate adult hippocampal neurogenesis [11].
Based on these findings, impaired hippocampal neurogenesis was considered to be one of the etiologies of depression. However, recent studies have shown some controversial evidences against the previous findings. First, preclinical and pathohistological studies showed that the reduction of hippocampal volume might be a result of decreased dendritic complexity and changes in neurophil and glial number rather than impaired hippocampal neurogenesis [14][15][16]. Besides, the ablation of neurogenesis did not induce or affect depression-like or anxiety-like behaviors in animals [14,[17][18][19].
To date, hippocampal neurogesis is not thought to be involved in the pathogenesis of depression [11,20], although the regulation of neurogenesis in adult brain may be required for antidepressant treatment [11].
Most antidepressant drugs increase the levels of monoamines serotonin (5-hydroxytrytamine; 5-HT) and/or noradrenaline (NA); this suggests that biochemical imbalances within the 5-HT/NA systems may cause mood disorders. In addition to the regulation of neurotransmitters, antidepressants also have both neuroprotective and neurorestorative effects on hippocampal cells. For example, monoamine oxidase-A inhibitor moclobemide (MB) can upregulate proliferation of hippocampal progenitor cells in chronically stressed mice [21]. MB can also provide neuroprotection by reducing intracellular pH and neuronal activity of CA3 hippocampal neurons [22]. A selective serotonin reuptake inhibitor, fluoxetine, was used to treat rats with maternal separation. Compared to the rats that did not receive fluoxetine, cell proliferation was increased and apoptosis was decreased in the dentate gyrus of the rats that receive fluoxetine [23]. To elucidate the molecular mechanism of the neuroprotective and neurorestorative effects of antidepressants, NSCs derived from the hippocampal tissues of adult rats can be used as a model for the in vitro drug-effect test [24].
Antidepressant and Neuroprotection: Interaction with Neural Stem Cells
Clinical findings have shown evidence that hippocampal volume in patients with depression is reduced in comparison to the volume in healthy people [10]. Furthermore, the clinical studies and magnetic resonance imaging (MRI) survey demonstrated that the hippocampal volume decreases in patients with depression and post-traumatic stress disorder [6,10]. Increased neurogenesis in the hippocampus by the administration of antidepressant drugs can result in altered behavior in stress-induced models and patients [14,23]. Moreover, Chen et al. showed the evidence that desipramine can promote neurogenesis in hippocampus and reverse the learned behavior in learned helplessness rats [25]. Taken together, these observations implicated that adult hippocampal neurogenesis is decreased by stress and this process of neuron loss may be involved in both the pathogenesis and treatment of mood disorders.
Neural stem cells (NSCs), derived from hippocampus and other germinal centers of the brain, have been isolated and defined as cells with the capacity of self-renewal and multilineage differentiation [1].
NSCs also possess the utilizing potential to develop the transplantation strategies and to screen the It has been well-documented that antidepressants present the potential to upregulate the expression of brain-derived neurotrophic factor (BDNF) in animal models as well as the patients with depression Taken together, hippocampal neurogenesis is required for antidepressant therapies. Using cultured rat hippocampal NSCs, the molecular mechanisms of antidepressant effects are explored, which include the MAPK/ERK pathway, the PI3k/AKT pathway, and the upregulation of BDNF, Bcl-2 and c-FLIP.
Diseases of Central Nervous System and Neural Stem Cells -Stem Cell Therapy and the Development of New Target Drug
Diseases of the central nervous system (CNS) such as stroke, traumatic brain injury, dementia, Parkinson's disease or multiple sclerosis, usually cause morbidity and mortality as well as increase social and economic burdens of patients and caregivers. However, most treatments for these diseases are symptomatic or preventive, and are not effective. Many attempts have been made to develop a neuroprotective treatment to reduce the volume of brain injury, but the translation of neuroprotection from experimental therapies to clinical use has not been very successful [41]. Along with the development of stem cell studies and the discovery of neural stem cells in the adult brain, transplantation of stem cells or their derivatives, and mobilization of endogenous stem cells within the adult brain have been proposed as future therapies for the CNS diseases [42]. We herein introduce the role of stem cell-based therapies in the possible treatment for Parkinson's disease and ischemic stroke.
The two diseases have different etiologies and pathophysiologies, and therefore, different strategies of treatment are required.
Ischemic Stroke
Ischemic stroke is a major cause of morbidity and mortality worldwide.
The Hope and Hype of Induced Pluripotent Stem Cells in Cell Replacement Therapy of Neurological Diseases
The recent progresses in stem cell research have demonstrated that induced pluripotent stem (iPS) cells could be generated from mouse embryonic fibroblasts as well as from adult human fi broblasts via the retrovirus-mediated transfection of four transcription factors, that is, Oct3/4, Sox2, c-Myc, and Klf-4 [60][61][62]. The development of iPS cells provides an additional option for replacement therapy.
They are indistinguishable from ES cells in morphology, proliferative abilities, surface antigens, gene expression, epigenetic status of pluripotent cell-specific genes, and telomerase activity [62,63]. They are also capable of self-renewal and differentiation into three germ layers, offering potential for clinical cell therapies [64,65]. Because iPS cells can be derived from the somatic cells, potential and that transplantation of iPS cell-derived neuronal cells into the brain was able to improve behavior in a rat model of PD [66]. We also demonstrated an efficient method to differentiate iPS cells into astrocyte-like and neuron-like cells which displayed functional electrophysiological properties [67].
Our in vivo study showed that direct injection of iPS cells into damaged areas of rat cortex significantly decreased the infarct size, improved the motor function, attenuated inflammatory cytokines, and mediated neuroprotection after middle cerebral artery occlusion (MCAO) [67].
Subdural injection of iPS cells with fibrin glue was as effective and as the direct-injection method, and provided a safer choice for cell replacement therapy [67].
The ability to form teratomas in vivo has been a landmark and routine assay for evaluating the pluripotency of ES as well as iPS cells [64,68]. However, teratoma or tumor formation is a unacceptable adverse effect for cell transplantation therapy. Preventing teratoma formation or tumorgenesis has become an emergent issue [69][70][71][72][73]. One of the methods is elimination of nonneural progenitors, which can be achieved by the elaboration of differentiation protocols that allow maximal homogeneity of the transplant [74] or by cell sorting before transplantation [75][76][77][78]. Exclusion of poorly-differentiated ES or iPS cells can also reduce the rate of teratoma or tumor formation [79].
Some antioxidants may prevent tumorgenesis after cell transplantation. Resveratrol, a natural polyphenol antioxidant, is demonstrated that it can inhibit teratoma formation in vivo [65]. Our recent study result also found that docosahexaenoic acid can inhibit teratoma formation in addition to promoting dopaminergic differentiation in iPS cells in PD-like rats [80]. It has been only two years since the development of iPS cells. Enhancement of effectiveness and eliminating adverse effects of this cell-transplantation therapy required more extensive studies.
Diet and Neurogenesis
Recent reports suggested that the environmental factors, especially the detrimental factors induced by neuronal injury, have a critical impact on adult neurogenesis. Several environmental factors are also involved in adult neurogenesis, diet being one of them. Interested readers can refer to a recent comprehensive review by [81]. Briefly, The influence of diet on adult neurogenesis comes from four domains: meal content, meal texture, meal frequency and calorie intake [81]. With regards to meal content, zinc, thiamine and vitamine-A deficiencies decrease cell proliferation in adult hippocampus [81]. Similarly, excess in retinoic acid and increased homocysteine levels also decrease or inhibit cell proliferation in adult hippocampus. In contrast, low-dose curcumin and flavonoids have beneficial effects on adult hippocampal cell proliferation in rodents [81]. It is worthy noting that most flavonoids are extensively metabolized in vivo and the bioavailability of flavonoids after the consumption of flavonoid-rich food can only reach very low concentrations in human plasma [82]. In order for adult hippocampal neurogenesis to take place, the purity of flavonoid intake needs to be high. An example is the extract from a traditional Chinese herbal decoction Xiaobuxin-Tang [83]. It is also interesting that calorie restriction and extending the time between meals increase adult hippocampal neurogenesis while diets with high-fat content are noxious and weaken neurogenesis in male rates [81].
Neural Stem Cell, Chinese Herbs, and New Drug Screening
Natural plant products and phytochemicals have been used as medicinal agents for hundreds of years in oriental medicine [84]. Based on clinical experiences and recent studies, Chinese herbs and their constituents can be the sources for the development of new drugs for many important human disorders, such as cancers [85,86]. Accumulating evidences have pointed to the fact that some herb-derived substances have neuroprotective effects. For example, Lee et al. reported that wogonin, a flavonoid derived from the root of Scutellaria baicalensis Georgi, is neuroprotective in vitro and in vivo [87]. It has an anti-inflammation effect by inhibiting the activation of TNF-α, interleukin-1β, and nitric oxide (NO) production induced by LPS in cultured brain microglia, and protects co-cultured PC12 cells against microglial cytotoxicity [87]. In two experimental brain injury models, transient global ischemia by 4-vessel occlusion and excitotoxic injury by systemic kainite injection, wogonin reduced induction of inflammatory mediators (ex. iNOS and TNF-α) in hippocampus, inhibits micorgial activation, and attenuates ischemic death of hippocampal neurons [87]. Tetramethylpyrazine (TMP) is another example. It is an alkaloid extracted from the Chinese herbal plant Ligusticum wallichii Franchat (chuanxiong). Previous experimental studies have demonstrated its beneficial effects on cardic and cerebral blood flow and reperfusion, as well as its role on calcium antagonism, on vascular tissues, on ROS scavenger and on inhibition of inflammation [88]. In addition, systemic administration of TMP protects neuronal cells from ischemic or traumatic brain or spinal cord injury, promotes functional recovery and attenuates learning and memory impairment induced by D-galactose in animals [89][90][91][92]. Furthermore, systemic administration of TMP following the onset of seizure induced by kainite significantly reduced the number of TUNEL-positive cells in hippocampus and piriform cortex, indicating TMP attenuates neuronal degeneration and has neuroprotective efficacy against neuro-excitotoxic attack [88]. Another popular plant which is used in oriental food and medicine, ginger, is able to inhibit β-amyloid peptide-induced cytokine and chemokine expression in cultured monocytes [93]. This in vitro study suggests the potential role of ginger in delaying the onset and progression of neurodegenerative disorder involving chronically activated microglial cells in CNS [93].
It is also interesting to review the evidence of phytochemicals as sources of antidepressants. Lim et al. showed that ginger oil possessed antidepressant-like action by reducing immobility in the forced swim test (FST) in mice after the inhalation of ginger oil [94]. Xu et al. also showed that the mixture of honokiol and magnolol had an antidepressant effect because the mixture significantly attenuated the reduction of 5-HT levels in frontal cortex, hippocampus, striatum, hypothalamus and nucleus accumbens, and raised serum corticosterone concentration induced by chronic mild stress (CMS) in rats [95]. The mixture of honokiol and magnolol also decreased immobility time in the mouse FST and tail suspension test (TST) significantly, and reversed CMS-induced anhedonia in rats [95]. In our experiments, we also found that mice treated with Scutellaria baicalensis, Phellodendri Cortex and Ligusticum wallichii had increased number of Brd-U positive cells in dentate gyrus and reduced serum levels of corticosterone after the exposure to CMS. Compared with those which were exposed to CMS alone without the three traditional Chinese medicinal herbs, these animals had increased body weight and reduced immobility time in FST [96]. The cellular, biochemical and behavioral effects of the three herbs were similar to the effects of fluoxetine and duloxetine [96]. Furthermore, we also found that the three traditional Chinese medicinal herbs increased the cell viability of NSCs, with superior effect on the index than fluoxetine treatment [96]. These recent progresses not only support the future niche of Chinese medicinal herbs as the useful antidepressants, but also indicate the potential of the use of NSC-based screening system for new drug discovery and characterization from Chinese herbs and medicines.
Conclusions
The development of stem cell studies has provided a promising future for the treatment of neurological and psychiatric diseases in several ways. First, understanding the biology and pathology of NSCs will help us elucidate the pathophysiology of several neurological and psychiatric diseases, such as depression, Parkinson's disease or ischemic stroke. The growing knowledge also helps us develop neuroprotective and neurorestorative therapies. Second, NSCs can provide a platform to clarify the mechanism and to test the efficacy of drugs, including Chinese herbal medicines. Third, the | 4,110.4 | 2010-05-05T00:00:00.000 | [
"Biology"
] |
On the Novikov-Shiryaev Optimal Stopping Problems in Continuous Time
Novikov and Shiryaev (2004) give explicit solutions to a class of optimal stopping problems for random walks based on other similar examples given in Darling et al. (1972). We give the analogue of their results when the random walk is replaced by a Levy process. Further we show that the solutions show no contradiction with the conjecture given in Alili and Kyprianou (2004) that there is smooth pasting at the optimal boundary if and only if the boundary of the stopping reigion is irregular for the interior of the stopping region.
Introduction
Let X = {X t : t ≥ 0} be a Lévy process defined on a filtered probability space (Ω, F, {F t }, P) satisfying the usual conditions. For x ∈ R denote by P x (·) the law of X when it is started at x and for simplicity write P = P 0 . We denote its Lévy-Khintchine exponent by Ψ. That is to say E[e iθX1 ] = exp{−Ψ(θ)} for θ ∈ R such that Ψ(θ) = iθa + 1 2 where a ∈ R, σ ≥ 0 and Π is a measure supported on R\{0} satisfying Consider an optimal stopping problem of the form where q ≥ 0 and T 0,∞ is the family of stopping times with respect to {F t }.
The purpose of this short paper is to characterize the solution to (2) for the choices of gain functions G(x) = (x + ) n n = 1, 2, 3... under the hypothesis (H) either q > 0 or q = 0 and lim sup t↑∞ X t < ∞.
Note that when q = 0 and lim sup t↑∞ X t = ∞ it is clear that it is never optimal to stop in (2) for the given choices of G. This short note thus verifies that the results of Novikov and Shiryaev (2004) for random walks carry over into the context of the Lévy process as predicted by the aforementioned authors. Novikov and Shiryaev (2004) write: "The results of this paper can be generalized to the case of stochastic processes with continuous time parameter (that is for Lévy processes instead of the random walk). This generalization can be done by passage of limit from the discrete time case (similarly to the technique used in Mordecki (2002) for pricing American options) or by use of the technique of pseudo-differential operators (described e.g. in the monograph Boyarchenko and Levendorskii (2002) in the context of Lévy processes)".
We appeal to neither of the two methods referred to by Novikov and Shiryaev however. Instead we work with fluctuation theory of Lévy processes which is essentially the direct analogue of the random walk counterpart used in Novikov and Shiryaev (2004). In this sense our proofs are loyal to those of of the latter. Minor additional features of our proofs are that we also allow for discounting as well avoiding the need to modify the gain function in order to obtain the solution. Truncation techniques are also avoided as much as possible. Undoubtedly however, the link with Appell polynomials as laid out by Novikov and Shiryaev remains the driving force of the solution. In addition we show that the solutions show no contradiction with the conjecture given in Alili and Kyprianou (2004) that there is smooth pasting at the optimal boundary if and only if the boundary of the stopping reigion is irregular for the interior of the stopping region.
Results
In order to state the main results we need to introduce one of the tools identified by Novikov and Shiryaev to be instrumental in solving the optimal stopping problems at hand.
Definition 1 (Appell Polynomials)
Suppose that Y is a non-negative random variable with n-th cumulant given by κ n ∈ (0, ∞] for n = 1, 2, ... Then define the Appell polynomials iteratively as follows. Take Q 0 (x) = 1 and assuming that κ n < ∞ (equivalently Y has an n-th moment) given This defines Q n up to a constant. To pin this constant down we insist that E(Q n (Y )) = 0. The first three Appell polynomials are given for example by under the assumption that κ 3 < ∞. See also Schoutens (2000) for further details of Appell polynomials.
In the following theorem, we shall work with the Appell polynomials generated by the random variable Y = X eq where for each t ∈ [0, ∞], X t = sup s∈[0,t] X s and e q is an exponentially distributed random variable which is independent of X. We shall work with the convention that when q = 0, the variable e q is understood to be equal to ∞ with probability 1.
Then Q n (x) has finite coefficients and there exists x * n ∈ [0, ∞) being the largest root of the equation Q n (x) = 0. Let τ * n = inf{t ≥ 0 : X t ≥ x * n }. Then τ * n is an optimal strategy to (2) with G(x) = (x + ) n . Further, ) Theorem 3 For each n = 1, 2, ... the solution to the optimal stopping problem in the previous theorem is continuous and has the property that Hence there is smooth pasting at x * n if and only if 0 is regular for (0, ∞) for X.
Remark 4
The theory of Lévy processes offers us the opportunity to specify when regularity of 0 for (0, ∞) for X occurs in terms of the triple (a, σ, Π) appearing the Lévy-Khintchine exponent (1). When X has bounded variation it will be more convenient to write (1) in the form where d ∈ R is known as the drift. We have that 0 is regular for (0, ∞) for X if and only if one of the following three conditions are fulfilled.
Preliminary Lemmas
We need some preliminary results given in the following series of lemmas. All have previously been dealt with in Novikov and Shiryayev (2004) for the case of random walks. For some of these lemmas we include slightly more direct proofs which work equally well for random walks (for example avoiding the use of truncation methods).
Lemma 5 (Moments of the supremum) Fix n > 0. Suppose that the Lévy process X has jump measure satisfying Although the analogue of this lemma is well known for random walks, it seems that one cannot find so easily the equivalent statement for Lévy processes in existing literature; in particular the final statement of the lemma. None the less the proof can be extracted from a number of well known facts concerning Lévy process.
Proof. The fact that E((X + 1 ) n ) < ∞ follows from the integral condition can be seen by combining Theorem 25.3 with Proposition 25.4 of Sato (1999). The remaining statement follows when q > 0 by Theorem 25.18 of the same book. To see this one may stochastically dominate the maximum of X at any fixed time (and hence at e q ) by the maxium at the same time of a modified version of X, say X K , constructed by replacing the negative jumps of size greater than K > 0 by negative jumps of size precisely K. One may now apply the aforementioned theorem to this process. Note that one will use in the application that the assumption (5) implies that X K has absolute moments up to order n. For the case q = 0 and lim sup t↑∞ X t < ∞ the final statement can be deduced from the Wiener-Hopf factorization. By considering again the modified process X K one easily deduces that the descending ladder height process has all moments. Indeed the jumps of the descending ladder height process can be no larger than the negative jumps of X K and hence the latter claim follows again from Theorem 25.3 with Proposition 25.4 of Sato (1999) applied to the descending ladder height process of X K . On the other hand, X K has finite absolute moments up to order n and hence finite cumulants up to order n. Amongst other things, the Wiener-Hopf factorization says that the Lévy-Khintchine exponent, which is a cumulant generating function 1 , factorizes into the cumulant generating functions of the ascending and descending ladder height processes. The ascending ladder height process of X K is therefore forced to have finite cumulants, and hence finite moments, up to order n; see for example the representation of cumulant generating functions for distributions which do not have all moments in Lukacs (1970). By choosing K sufficiently large so that E(X K 1 ) < 0 (which is possible since the assumptions on X imply that E(X 1 ) < 0) we have X K ∞ < ∞. Since X K ∞ is equal in distribution to the ascending ladder height subordinator of X K stopped at an independent and exponentially distributed time, the finiteness of the n-th moment of X K ∞ , and hence of X ∞ ≤ X K ∞ , follows from the same statement being true of the ascending ladder height subordinator of X K . Note, that the above argument using the Wiener-Hopf factorization can easily be adapted to deal with the case q > 0 too. Lemma 6 (Mean value property) Fix n ∈ {1, 2, ...} Suppose that Y is a non-negative random variable satisfying E(Y n ) < ∞. Then if Q n is the n-th Appell polynomial generated by Y then we have that E(Q n (x + Y )) = x n for all x ∈ R.
Proof. As remarked in Novikov and Shiryaev (2004), this result can be obtained by truncation of the variable Y . However, it can also be derived from the definition of Q n given in (3). Indeed note the result is trivially true for n = 1. Next suppose the result is true for Q n−1 . Then using dominated convergence we have from (3) that Solving together with the requirement that E(Q n (Y )) = 0 we have the result.
Lemma 7 (Fluctuation identity) Fix n ∈ {1, 2, ...} and suppose that (1,∞) x n Π(dx) < ∞ and that hypothesis (H) holds. Then for all a > 0 and x ∈ R Proof. Note that on the event {τ + a < e q } we have that X eq = X τ + a +S where S is independent of F τ + a and has the same distribution as X eq . It follows that where h(x) = E x (Q n (X eq )). From Lemma 6 with Y = X eq one also has that h(x) = x n . We see then by taking expectations again in the previous calculation that E x (Q n (X eq )1 (Xe q ≥a) ) = E x (e −qτ + a X n τ + a 1 (τ + a <∞) ) as required.
Suppose that hypothesis (H) holds and Q n is generated by X eq . Then Q n has a unique positive root x * n such that Q n (x) is negative on [0, x * n ) and positive and increasing on [x * n , ∞).
Proof. The proof follows proof of the same statement given for random walks in Novikov and Shiryaev (2004) with minor modifications. (It is important to note that in following their proof, it is not necessary to make an approximation of the Lévy process by a random walk).
Proofs of Theorems
Proof of Theorem 2. In light of the Novikov-Shiryaev optimal stopping problems and their solutions, we verify that the analogue of their solution, namely the one proposed in Theorem 2, is also a solution for (2) for G(x) = (x + ) n , n = 1, 2, .... To this end, fix n ∈ {1, 2, ...} and define V n (x) = E x (Q n (X eq )1 (Xe q ≥x * n ) ) = E(Q n (x + X eq )1 (x * n −Xe q ≤x) ). From the above representation one easily deduces that V n is right continuous. From Lemma 7 we have that V n (x) = E x (e −qτ * n (X + τ * n ) n 1 (τ * n <∞) ) and hence the pairs (V n , τ * n ) are a candidate pair to solve the optimal stopping problem. Secondly we prove that V n (x) ≥ (x + ) n for all x ∈ R. Note that this statement is obvious for Otherwise when x ∈ (0, x * n ) we have, using the mean value property in Lemma 6 that where the final inequality follows from Lemma 8 and specifically the fact that Q n (x) ≤ 0 on (0, x * n ]. Note in the second equality above, by taking limits as x ↑ x * n and using the fact that Q(x * n ) = 0 we see that V n (x−) = (x + ) n at x = x * n . That is to say there is continuity at x * n . Thirdly on the event that {e q > t} we have that X eq is equal in distribution to (X t + S) ∨ X t where S is independent of F t and equal in distribution to X eq . In particular X eq ≥ X t + S. Since, P x almost surely, Q n (X eq )1 (Xe q ≥x * n ) ≥ 0 and Q n is positive increasing on [x * n , ∞) it follows that From this inequality together with the Markov property, it is easily shown that {e −qt V n (X t ) : t ≥ 0} is a supermartingale. As V n and X are right continuous then so is the latter supermartingale. Finally we put these three facts together as follows to complete the proof. From the supermartingale property and Doob's Optimal Stopping Theorem we have for any τ ∈ T 0,∞ that Hence by Fatou's Lemma, = E x (e −qτ V n (X τ )1 (τ <∞) ).
Using the fact that τ is arbitrary in T 0,∞ together with the lower bound on V n , it follows that On the other hand, rather trivially, we have and the proof of the theorem follows.
Proof of Theorem 3. On account of the fact that (x + ) n is convex, it follows that for each fixed τ ∈ T 0,∞ the expression E(e −qτ ((x + X τ ) + ) n 1 (τ <∞) ) is convex. Taking the supremum over T 0,∞ preserves convexity (as taking supremum is a subadditive operation) and we see that V n is a convex, and hence continuous, function.
To establish when there is smooth fit at this point we calculate as follows. For However, where in the first term on the right hand we may restrict the expectation to {x < X eq < x * n } as the atom of X eq at x gives zero mass to the expectation. Denote A x and B x the two expressions on the right hand side of (6). We have that
Integration by parts also gives
P(X eq ∈ (0, y]) dQ n dx (x + y)dy.
Hence it follows that lim x↑x * n A x = 0.
In conclusion we have that which concludes the proof.
Remarks
(i) As in Alili and Kyprianou (2004) one can argue that the occurence of continuous pasting for irregularity and smooth pasting for regularity appear as a matter of principle. The way to see this is to consider the candidate solutions (V (a) , τ + a ) where τ + a = inf{t ≥ 0 : X t ≥ a} and V (a) (x) = E x (Q n (X eq )1 (Xe q ≥a) ).
Let C * be the class of a > 0 for which V (a) is bounded below by the gain function and let C be the class of a > 0 in C * for which V (a) is superharmonic (i.e. it composes with X to make a supermartingale when discounted at rate q). By varying the value of a in (0, ∞) one will find that, when there is irregularity, in general there is a discontinuity of V (a) at a and otherwise when there is regularity, there is always continuity at a. When there is irregularity, the choice of a = x * n is the unique point for which the discontinuity at a disappears and the function V (a) turns out to be pointwise minimal in C (consistently with Dynkin's characterization of least superharmonic majorant to the gain) and pointwise maximal in C * . When there is regularity, the minmal curve indexed in C and simultaneously the maximal curve in C * will occur by adjusting a so that the gradients either side of a match which again turns out to be the unique value a = x * n .
(ii) From arguments presented in Novikov and Shiryaev (2004) together with the supporting arguments given in this paper, it is now clear how to handle the gain function G(x) = 1 − e x + for Lévy processes instead of random walks as well as how to handle the pasting principles at the optimal boundary. We leave this as an exercise for the reader. | 4,142.6 | 2005-07-22T00:00:00.000 | [
"Mathematics"
] |
Mixing Enhancement of Non-Newtonian Shear-Thinning Fluid for a Kenics Micromixer
In this work, a numerical investigation was analyzed to exhibit the mixing behaviors of non-Newtonian shear-thinning fluids in Kenics micromixers. The numerical analysis was performed using the computational fluid dynamic (CFD) tool to solve 3D Navier-Stokes equations with the species transport equations. The efficiency of mixing is estimated by the calculation of the mixing index for different cases of Reynolds number. The geometry of micro Kenics collected with a series of six helical elements twisted 180° and arranged alternately to achieve the higher level of chaotic mixing, inside a pipe with a Y-inlet. Under a wide range of Reynolds numbers between 0.1 to 500 and the carboxymethyl cellulose (CMC) solutions with power-law indices among 1 to 0.49, the micro-Kenics proves high mixing Performances at low and high Reynolds number. Moreover the pressure losses of the shear-thinning fluids for different Reynolds numbers was validated and represented.
Introduction
Different applications of micromixers can be found in biomedical, environmental industries and chemical analysis, where they are essential components in the micro-total analysis systems for such applications requiring the rapid and complete mixing of species for a variety of tasks [1,2]. Various characteristics of micromixers have been developed to produce fast and homogenous mixing; micromixers are usually classified according to their mixing principles as active or passive devices [3][4][5][6]. Active micromixers need an external energy supply to mix species. Passive micromixers are preferable due to their simple structures and easy manufacturing and greater robustness and stability [7]. To get enhanced mixing flows, the chaotic advection technique is employed as one of the strongest passive mixing methods for non-Newtonian flows. One of the potential chaotic geometries which can present a good method to advance the performances of the hydrodynamics is called the Kenics mixer. A Kenics mixer is a passive mixer created for conditions of laminar flow; it is generally constituted of a series of helical elements, and each element rotated 90 • relatives to the second. The helical elements are designed to divide the flow into two or more flows, turn them and afterward recombine them [8,9]. Kurnia [10] observed that the inserted of a twisted tape in a T-junction micromixer creates a chaotic movement that improves the convection mass transfer at the expense of a higher pressure drop. For dean instability, Fellouah et al. [11] investigated experimentally the flow field of power-law and Bingham fluids inside a curved rectangular duct. Pinho and White law [12] studied the effect of dean number on flow behavior of non-Newtonian laminar fluid. For twisted pipes, Stroock et al. [13] realized a twisting flow microsystem with diagonally oriented ridges on the bottom wall in a microchannel. They attained chaotic mixing by alternating velocity fields. Tsai et al. [14] calculated the mixing fluid of non-Newtonian carboxymethyl cellulose (CMC) solutions in three serpentine micromixers. They summarized that the curvature-induced vortices develop in a long way the mixing efficiency. Bahiri et al. [15], using grooves integrated on the bottom wall of a curved surface, studied numerically the mixing of non-Newtonian shear-thinning fluids. They illustrated that the grooves elevated the chaotic advection and augmented the mixing performance. Naas et al. [16,17] and Kouadri et al. [18] used the two-layer crossing channels micromixer to evaluate the mixing rate hydrodynamics and thermal mixing performances, finding that the mixing rate was nearly 99%, at a very low Reynolds number. Kim et al. [19] experimentally characterized the barrier embedded Kenics micromixer. They showed that the mixing rate decreases as the Reynolds number augments for the suggested BEKM chaotic micromixer. Hossain et al. [20] experimentally and numerically analyzed a model of micromixer with TLCCM that can achieve 99% mixing over a series of Reynolds number values (0.2-110).
There are not many works in the literature for non-Newtonian fluid mixing using Kenics micromixers. Therefore, the idea of the current study is to investigate the performance of a microKenics for mixing shear thinning fluids, trying to attain a high mixing quality and pressure drop. Using CFD code, numerical simulations were carried out at Reynolds numbers ranging from 1 to 500 in order to examine the flow structures and the hydrodynamic mixing performances within the concerned Kenics micromixer. Various CMC concentrations were proposed to investigate the chaotic flow formation and thermal mixing performances within the suggested micromixer. In order to get important homogenization of the fluids' indices and pressure losses will be appraised.
Governing Equations and Geometry Discretion
Steady conservation equations of incompressible fluid are solved numerically in a laminar regime by using the ANSYS Fluent TM 16 CFD software (Ansys, Canonsburg, PL, USA) [21], which is fundamentally based on the finite volumes method. We choose the SIMPLEC scheme for velocity coupling and pressure. A second-order upwind scheme was nominated to solve the concentration and momentum equations. The numeric's were ensured and simulated to be converged at 10 −6 of root mean square residual values.
A non-Newtonian solution of carboxyméthyl cellulose (CMC) is used as working fluid for the simulation of fluid flows. The density of CMC solutions according to Fellouah et al. [11] and Pinho et al. [12], is 1000 kg/m 3 . The coherence coefficient and the power law indexes of the CMC solutions are indicated in Table 1, where the diffusion coefficient equals 1 × 10 −11 m 2 /s. The 3D governing equations for incompressible and steady flows are continuity, momentum and species mass fraction convection diffusion equation: where U (m/s) denotes the fluid velocity, ρ (kg/m 3 ) is the fluid density, P (Pa) is the static pressure, µ (w/m·k) is the viscosity, C i is the local mass fraction of each species by solving the convection-diffusion equation for the i-th species. D i is the mass diffusion coeffcient of the species "i" in the mixing. For power-law non-Newtonian fluids the apparent viscosity is: where k (w/m·k) is called the consistency coefficient and n is the power-law index . γ (s −1 ) is the shear rate.
For a shear thinning fluid (Ostwald model), the generalized Reynolds number (Re g ) is defined as [6]: where D h (m) is the hydraulic diameter of the micromixer.
To measure the efficiency of the Kenics micromixers, mixing rate is defined as follows [16]: where σ signifies the standard deviation of mass fraction and characterized as: N indicates the number of sampling locations inside the transversal division, C i is the mass fraction at examining point i, and C is the ideal mixing mass fraction of C i , and it is equal to 0.5, σ 0 (Pa·s −1 ) is the standard deviation at the inlet part.
The boundary conditions are a condition of adhesion on the walls where the velocities are considered to be zero, uniform velocities are executed at the inlets, the mass fraction of the fluid at the inlet 1 equal to 1 and that of the inlet 2 equal to 0, an atmospheric pressure condition is considered at the exit. All walls are considered adiabatic.
The configuration ( Figure 1) is based on the Kenics KM static mixer. It consists of a tube with a diameter D = 1.2 mm and a length L = 16.5 mm, with six helical parts. Each part has a thickness t = 0.025 mm and length li = 1.5 mm. The final helical blade element is placed at the distance l = 3 mm from the tube outlet. The angle between the two inlet entrances is α = 35 • .
where U (m/s) denotes the fluid velocity, ρ (kg/m 3 ) is the fluid density, P (Pa) is the static pressure, μ (w/m•k) is the viscosity, C is the local mass fraction of each species by solving the convection-diffusion equation for the i-th species. D is the mass diffusion coeffcient of the species "i" in the mixing. For power-law non-Newtonian fluids the apparent viscosity is: where k (w/m•k) is called the consistency coefficient and n is the power-law index γ (s −1 ) is the shear rate. For a shear thinning fluid (Ostwald model), the generalized Reynolds number (Reg) is defined as [6]: where (m) is the hydraulic diameter of the micromixer. To measure the efficiency of the Kenics micromixers, mixing rate is defined as follows [16]: where σ signifies the standard deviation of mass fraction and characterized as: N indicates the number of sampling locations inside the transversal division, C is the mass fraction at examining point i, and C is the ideal mixing mass fraction of C , and it is equal to 0.5, σ (Pa•s −1 ) is the standard deviation at the inlet part.
The boundary conditions are a condition of adhesion on the walls where the velocities are considered to be zero, uniform velocities are executed at the inlets, the mass fraction of the fluid at the inlet 1 equal to 1 and that of the inlet 2 equal to 0, an atmospheric pressure condition is considered at the exit. All walls are considered adiabatic.
The configuration ( Figure 1) is based on the Kenics KM static mixer. It consists of a tube with a diameter D = 1.2 mm and a length L = 16.5 mm, with six helical parts. Each part has a thickness t = 0.025 mm and length li = 1.5 mm. The final helical blade element is placed at the distance l = 3 mm from the tube outlet. The angle between the two inlet entrances is α = 35°.
Grid Independence Test
In this work, which investigated fluid mixing in laminar flow in which the convergence is limited by the pressure-velocity coupling, a converged and stable solution was obtained using SIMPLE algorithm. The pressure correction under relaxation factor is given at 0.3, which facilitates the acceleration of convergence for the second order upwind scheme. The convergence of iterative calculations was attained when the specified value of the residual quantities are less than 10 6 .
To choose an adequate mesh; an unstructured mesh has been generated with tetrahedral elements; several grids were tested for the present proposed geometry (Table 2). Numerical variations of the mixing index are not important after the marked cell size; therefore this can be studied as the better mesh for the calculation. This mesh size with 338,438 cells will give better results with less time as compared to the finer mesh.
Numerical Validation
A numerical study of the pressure drop in a T-Junction Passive micromixer to verify the accuracy of the CFD with that of C. Kurnia et al. [10], see Table 3. The comparison illustrated a good agreement where the relative error of the numerical results is less than 1%.
Results and Discussion
To compare the numerical results a quantitative comparison was made for Newtonian fluids in a Kenics micromixer as shown in Figure 2. The mixing performance of the microKenics was compared with other three micromixers [15]: the SHG micromixer (staggered herringbone), a mixer based on patterns of grooves on the floor of the channel and a 3D serpentine micromixer with repeating "L-shape" units and the TLSCC (two-layer serpentine crossing channels) a micromixer which the principle serpentine channels with an angle of 90 • regarding the inlets.
The mixing indices were compared using the range of Reynolds number between 0.1 to 120. The Kenics micromixer and TLSCC displayed exceptional mixing performance compared to the other two micromixers for Re < 30, with superiority of the microKenics, as the Kenics device shows almost complete mixing (MI > 0.999) at low Reynolds numbers (Re = 0.1-5).
The mixing index at the exit of Newtonian and non-Newtonian fluid in Kenics was compared with the curved micromixer of Bahiri et al. [15]. For range numbers of Reynolds (0.1-500) and shear thinning index n (1, 0.85, and 0.6), as shown in Figure 3, the Kenics proved a high mixing efficiency.
The effects of the Reynolds number and the behavior index on the chaotic mixing mechanism were qualitatively analyzed by presenting the contours of the mass fraction at the various transverse planes P1-P7 and the exit. Figure 4 shows the improvement of the mass fraction distribution on the y-plane along the micromixer at Re g = 25 and n = 0.73. The twist of element and the sharp change of angle between blades affected the intensity of the fluid particles' movement and the mixing performance. Figure 5 show the streamlines flow in the microKenics for fluid behavior 0.73 and for Re g 0.1 and 50. The flow field enhanced the secondary flow along the micromixer for all cases of Re due to the blade conurbation of the Kenics device. Figures 6 and 7 present the mass fraction contours for different power-law indices (n = 1 and 0.49), with Reynolds numbers ranging between 0.1 and 50, at the different cross-sectional planes. Table 4 gives the distances between different plans to analyze the local flow behavior. For n = 1, the flow behavior presented by the mass fraction contours shows that the fluid layers for P1 to P4 advance in the same mode of molecular diffusion.
When the Reynolds number increases to 50, the quality of the mixing begins to improve, therefore a homogeneous mixture is obtained at the exit plane in the microKenics for all the values of the behavior index. Figure 8 shows the variation of mixing index versus generalized Reynolds number for different values of power-law index inside the microKenics. It can be seen that for all values of n, the micromixers have nearly the same high mixing index (Mi = 0.99). The mixing index at the exit of Newtonian and non-Newtonian fluid in Kenics was compared with the curved micromixer of Bahiri et al. [15]. For range numbers of Reynolds (0.1-500) and shear thinning index n (1, 0.85, and 0.6), as shown in Figure 3, the Kenics proved a high mixing efficiency. The mixing index at the exit of Newtonian and non-Newtonian fluid in Kenics was compared with the curved micromixer of Bahiri et al. [15]. For range numbers of Reynolds (0.1-500) and shear thinning index n (1, 0.85, and 0.6), as shown in Figure 3, the Kenics proved a high mixing efficiency. The effects of the Reynolds number and the behavior index on the chaotic mixing mechanism were qualitatively analyzed by presenting the contours of the mass fraction at the various transverse planes P1-P7 and the exit. Figure 4 shows the improvement of the mass fraction distribution on the y-plane along the micromixer at Reg = 25 and n = 0.73. The twist of element and the sharp change of angle between blades affected the intensity of the fluid particles' movement and the mixing performance. The effects of the Reynolds number and the behavior index on the chaotic mixing mechanism were qualitatively analyzed by presenting the contours of the mass fraction at the various transverse planes P1-P7 and the exit. Figure 4 shows the improvement of the mass fraction distribution on the y-plane along the micromixer at Reg = 25 and n = 0.73. The twist of element and the sharp change of angle between blades affected the intensity of the fluid particles' movement and the mixing performance. Table 4 gives the distances between different plans to analyze the local flow behavior. For n = 1, the flow behavior presented by the mass fraction contours shows that the fluid layers for P1 to P4 advance in the same mode of molecular diffusion.
When the Reynolds number increases to 50, the quality of the mixing begins to improve, therefore a homogeneous mixture is obtained at the exit plane in the microKenics for all the values of the behavior index. Table 4. Position of the planes in the micromixer. For low Reynolds numbers (Re = 0. [1][2][3][4][5], when the species have more contact time to achieve a perfect mixing with the Kenics, the mixing index loses a part corresponding to nearly 14% of its value. In addition, by increasing the Reynolds number the fluid homonezation augments due to increases of secondary flows and advection, compared with TLCC micromixers for all cases of the power-law index. 13.5 P8 16.5 Figure 9 shows the evolution of MI along the micro-Kenics, at different planes, with various values of the behavior index and for Re = 1, 5, 10, 25 and 100. For all cases of n, we can see from this figure that MI grows progressively and reaches high values when approaching the exit plane. Thus, as mentioned before we can see in all figures, the mixing performance increases with the increase in the behavior index, for Re g ≤ 50. Figure 9 shows the evolution of MI along the micro-Kenics, at different planes, with various values of the behavior index and for Re = 1, 5, 10, 25 and 100. For all cases of n, we can see from this figure that MI grows progressively and reaches high values when approaching the exit plane. Thus, as mentioned before we can see in all figures, the mixing performance increases with the increase in the behavior index, for Re 50. The Newtonian fluid with n = 1 is independent of shear rate and maintains a constant value of viscosity among different numbers of Reynolds (Figures 10 and 11). Besides, the decrease of value of n induces the increases of the apparent viscosity of non-Newtonian fluid which also depends on the consistency coefficient of the fluid and the shear rate. Therefore, Figure 10 indicates that for a known shear rate, where the fluid with a lower power-law index has a higher apparent viscosity furthermore the apparent viscosity increases by reducing the power-law index. The Newtonian fluid with n = 1 is independent of shear rate and maintains a constant value of viscosity among different numbers of Reynolds (Figures 10 and 11). Besides, the decrease of value of n induces the increases of the apparent viscosity of non-Newtonian fluid which also depends on the consistency coefficient of the fluid and the shear rate. Therefore, Figure 10 indicates that for a known shear rate, where the fluid with a lower power-law index has a higher apparent viscosity furthermore the apparent viscosity increases by reducing the power-law index. Therefore, Figure 10 indicates that for a known shear rate, where the fluid with a lower power-law index has a higher apparent viscosity furthermore the apparent viscosity increases by reducing the power-law index. Figure 11 shows the apparent viscosity on line x = 0 at the exit of micro Kenics for all power-law indices. It is obvious that the apparent viscosity decreases by rising the Reynolds number.
The pressure drop obtained from CFD simulations was compared with the TLCC micromixer [18], for the cases with the same CMC solutions and flow speed. As remarked in Figure 12, the pressure loss of Kenics is less than that from TLCC; so the best advantage has been obtained by the Kenics. Figure 11 shows the apparent viscosity on line x = 0 at the exit of micro Kenics for all power-law indices. It is obvious that the apparent viscosity decreases by rising the Reynolds number.
The pressure drop obtained from CFD simulations was compared with the TLCC micromixer [18], for the cases with the same CMC solutions and flow speed. As remarked in Figure 12, the pressure loss of Kenics is less than that from TLCC; so the best advantage has been obtained by the Kenics. A high mixing performance of the micromixer is generally associated with a high-pressure loss that involves the required energy input for the mixing process. Figure 13 shows the pressure loss increases with the increases of generalized Reynolds and concentration level. It is evident that a decreasing power-law index leads to an increment of the apparent viscosity and consequently a rising pressure loss. A high mixing performance of the micromixer is generally associated with a highpressure loss that involves the required energy input for the mixing process. Figure 13 shows the pressure loss increases with the increases of generalized Reynolds and concentration level. It is evident that a decreasing power-law index leads to an increment of the apparent viscosity and consequently a rising pressure loss. A high mixing performance of the micromixer is generally associated with a high-pressure loss that involves the required energy input for the mixing process. Figure 13 shows the pressure loss increases with the increases of generalized Reynolds and concentration level. It is evident that a decreasing power-law index leads to an increment of the apparent viscosity and consequently a rising pressure loss.
Conclusions
In this work, mixing of CMC non-Newtonian fluids in a microKenics device was numerically investigated for different regimes (Re = 0.1-500), using CFD code. The analyses showed that the mixing performances of the Kenics micromixers consisting of repeating short twisted helical configurations is better than that of other micromixers at low Reynolds numbers.
It can be achieved that for fluids with all power-law indices studied (n = 0.49, 0.6, 0.73, 0.85, 0.9, and 1) and low Reynolds numbers (less than 8) the micromixer is an excellent one, while for the fluids with Re 12 MI start decreasing for all power-law indices, but for low power law index (n = 0.6), the MI is reached from numbers of Re 60. At elevated Reynolds numbers (Re 120), the micromixer performance is improved for all values of the power-law indices. The results confirmed that the apparent viscosity of
Conclusions
In this work, mixing of CMC non-Newtonian fluids in a microKenics device was numerically investigated for different regimes (Re = 0.1-500), using CFD code. The analyses showed that the mixing performances of the Kenics micromixers consisting of repeating short twisted helical configurations is better than that of other micromixers at low Reynolds numbers.
It can be achieved that for fluids with all power-law indices studied (n = 0.49, 0.6, 0.73, 0.85, 0.9, and 1) and low Reynolds numbers (less than 8) the micromixer is an excellent one, while for the fluids with Re > 12 MI start decreasing for all power-law indices, but for low power law index (n = 0.6), the MI is reached from numbers of Re ≥ 60. At elevated Reynolds numbers (Re ≥ 120), the micromixer performance is improved for all values of the power-law indices. The results confirmed that the apparent viscosity of CMC solutions decreases with the increase of the shear rate, while, the pressure drop increases rapidly with increasing Reynolds number and power-law index. Nevertheless it is still the slightest loss compared to other micromixers in the literature with the same mean flow speed and apparent viscosity. Data Availability Statement: Data would be available upon reader's request.
Conflicts of Interest:
The authors declare no conflict of interest. | 5,272.4 | 2021-11-30T00:00:00.000 | [
"Physics"
] |
Priority-Aware Virtual Machine Selection Algorithm in Dynamic Consolidation
In the past few years, many researchers attempted to tackle the problem of decreasing energy consumption in cloud data centers. One of the widely adopted techniques for this purpose is dynamic Virtual Machine (VM) consolidation. Consolidation moves VMs between hosts to decrease energy consumption. However, it has a negative impact on performance leading to Service Level Agreement (SLA) violations. Accordingly, selecting which VM to migrate from one host to another is a challenging task since it can affect performance. Researchers came up with several solutions and policies for efficient VM selection. In this paper, we exploit the fact that many tasks and users may tolerate some performance degradation which means, the tasks running on the VMs can be of different priorities. Accordingly, we propose augmenting consolidation with the priority concept, where low priority tasks are always selected first for migration. Towards this goal, we modified the popular Minimum Migration Time VM selection algorithm using the priority concept. The efficiency of the proposed algorithm is confirmed through extensive simulations using CloudSim toolkit and a real workload. The results show that priority awareness has a positive impact on decreasing energy consumption as well as maximizing SLA obligation. Keywords—Cloud computing; energy efficiency; service level agreement; VM consolidation; VM selection
INTRODUCTION
Virtual Machine (VM) consolidation is a useful technique for enhancing the utilization of the resources of cloud data center and reducing their energy consumption by leveraging the virtualization technology [1].Virtualization provides the ability to create more than one VM instance on a single Physical Machine (PM) host.Accordingly, it permits more than one application to be allocated on a single PM in order to enhance the overall resource utilization and reduce the overall energy consumption of the data center.
Virtualization also allows live migration of VMs.Dynamic VM consolidation adopts live migration to minimize the number of PMs to which VMs are allocated.This is achieved by shutting down an underutilized host for more energy conservation and migrating its VMs to other PMs.However, this may lead to Service Level Agreement (SLA) violations.SLAs are established between cloud service providers and users to specify the required Quality of Service (QoS).After provisioning QoS, it should be monitored to ensure it is maintained throughout the service duration.QoS provisioning and monitoring are two classical problems in computer science [2] [3].Unfortunately, when a VM migrates, its primary memory has to be transferred to the destination PM.Unfortunately, during this process, the requested CPU cannot be provided since the VM will be in a transition state which causes performance degradation and leads to SLA violation.Accordingly, enhancement of energy consumption and performance is a trade-off problem.Thus, dynamic VM consolidation techniques need to be designed with ultimate care such that not only power consumption is reduced, but also the requested QoS defined through SLAs is maintained.By carefully choosing which VMs to migrate when needed, consolidation can maintain more obligation of SLAs.
Dynamic VM consolidation is typically broken down into separate sub-problems [4]: 1) Host Overload detection: determining if a host is viewed as an overloaded one calling for a decision to choose one or more VMs to be migrated from this host.
2) Host Underload detection: determining if a host is viewed as an underloaded one calling for migrating all VMs allocated to this host to another, and switching the host to the low power mode.
3) VM Placement: finding a suitable destination host for allocating the migrated VMs from the overloaded and underloaded hosts.
4) VM Selection: a decision of which VMs should migrate from an overloaded host.
Simplifying the VM consolidation technique by diving it into four sub-problems and providing a separate algorithm for each one has the advantage of isolated examination and analysis of each algorithm to find a better approach.This work focuses on enhancement of the VM selection sub-problem.
Our proposed technique is based on the observation that some people might tolerate some performance degradation in services provided by a cloud and accept some SLA violations for cost savings while others cannot.For example, latency insensitive applications can tolerate some delay.From this prospect, we propose a novel approach in which we classify cloud user's tasks into two priority classes and deal with them in two different ways as follows: www.ijacsa.thesai.org The users with high priority tasks should get a maximum obligation of their SLAs, and the cloud service providers should accept some energy consumption.
The users with low priority tasks encounter reduced cost at the expense of accepting some SLA violations, while the cloud service providers gain more energy savings.
In other words, we treat users and their tasks differently and attempt to balance energy and performance as much as possible by considering priority when selecting a VM to migrate.It is worth noting that the cloud users' tasks priority will be assigned by the cloud provider as requested by the cloud users.
One of the most popular and effective VM selection algorithms in literature is the Minimum Migration Time (MMT) which picks the VMs with the minimum time required for migration.In this work, we propose priority-aware MMT algorithm to reduce the energy consumption while providing more SLA obligation for users with high priorities.The rest of paper is organized as follows: Section II discusses related work, Section 3 describes the proposed VM selection algorithm, Section 4 describes the experimental settings and results; and finally, the conclusion will be in Section 5.
II. RELATED WORK
VM selection and VM placement algorithms both comprise a challenging task of choosing a VM for migration and a preferable host for placement respectively.Several algorithms have been proposed in the literature for these purposes.Beloglazov et al. [4] proposed three VM selection algorithms; Random Selection (RS), MMT, and Maximum Correlation (MC).RS randomly chooses any VM for migration without any rules.The idea of MMT migration is to give preference to the VMs that require the minimum time for the whole migration process.Additionally, the VM with the maximum correlation coefficient relative to the other peer VMs on the same PM is the one selected for migration.The correlation is a parameter representing the effect of the VM on overloading the host.Moreover, the authors proposed Power Aware Best Fit Decreasing (PABFD) placement algorithm as a modification of the conventional Best Fit Decreasing (BFD) algorithm.The PABFD algorithm allocates each VM to the host that causes the least increase of power consumption due to placement.Fu and Zhou [5] proposed a novel VM selection algorithm called Meets Performance (MP).This algorithm finds the host's utilization deviation over the host overload threshold and compares it with the utilization of VMs on the host.It then selects the VM, whose migration results in shifting the utilization of the host nearer to the upper threshold.This is to reduce the number of migrations needed.Furthermore, the authors proposed a novel VM placement algorithm called Minimum Correlation Coefficient (MCC).This coefficient is used to describe the intense of correlation between the selected VM for migration and the destination host.The higher the correlation, the higher the effect on the performance of the destination host.The algorithm selects a VM with the minimum correlation with the target host to avoid degrading the performance of the other VMs allocated to it.
Rahimi et al. [6] proposed a VM placement algorithm based on priority routing.The main idea is to classify VMs based on their resource utilization, and classify hosts based on their resource availability, then give priority to the resources where CPU has the higher priority compared to the RAM, while the bandwidth has the lowest priority.After that, VMs are placed on the host with the most similar categories by creating a routing path table and considering resource priority.It is worth noting that this idea of priority is totally different from our proposed priority concept of tasks and users.
Farahnakian et al. [7] optimized VM placement by adopting Ant Colony Optimization (ACO) technique and proposed the Ant Colony System-based VM Placement Optimization (ACS-PO) algorithm.The proposed approach uses artificial ants in order to consolidate VMs and allocate them to the smallest number of active hosts based on the present requirements of the resources.What is interesting about those ants is that they work concurrently to develop VM migration plans based on a defined objective function.
Monil and Rahman [8] proposed a fuzzy VM selection algorithm.The fuzzy technique is an approach for tackling intelligent decision-making problems.The authors recognized that there are different VM selection algorithms in the literature offering different advantages; and generated a method which can aggregate the advantages of all of them in a single fuzzy logic tool.The input to the fuzzy tool is MMT and MC discussed above and the output is a VM selected for migration.
As discussed above and to the best of our knowledge, none of the algorithms on the literature classifies the cloud user's tasks based on their priorities.In this paper, such classification is exploited for delivering a better balance between energy consumption and performance in cloud data centers.Specifically, we modify the MMT algorithm using this priority concept as explained in the followings section.
III. PROPOSED ALGORITHM
As noted above, cloud service providers can satisfy their requirements of decreasing the cost of energy consumption while optimizing their resource utilization by using dynamic VM consolidation.Typically, the dynamic VM consolidation process sets up a threshold called the utilization threshold.It then monitors all active hosts' utilization ensuring that none of them exceeds this threshold.Whenever such a case is detected, some VM have to be offloaded from the corresponding source host and migrated to another destination host to avoid performance degradation on the source.
As mentioned before, dynamic VM consolidation is typically broken down into separate sub-problems [4]: 1) Host overload Detection: Local Regression Robust (LRR) [4] is one of the most efficient and widely used algorithms to set the utilization threshold and keep the summation of the utilization of all VMs bellow it.If the CPU utilization exceeds the set threshold, the consolidation process invokes the VM selection and VM placement algorithms to take an action.
2) Host Underload Detection: Host with minimum CPU utilization algorithm [4] is one of the popular and successful www.ijacsa.thesai.orgalgorithms for this purpose.The idea is to find the host with the minimum CPU utilization compared to the other hosts.This host is recognized as the underloaded host, and all VMs on it have to be migrated with attention to the destination hosts after placing the VMs to avoid making them overloaded.
3) VM Placement: PABFD [4] discussed above is the most widely known and one of the most effective VM placement algorithms.It allocates each VM to the host which causes the minimum increase of power consumption due to this allocation.
4) VM Selection is a decision of selecting which VM has to be migrated.This is where our paper contributes.Our optimization is based on the priority concept where, as discussed earlier, latency insensitive applications that tolerate delay and users who may accept some performance degradation for price savings are given lower priorities as discussed below.When the total requirements of CPU performance by the VMs exceed the available CPU capacity of the PM, the host is recognized as an overloaded host; the overloaded host may cause an increase in response time and a decrease in throughput.In this case, the cloud users do not get the expected QoS, and some VMs must be migrated from this host to decrease the CPU utilization of the host.Since live VM migration also has a negative impact on performance, low priority tasks will be the ones chosen for migration since those tasks accept some performance degradation due to migration and tolerate some SLA violation.On the other hand, the high priority tasks will be kept in the host saved from performance degradation due to migration.
As shown in Figure 1, we adopt the efficient most wellknown MMT [4] VM selection algorithm and modify it using the priority concept.We make the selection decision in two phases.In the first phase, we select all the low priority tasks from an overloaded host and prepare a list for the second phase.The second phase selects from the low priority list the VM with the minimum time required for its migration in comparison to the other peer VMs allocated to the host.The time required for migration is defined [4] as the ratio between the RAM amount used by the VM and the available network bandwidth.After migrating the selected VM, the process is iteratively repeated as long as the host is still overloaded.
IV. EXPERIMENTS AND RESULTS
Since the system of interest is Infrastructure as a Service (IaaS), which is a cloud environment intended to give the users a view of infinite computing resources, it is clear that we need to experiment with and evaluate the proposed VM selection algorithm on a large-scale virtualized data center infrastructure.However, conducting such an experiment in a real environment as a repeatable experiment is very difficult.Thus, simulations were chosen to evaluate the performance of the proposed VM consolidation technique and preserve the repeatability of experiments.CloudSim toolkit [9] is the simulation platform selected because it is a popular framework for simulating cloud computing settings.It can be used for modeling virtualized settings; supporting on-demand resource provisioning management.A recent extension to CloudSim allows energy aware simulations and supports energy-efficient strategies.This is in addition to allowing the simulation of service-oriented applications with dynamic workloads.
A. Experimental Settings
CPUs with dual-core are adequate for evaluating resource management algorithms intended to run on multi-core CPU architectures.In fact, it is essential to simulate a large number of servers to assess the efficiency of the VM consolidation mechanism.Thus, selecting less powerful CPUs for the simulations is beneficial because fewer workloads will overload a server [4].To evaluate the efficiency of the proposed algorithm, we simulated a datacenter containing 800 heterogeneous PMs with two configurations: HP ProLiant ML110 G4 (Intel Xeon 3040, dual-core 1860 MHz, 4 GB, 1 Gbps).
The characteristics of the VM instances are of types identical to those of Amazon EC2 instances except that all VMs are single core.This is because the workload data employed in the simulation comes from the single core: Extra-large Instance (2500MIPS, 3.75GB).
B. Performance Metrics
Different performance metrics are used for evaluating the proposed VM selection algorithm.We adopt the same metrics proposed and elaborated by Beloglazov and Buyya [4]:
C. Workload Data
To validate the proposed VM selection algorithm with more applicable simulations, a real workload from a CoMon system which is a monitoring infrastructure for PlanetLab [10] was used.This CPU utilization data is collected from more than thousand VMs from servers distributed over five hundred locations around the world every five minutes.Data is created from ten days randomly chosen during the months of March and April, 2011.The median value is calculated over the ten days and used with each performance metric.The basic features of this data are presented in Table 1.
D. Experimental Results
Using the PlanetLab workload data, we compare the original MMT algorithm with the priority awareness optimization.Figure 2 shows the energy consumption due to consolidation in Kwh.The results show that the priority-aware MMT decreases the energy consumption by 13%. Figure 3 shows the percentage SLATAH; the priority-aware MMT decreases the SLATAH metric by 42%. Figure 4 describes the performance degradation due to migration; choosing the low priority tasks that have minimum time to complete migration can provide 37% decrease in degradation.Figure 5 describes the overall SLA violation delivered by the consolidation technique; priority-aware MMT can provide 21% reduction of SLA violation.Figure 6 shows the energy consumption and SLAV rate; the rate is decreased by 31% when using the priority-aware MMT.Finally, Figure 7 shows the number of VMs migrated due to consolidation; they are reduced by 40% in case of the priority-aware MMT.Since live VM migration results in an overhead on the system, the better consolidation mechanism is the one which requires fewer migrations.Based on the results above, it is clear that priority awareness has a considerable positive effect on all performance metrics.In other words, the proposed priority-aware VM selection algorithm is an efficient optimization in comparison to MMT algorithm regarding all metrics.
V. CONCLUSION AND FUTURE WORK
In this paper, we proposed a novel priority-aware VM selection algorithm, which takes into consideration the priorities of tasks.This algorithm is original since, to the best of our knowledge, it is the first to exploit the priorities of cloud tasks and users.We selected the widely-used and successful MMT VM selection algorithm and showed that modifying it using priority-awareness has a positive effect on energy consumption and on all performance metrics.www.ijacsa.thesai.orgAs future work, we intend to apply the proposed priority concept and experiment with different other VM selection algorithms to ensure the reliability of the proposed approach and further assess its effectiveness.
Energy consumption in Kwh SLATAH (SLA violation Time per Active Host)
TABLE I .
WORKLOAD CPU UTILIZATION STATISTICS | 3,894.8 | 2018-01-01T00:00:00.000 | [
"Computer Science"
] |
Single-unit data for sensory neuroscience: Responses from the auditory nerve of young-adult and aging gerbils
This dataset was collected to study the functional consequences of age-related hearing loss for the auditory nerve, which carries acoustic information from the periphery to the central auditory system. Using high-impedance glass electrodes, raw voltage traces and spike times were recorded from more than one thousand single fibres of the auditory nerve of young-adult, middle-aged, and old Mongolian gerbils raised in a quiet environment. The dataset contains not only responses to simple acoustic stimuli to characterize the fibres, but also to more complex stimuli, such as speech logatomes in background noise and Schroeder-phase stimuli. A software toolbox is provided to search through the dataset, to plot various analysed outcomes, and to give insight into the analyses. This dataset may serve as a valuable resource to test further hypotheses about age-related hearing loss. Additionally, it can aid in optimizing available computational models of the auditory system, which can contribute to, or eventually even fully replace, animal experiments.
Background & Summary
Age-related hearing loss is one of the most prevalent diseases world-wide, as it affects over 65% of adults above 60 years of age 1,2 .Moreover, it is predicted to become increasingly more prevalent as our society ages.Elderly with age-related hearing loss often experience a reduced ability to communicate in daily settings.By affecting mental health, physical health, and social functioning, age-related hearing loss can result in social isolation and a decreased quality of life 3 .Moreover, age-related hearing loss is associated with an increased risk of cognitive impairment and dementia 4,5 .
Both peripheral cochlear damage and a decline of central processes in the brain are thought to contribute to age-related hearing deficits 6,7 .To unravel the different contributions of peripheral and central age-related degeneration, we studied the functioning of the auditory nerve, the sole connection between the peripheral and central auditory systems.The general aim of this work was to determine the consequences of age-related hearing loss on single auditory nerve fibre spiking activity.
An attractive and well-studied animal model to address this aim is the quiet-aged Mongolian gerbil.In this animal model, noise-induced cochlear damage is minimized, and thus the effects of aging on cochlear functioning can be studied in isolation.A large body of previous research has revealed age-related cochlear and central auditory pathologies in the quiet-aged gerbil [8][9][10][11] .Furthermore, the gerbil possesses good low-frequency hearing, which is important to refer, for example, speech-encoding deficits to the human condition 12,13 .And finally, since the gerbil can be trained to behaviourally make auditory discriminations, our results have also been directly linked to behavioural consequences of age-related hearing loss 14,15 .
The dataset presented here contains the raw waveforms and spike times from single auditory nerve fibres of young-adult, middle-aged, and old gerbils, recorded while presenting a variety of acoustic stimuli to assess the functioning of the single fibres 16 .Gerbils were anaesthetized with ketamine/xylazine injections, and auditory brainstem responses (a type of compound response) were recorded to derive a measure of their general hearing sensitivity.The auditory nerve was approached dorsally through the cerebellum and a high-impedance glass electrode was slowly moved through the nerve.Single-unit recordings were made for as long as the surgical preparation was stable.Afterwards, spikes in the data were identified, data files were organized into folders for each single unit, data were analysed to characterize the fibre, and a set of criteria was used to ensure that the single unit had been isolated from the auditory nerve bundle.Figure 1 shows a schematic overview of the study.
The dataset contains a total of 1160 single-unit auditory nerve fibres, of which 314 were recorded in old gerbils (>36 months old) that had various degrees of age-related hearing loss.A gerbil of 36 months or older is vulnerable; approximately 50% of gerbils in our facility die before reaching this age.Furthermore, successful recordings of single auditory-nerve fibres were obtained in only 43% of the experimental old gerbils (compared to 88% in young adults).Common reasons were that they die early in the experiment due to unstable anaesthesia, leading to heart failure, or they had lost all their hearing sensitivity, and single fibres did not respond to acoustic stimulation.This increases the value of the recordings reported here, especially those from the oldest gerbils.
Outcomes from these data have been published before 14,15,[17][18][19][20] .By openly sharing these data in combination with detailed metadata and start-up software, it can serve as a valuable resource to test hypotheses about age-related hearing loss that have not yet been addressed.Furthermore, the detailed description of the methods is aimed to improve reproducibility of the experiments, as well as to serve as a starting point for adding to this dataset.Data from animals, especially from those where the yield is small, should be used to the full, in an effort to reduce the number of animals sacrificed for scientific research.Finally, this dataset can aid in optimizing computational models, which may partly, or eventually even fully, replace animal experiments.
animals.
The dataset was collected from a total of 104 Mongolian gerbils, Meriones unguiculatus, of either sex that were born and raised in the animal facility at the Carl von Ossietzky Universität Oldenburg, Germany.The founder animals of this in-house colony came from Charles River Laboratories in 2009.Animals were group housed, kept on a 12:12 h light:dark cycle, fed ad libitum, and provided with cage enrichment.Average sound levels in the housing rooms were 48 and 55 dB A outside and during working hours, respectively.Sound levels were intentionally kept low, to minimize the effects of external, noise-related damage to the auditory system.Thus, the isolated effects of aging and age-related hearing loss on single auditory-nerve responses could be studied.Experimental procedures were in accordance with the ethics authorities of Lower Saxony, Germany, under the permit numbers AZ 33.9-42502-04-11/0337, AZ 33.19-42502-04-15/1990, and AZ 33.19-42502-04-21/3695. Surgical procedures.Anaesthesia.Initial anaesthesia of the gerbils was accomplished by intraperitoneal injection of ketamine (135 mg/kg; Ketamin 10%, bela-pharm GmbH or Ketamidor, WDT) and xylazine (6 mg/kg; Xylazin 2%, Ceva Tiergesundheit GmbH or Xylazin, Serumwerk) diluted in saline (0.9% NaCl).Anaesthesia was maintained by hourly subcutaneous injections with one third of the initial dose of the same mixture (45 mg/kg ketamine and 2 mg/kg xylazine).Additionally, if a hind-paw reflex was detected, an additional one sixth of the initial dose was injected.Meloxicam, a non-steroidal antiphlogistic agent (1 mg/kg; Metacam 2 mg/ml, Boehringer Ingelheim) was injected at the beginning of the experiment when the animal was sensitive to the surgical procedures.In some experiments, a lidocaine ointment (Xylocain Gel 2%, Aspen Pharma Trading Limited) was applied topically on the muscles overlying the place of craniotomy as an additional local analgesic.Anaesthetic depth was constantly monitored by electrocardiogram recordings using intramuscular needle electrodes in the front leg and the contralateral hind leg (DAM50, World Precision Instruments), and visualized on an oscilloscope (SDS 1102CNL, SIGLENT Technologies).Body temperature was monitored via a rectal probe and maintained at 38 °C by a homeothermic blanket (Harvard Apparatus).To avoid airway obstruction during the experiment, some middle-aged and old gerbils were tracheotomized, but breathed unaided.Of the total of 104 gerbils, 78 received additional oxygen (flow 1.5 l/min) in front of the tracheotomy or snout throughout the experiment.For each animal, details of the anaesthesia are specified in the metadata of the experiment ('exp.info.anesthesia').
Placement of the sound system.The head of the animal was fixed in a bite bar (Kopf Instruments, Tujunga CA, USA), with the head mount, in addition, fixed to the exposed frontal skull using dental cement.A small opening in the bulla was made to prevent build-up of negative pressure in the middle-ear cavity.The pinna was removed to expose the bony ear canal.Subsequently, the ear bar, which contained the speaker and calibration microphone, was placed directly onto the bony ear canal, and sealed using petroleum jelly.To avoid damage to the tympanic membrane, the diameter of the ear bar's front end was slightly larger than the ear canal.For the current dataset, we used two different sound systems: either an ER-2 speaker (Etymotic Research, Inc.) in combination with a Knowles microphone (FG-23329, Knowles Electronics), or a Sennheiser speaker (IE-800, Sennheiser) in combination with an Etymotic microphone (ER7-c, Etymotic Research).The sound system that was used is specified in the metadata of each experiment ('exp.info.sound_system').
Accessing the auditory nerve.To access the auditory nerve, a craniotomy over the right cerebellum was carried out by carefully removing parts of the occipital, parietal, and temporal bones.Following a duratomy, cerebellar tissue was slowly aspirated until the brainstem was exposed.To expose the auditory nerve, a few small balls of paper tissue (<0.5 mm), drenched in saline, were placed between the temporal bone and the brainstem.auditory brainstem response.Stimulus generation.Auditory brainstem responses (ABRs) were used to determine general hearing sensitivity and to monitor cochlear health during the single-unit recordings.ABRs were measured during the presentation of custom-generated chirps (0.3-19 kHz, 4.2-ms duration, 5-dB step size, 200-500 repetitions), to compensate for the frequency-dependent travelling wave delay in the gerbil cochlea 21,22 .Chirps were generated in MATLAB (version R2015b; The MathWorks, Inc., Natick, Massachussets, United States) and were calibrated and equalized using the most recently obtained calibration file.After each (re)placement of the ear bar, the calibration file was acquired by measuring sound pressure level (SPL) near the eardrum with the miniature microphone sealed in the same ear bar and the output amplified by a microphone amplifier (MA3, Tucker Davis Technologies [TDT]).Stimuli were presented through an external audio card (Hammerfall DSP Multiface II, RME Audio; 48 kHz sampling rate), amplified (HB7, TDT), and presented through the small speaker sealed into the ear bar.
Waveform recording.To record the ABR, platinum needle electrodes were placed subdermally ventral to the ear canal, and in the ipsilateral neck muscle for recording and referencing, respectively.The output of the needle electrodes was fed into an amplifier (1,000x amplification, 0.3-3 kHz bandpass filter; ISO-80, World Precision Instruments) and recorded using the external RME audio card.Custom-written MATLAB software (R2015b) averaged and stored the ABR waveforms across stimulus levels.ABR thresholds were defined as the lowest level that evoked clear ABR waves and a wave I amplitude >4 µV.The stimuli and thresholds of the ABR are specified in the metadata of each experiment ('exp.info.ABR').
Recording of single unit auditory nerve fibres.After visualizing the auditory nerve bundle, single units were recorded using glass micropipette electrodes (GB120F-10, Science Products GmbH) pulled on a P-2000 electrode puller (Sutter Instruments Co.).Electrodes were filled with a high concentration potassium-chloride solution (3M-KCl) and had an impedance between 5 and 50 MΩ.After the electrode was mounted in the holder, it was manually positioned just above the auditory nerve bundle using a micromanipulator (Märzhäuser).The electrode holder was attached to an inchworm motor (IW-711, Burleigh, Inc.), which could be controlled remotely via a piezo microdriver and handset (6000 ULN and 6005 ULN handset, Burleigh).An Ag/AgCl pellet electrode (Warner Instruments) served as an electrical reference.Electrical signals were amplified (10x; WPI 767, World Precision Instruments), filtered (50/60 Hz; Hum Bug, Quest Scientific), made audible through a speaker (MS2, TDT), visualized on an oscilloscope (SDS 1102CNL, SIGLENT Technologies), digitized (RX6, TDT; 48,828 Hz sampling rate), and displayed in a graphical user interface (GUI) on a personal computer using custom-written MATLAB software (R2015b).While a broad-band noise search stimulus (50-70 dB SPL) was played through the in-ear speaker, the electrode was slowly advanced through the auditory nerve (1-5 µm step size), until spikes were seen on the oscilloscope and/or heard through the MS2 speaker.The hardware used to record single-unit auditory nerve fibres was kept the same throughout all experiments and is specified in the metadata of each experiment ('exp.data.recording_system').
Data acquisition.
Stimuli to characterize the auditory nerve fibre.After spikes were observed that preferably also responded to the broad-band noise search stimulus, a quick audio-visual estimate of the fibre's best frequency (BF) and threshold were obtained, using online sliders of tone-burst frequency and level in the software's GUI.The response range estimates were based purely on audio-visual cues and no spike-rate criterion was initially applied.Next, tone bursts with a frequency ranging from well below to well above the audio-visually estimated BF were presented (~1.5 octaves wide) at around 10 dB above the audio-visually estimated threshold, with a step size varying between 50-250 Hz, depending on the frequency range.These data are stored in the field 'exp.data.BF' .The unit's BF was defined during the experiment from the frequency-response curve as the tone frequency with the highest spike rate.Next, to obtain the unit's rate-level function (RLF), tone bursts at BF were presented at a range of levels.These data are stored in the field 'exp.data.RLF' .Depending on how stable the recording was, and on the research question for the experiment, data were also recorded while varying both the tone's frequency and level to determine the fibre's response field, tuning curve, and characteristic frequency (CF) ('exp.data.CF'), while presenting tones of various levels at the best frequency with more repetitions ('exp.data.PH'), while presenting clicks ('exp.data.CLICK'), and in silence ('exp.data.SR').Methods and criteria to further calculate the unit's response characteristics, such as BF, threshold, CF, spontaneous rate, phase locking, and latency, are described below (in the section 'Data analysis for technical validation').
Except for the clicks, all stimuli were calibrated using custom-made MATLAB software (R2015b) according to the latest calibration file.A new calibration file was obtained after each (re)placement of the ear bar, using the miniature microphone sealed in the ear bar near the speaker.2-sample condensation clicks are ~97 dB pe (peak equivalent) SPL when presented with 20 dB attenuation, which was the default setting in these click recordings.The attenuation of the click can be found in the metadata of each click recording ('exp.data.CLICK.curvesettings').Metadata of tone bursts, such as acquisition duration, stimulus delay, stimulus ramps, and randomization, can be found in the metadata of the respective recording (e.g., 'exp.data.BF.curvesettings').The Data Structure document ('Data_Structure.pdf'),published along with the dataset 16 , can be consulted for detailed descriptions on the variables stored in this field.
Complex acoustic stimuli.In addition to the recordings to characterize the auditory nerve fibre, many experiments also included auditory nerve fibre responses to complex acoustic stimuli.Briefly, in 21 of 104 experiments, responses to two 1-s noise bursts were recorded, where the second burst of each stimulus pair was 180° phase-inverted relative to the first burst (60 frozen repetitions).Responses to these noise bursts were used to study the effects of age-related hearing loss on single fibre temporal coding 17 .Next, in 5 of 104 experiments, responses to Schroeder-phase harmonic tone complexes with various duty cycles, sweep directions, and velocities were recorded 14 .In 17 of 104 experiments, responses to consonant-vowel-consonant logatomes were recorded.These responses were used to study vowel discrimination and representation in single fibres of young-adult and quiet-aged gerbils 15,19 .These complex acoustic stimuli were presented as .wavfiles; the waveforms, sampling rates, and number of samples are included in the dataset ('exp.data.*NOISE/SPS/CVC*.acous-tic_stimulus').Furthermore, the metadata sheet ('metadata.csv'),published along with the dataset 16 , indicates in which experiment these stimuli were presented.In addition, single-fibre responses to other complex acoustic stimuli were also obtained from these experiments, including responses to vowel-consonant-vowel logatomes in quiet and in noise, responses to TFS1 stimuli, which are sets of harmonic and inharmonic tone complexes that differ only in their temporal fine structure but not in their envelope, as developed by Moore & Sek 23 , and responses to amplitude-modulated tones in various levels of broadband noise with a spectral notch centred at the carrier frequency of the amplitude-modulated tone.The responses of these datasets are not yet fully analysed and will be uploaded and added to the full dataset as soon as the studies are published.
Spike detection.
During data collection, spike triggering was defined interactively.This allowed the researcher to estimate BFs and thresholds directly after collecting the data.However, spike amplitude often varied during the recording and the spike trigger could be difficult to track accurately online.Therefore, spike triggering was more carefully revisited offline by manually checking and adjusting the spike trigger trial-by-trial by a trained and experienced scientist.Raw waveforms were band-pass filtered (300-3000 Hz) with a 6 th -order type II Chebyshev filter (cheby2 MATLAB-function, 20-dB roll off).A manually set spike trigger threshold was applied to all trials based on visual inspection of the first five trials.Subsequently, each trial was carefully inspected, and the spike trigger was adjusted on a trial-by-trial basis whenever the trigger level was too low and included baseline activity or when the trigger level was too high and, as such, excluded spikes.Spike times were defined by the time of the peak in each waveform snippet that exceeded the set spike trigger.The metadata from this offline spike detection are stored in 'exp.data.[recordingtype].curvesettings.analysis'.The Data Structure document ('Data_Structure.pdf')can be consulted for detailed descriptions on the variables stored in this field.Note that the stored spike trigger applies to the filtered waveforms, as described above, and not to the raw waveforms as stored in the 'curveresp' fields.The resulting spike times are stored in 'exp.data.[recordingtype].curvedata.spike_times'.Calculating spike rate.For the analysis of recordings with responses to tones (BF, CF, PH, and RLF recordings), the tone-burst-evoked spike rate was calculated separately for each trial.The number of spikes that were recorded during the presentation of the tone, i.e. between t 1 ( = stimulus onset) and t 2 ( = stimulus onset + stimulus duration), were divided by the stimulus duration in s.Subsequently, spike rates were averaged over the number of repetitions of unique frequency-level combinations.The experimenter had the option of including silent trials interleaved between the tone-burst trials, presented as often as each unique frequency-level combination (defined as the number of repetitions).These silent trials can be used to estimate the unit's spontaneous rate.When this option was included, spontaneous rates associated with these recordings (stored in 'exp.data.
Data analysis for technical validation.
[recording type].analysis.sr')were always calculated over the total trial duration (as opposed to the stimulus duration) and averaged over the repetitions containing silent trials.By contrast, spontaneous rates stored in 'exp.data.SR.analysis.sr'were derived from longer trials (~ 2.4 s) and were also averaged over the repetitions.As the total recording time in silence of the SR recording type was longer than that of the silent trials in tonal recording types (~240 s in SR recordings vs. ~0.8s in RLF recordings), the spontaneous rate estimate is likely to be the more precise for the SR recording type.For units without data in the 'SR' field, the author advises choosing the tonal recording type with the most repetitions, indicating the longest total time and, hence, the most reliable estimate of spontaneous rate.This is typically the PH recording ('exp.data.PH.analysis.sr'),followed by the RLF recording ('exp.data.RLF.analysis.sr').
Analysis of responses to tone bursts.For BF recordings, the mean tone-evoked spike rate was plotted as a function of stimulus frequency.A smoothing spline function was fitted to the data and the peak of this fitted function stored as the best frequency in Hz ('exp.data.BF.analysis.bf'),according to Aralla, et al. 24 .The frequency-response curve can be reconstructed by plotting the mean and standard deviation of the spike rates as a function of stimulus frequency, which are stored in 'exp.data.BF.analysis.rates', 'exp.data.BF.analysis.stdevs', and 'exp.data.BF.analysis.freqs', respectively.This analysis procedure was carried out by the function BFextract_func.m.
For RLF recordings, the mean tone-evoked spike rate was plotted as a function of stimulus level.The threshold was defined as the lowest stimulus level evoking a spike rate higher than 15 spikes/s and higher than [mean spontaneous rate + 1.2 times standard deviation spontaneous rate].Spontaneous rate was determined from the silent trials of the same recording.The threshold was stored in 'exp.data.RLF.analysis.threshold' .The rate-level function can be reconstructed by plotting the mean and standard deviation of the rates as a function of stimulus level, which are also stored in 'exp.data.RLF.analysis' .The frequency of the tone burst corresponds to the best frequency as determined online during the experiment and is close, but often not exactly the same as the one stored in exp.data.BF.analysis.bf.The analysis procedures were carried out by the function RLFextract_func.m.
PH recordings are like RLF recordings, except that they contain fewer levels and more repetitions per level.As such, phase locking can be reliably studied per stimulus level.PH recordings were typically only collected for auditory nerve fibres with a relatively low best frequency (<~5 kHz).Vector strength (vs) was calculated as follows: where N is the total number of spikes and ϕ(j) is the phase of the j th spike within the period of the tone.The vector strength is calculated based on spikes from all repetitions for each level and is stored as an array ('exp.data.PH.analysis.vs')along with the stimulus levels ('exp.data.PH.analysis.levels')and frequency ('exp.data.PH.analysis.frequency').The significance of the vector strength (vs) is determined by calculating a p-value as follows: When N < 50, the p-value is NaN (which stands for Not-a-Number, a value of a numeric data type that does not contain a number), also making the vector strength invalid.When p < 0.001, the vector strength is considered significant and meaningful.P-values corresponding to the vector strengths at each stimulus level are stored in 'exp.data.PH.analysis.prob' .The analysis procedures were carried out by the function PHextract_func.m.
In CF recordings, responses were recorded to tone bursts while varying both tone frequency and level.For each stimulus frequency, the fibre's rate threshold was defined as the lowest level with a spike rate higher than T = [mean spontaneous rate + 1.2 times standard deviation spontaneous rates].Spontaneous rate was determined from the silent trials of the same recording.T could be manually adjusted when needed, for example for fibres with a spontaneous rate of 0 spikes/s.The tuning curve was constructed by plotting the rate threshold as a function of the stimulus frequency.Fibre threshold is defined as the lowest threshold of the tuning curve ('exp.data.CF.analysis.threshold')and the characteristic frequency (CF) as the stimulus frequency that gave rise to this threshold ('exp.data.CF.analysis.cf').The Q 10dB ('exp.data.CF.analysis.q10')was calculated by dividing the characteristic frequency by the tuning-curve bandwidth at 10 dB above threshold.Q 10dB is NaN (Not-a-Number) when this bandwidth could not reliably be established, for example when the frequency range was too narrow.The response field can be reconstructed by creating a 3D-surface plot of the mean rates as a function of stimulus frequency and stimulus level, which are also stored in 'exp.data.CF.analysis' .The analysis procedures were carried out by the function CFextract_func.m.Analysis of responses to clicks.From CLICK recordings, the response latency was determined using three different methods.A Poisson probability density function was constructed from the spikes that were evoked after the onset of the click, across all repetitions.The response latency was defined as the time when this function fell below a threshold of 10 −6 25 and was stored in 'exp.data.CLICK.analysis.latency_poisson' .Response latency was also measured by the first incidence when two consecutive bins (0.05-ms bin size) in the peristimulus time histogram were higher than the highest bin before the onset of the click 26 .This value was stored in 'exp.data.CLICK.analysis.latency_2bins' .Finally, the mean and median first-spike latency (FSL) were calculated and stored in 'exp.data.CLICK.analysis.fsl_mean'and 'exp.data.CLICK.analysis.fsl_median' .Furthermore, the standard deviation ('exp.data.CLICK.analysis.fsl_std'),variance ('exp.data.CLICK.analysis.fsl_var'),and interquartile range ('exp.data.CLICK.analysis.fsl_iqr') of the first-spike latencies across the repetitions were stored as a measure of first-spike jitter.All click latencies are relative to the onset of the click.Analyses were carried out by the function Clickextract_func.m.
Data analysis of complex stimuli.
From NOISE, CVC, and SPS recordings, the number of trials that can be included in the analysis as well as the trial indices are stored in 'exp.data.[recordingtype].analysis.ntrials'and 'exp.data.[recordingtype].analysis.trials', respectively.Furthermore, the average rate response, calculated over 5-ms bins, along with the time vector of the bin centres, are stored in 'exp.data.[recordingtype].analysis.PSTH_ rates' and 'exp.data.[recordingtype].analysis.PSTH_centers' , respectively.The PSTH_rates and PSTH_centers variables can be used to plot the mean peri-stimulus time histogram (PSTH) of the recording.
Data Records
Repositories.All data files from individual animals ('G#.mat'), as well as the data file containing all experiments ('all_AN_data.mat'), the metadata sheet ('metadata.csv'), and the document describing all fields in the structure ('Data_Structure.pdf'),are shared on the DRYAD server 16 .The data can easily be downloaded without restrictions and are shared under Creative Commons 0 (CC0).The software toolbox is shared on the Zenodo server (https://doi.org/10.5281/zenodo.10370064), is linked to the dataset on DRYAD, and is licensed under the GNU General Public Licence (GPL) 16 .The full dataset is also available as the function data_heeringa2024 in the Auditory Modeling Toolbox (AMT) version 1.6 27 .
Raw data from individual animals.Data were stored in a nested MATLAB structure (type struct) to preserve the organization between the raw data and the associated metadata.For each animal, one struct was made with a standard variable name 'exp' , which was saved as a .mat-file.Figure 2 shows an overview of the hierarchy in the struct and the location of the different kinds of metadata, outcomes, spike times, and raw waveforms.In the first layer of the struct, there are three fields: 1) 'exp.animalID', a string with the unique ID of the experiment, which is similar to the name of the data file, 2) 'exp.info', a struct with all metadata relevant to the experiment, e.g., the animal's sex, age, and hearing threshold, but also the sound and recording systems that were used, 3) 'exp.data' , a struct with the data that were recorded from the single auditory nerve fibres.
The single-unit data are organized as follows.All data recorded from one fibre were stored in one row (#) in the struct ('exp.data(#)').Each fibre has a unique name within that animal, stored in 'exp.data(#).unit' .All fibres have the same fields, that can be either filled or empty depending on whether the recording was obtained from that given fibre.Within a filled data field, e.g.'exp.data(#).BF' , there are again five fields: 1) 'exp.data(#).BF.filename' , a string with the original filename, 2) 'exp.data(#).BF.analysis' , a struct containing analysed outcomes of the recording, 3) 'exp.data(#).BF.curvedata' , a struct that stores the spike times and the variables for each trial, 4) 'exp.data(#).BF.curvesettings' , a struct that stores all metadata relevant to the recording, 5) 'exp.data(#).BF.curveresp' , a struct that stores the raw recorded voltage traces for each trial.Each entry in this structure is described in detail in the document 'Data_Structure.pdf' .
For each individual gerbil (also called an experiment in this context), there is a single .matfile containing all associated raw data.To keep file size within workable limits of < 1 Gb, data from a few experiments (n = 3 at time of publication) were separated into two or three different .matfiles.These can be recognized by the '_#' after the animal ID in the filename, where # is the sequence number.While the metadata of the experiment are the same between the different files from one experiment ('exp.animalID'and 'exp.info'), the data of the single fibre recordings are different between the sequence numbers ('exp.data').
Files that help users search through the dataset.
There is one .matfile that contains all experiments in one struct ('all_AN_data.mat').When this file is loaded into MATLAB, the variable is named 'all_exp' and the experiments are all listed consecutively with their corresponding animalID, info, and data fields.In this struct, the raw waveforms ('curveresp') and, if available, the acoustic stimulus ('acoustic_stimulus') were deleted to keep the file size manageable.The struct can be easily searched through to find, for example, animals of a certain age, experiments in which certain stimuli were presented, or auditory nerve fibres with a certain range of best frequencies.Furthermore, this struct can be used to make a metadata sheet of the latest version of the dataset, using the code check_dataset.m.
The dataset consists of 104 experiments, with a total of 1160 single auditory nerve fibres.Table 1 lists the characteristics of the gerbils and the fibres in this dataset.Note the slightly skewed distribution of quiet-aged gerbils towards more males.This is mostly due to a high risk of ovarian cancer in older female gerbils, resulting in more early deaths before or during the experiments of females compared to males.To illustrate age-related hearing loss in both the male and female animals of this dataset, a rough estimate of hearing sensitivity, as determined by the auditory brainstem response (ABR) to chirps (0.3-19 kHz), is plotted as a function of the animal's age (Fig. 3).Note the large variability in age-related hearing loss among the old animals, which is typical for the Mongolian gerbil 8 .
technical Validation
Verification of single-unit recording and specificity to the auditory nerve bundle.First, the recording was inspected for the possibility of a multi-unit recording, that is when spikes were derived from more than one fibre.Inter-spike intervals were assessed through the output of the function checkAN.m (see Fig. 4b).Units with multiple inter-spike intervals <0.6 ms, which is the absolute refractoriness of auditory nerve fibres 28 , were excluded from the dataset.We encountered this situation only rarely.Next, to ensure that the spikes derived from an auditory nerve fibre, and not from a neuron of the cochlear nucleus which can occasionally be encountered in the same general area of electrode placement, three criteria were used: 1.The median spike waveform across all spikes of one recording was carefully inspected for the presence of a prepotential, which would indicate that the spikes derived from a ventral cochlear nucleus bushy cell 29,30 .The function checkAN.m was used for this purpose, which plots one unfiltered trial of the recording (Fig. 4a), the inter-spike interval histogram (Fig. 4b), the first 300 spike waveforms (Fig. 4c), and the median spike waveform with 95% confidence intervals (Fig. 4d).We did not encounter any prepotentials in our recordings.2. The shape of the rate-level function was carefully checked for atypical shapes.Typically, rate-level functions from the auditory nerve fall into one of the following three categories: straight, sloping saturating, or flat saturating 20,31,32 .When a rate-level function showed nonmonotonicity at levels lower than 80 dB SPL, it indicated that the spikes derived from a non-primary cell receiving inhibitory input.The unit was then excluded from the dataset.Figure 5a shows all rate-level functions recorded from one gerbil, with both flat-saturating and sloping-saturating shapes.3. The responses to tones at 20 and 30 dB above threshold were examined for non-primary-like shapes.When the response was clearly non-primary like, the unit was excluded from the dataset.When available, data derived from BF-, RLF-, and PH-recordings were combined for this purpose.Figure 5b, constructed using the function makePSTH.m,shows a response shape that is typically encountered for auditory nerve fibres.
consistency with data from other labs.To further verify the reliability of the dataset, several outcomes of the analyses were plotted, such as best frequencies, thresholds, spontaneous rates, and phase locking metrics.Three age groups were defined: 1) young-adult gerbils, <12 months of age, 2) middle-aged gerbils, 12-36 months, and 3) old gerbils >36 months.These scatter plots and distributions were compared to auditory nerve data published for young-adult gerbils from other labs.Figure 6a shows the distribution of best frequencies and thresholds across the age groups.Fibres of young-adult gerbils exhibited two regions of best sensitivity (lowest thresholds), separated by a frequency region (around 3-4 kHz) with slightly less sensitive and fewer fibres.This is typical for the auditory nerve of the gerbil: it was observed and discussed in previous studies from four different labs [33][34][35][36] .Distributions of best frequencies and thresholds derived from middle-aged and old gerbils have not previously been published by other labs.The area with higher thresholds and fewer fibres separates the gerbil cochlea into a low-and a high-frequency region.For the young-adult gerbils in the current dataset, highest thresholds were at 3.5 kHz and fewest fibres were in the bin bordered by 2.5 and 3.0 kHz (Fig. 6b).This is consistent with the dataset of Huet, et al. 36 and close to the border frequency of 4 kHz suggested by Ohlemiller and Echteler 34 and by Müller 35 .
Figure 6c shows spontaneous rate plotted as a function of best frequency.Among the high-frequency fibres (>3.5 kHz), there was a cluster of fibres with low spontaneous rates.This is typical for the gerbil; it has been observed in previous studies from different labs 33,34,36 .The low-frequency fibres show a bimodal distribution of spontaneous rate, with a mode around 5 spikes/s and one around 60 spikes/s.This is also consistent with previously published distributions 33,34,36 .
Figure 6d shows the maximum vector strength in response to a best-frequency tone plotted as a function of the fibre's best frequency.The maximum best frequency at which significant phase locking was recorded in young-adult gerbils was 4.6 kHz.This is consistent with previously published data from Versteegh, et al. 37 , who reported an upper phase-locking frequency for gerbil auditory nerve fibres of 4 to 5 kHz.Furthermore, the highest vector strength values were found among the fibres with low best frequencies (<1.5 kHz) and low spontaneous rates (<18 spikes/s, shown in open markers Fig. 6d).This is also consistent with previous work in gerbils and with vector strength recorded in auditory nerve fibres of cats 37,38 .Phase locking did not change in the aged animals 17 .
Sampling across best frequency for the different recording types.Distribution of the recorded auditory nerve fibres across the frequency axis was plotted to illustrate the sampling of units within age groups for the different recording types (Fig. 7).BF distributions of two large datasets, CF and CLICK (Fig. 7a,b, respectively), are representative of the distribution of the full dataset (Fig. 6b).Furthermore, these figures confirm that the characteristic frequency derived from the CF recordings correlated strongly with the best frequency derived from the BF recordings, with no systematic deviation towards higher or lower frequencies (Fig. 7a).Click latency had a strong negative correlation to best frequency, illustrating the travelling wave delay along the cochlea (Fig. 7b).Sampling distribution across best frequency of the remaining datasets are shown in the lower panels (Fig. 7c-f).No SR, SPS, and CVC recordings were obtained in middle-aged gerbils, while SPS responses were only recorded in young-adult gerbils.Sampling across the best frequency range of RLF and PH recordings are shown in Fig. 6a and Fig. 6d, respectively.Usage Notes code to help search through the dataset.Three main scripts are provided to help the user search through the full dataset, as well as within a struct of one animal.
1. check_dataset_metadata.m loops through the 'all_exp' struct and focusses on the metadata of the experiments.It recreates the metadata sheet ('metadata.csv'),which can be used to select an animal of interest and investigate it further in check_dataset_animal.m.This script was used to generate Table 1 and Fig. 3. 2. check_dataset_units.m loops through the 'all_exp' struct and focusses on the analysed outcomes of the single units.It can be used to plot any of these outcomes against each other, typically with the fibre's best frequency on the horizontal axis.This script was used to generate all panels of Figs. 6 and 7. 3. check_dataset_animal.m loops through the units of an 'exp' struct of one animal.It generates a scatterplot of the threshold as a function of the best frequency of all the fibres recorded in that animal and a plot with all rate-level functions of that animal in one graph.This script was used to generate Fig. 5a (from 'G220922.mat').It also calls the function check_AN.m,which plots the unfiltered first trial, the inter-spike interval histogram, the first 300 spike waveforms, and the median spike waveform +/− 95% confidence interval of a given recording.The output of this function is shown in Fig. 4 (BF recording of 'G220922.mat', unit '3p_607' [i = 12]).The main script also calls the function makePSTH.m,which is used to generate a peri-stimulus time histogram (PSTH) of all responses to tone bursts at or close to a given stimulus level above the fibre's threshold.makePSTH.mwas used to generate Fig. 5b, based on the spike times of ani-malID 'G220908' from unit '3p_181' (i = 23) at 20 dB above threshold (TestLevel = 20).
acknowledgements
First, I would like to share my sincere gratitude to Friederike Steenken and Lichun Zhang, who collected part of these raw data and allowed me to make it available through this project, as well as to Christine Köppl, who initiated the scientific work and encouraged me to pursue this project on data that were generated under her mentorship.I thank Go Ashida and Sharad Shanbag for their work on programming the acquisition software 'Tytology2' and Rainer Beutelmann for programming the software to record the auditory brainstem responses.I also thank Go Ashida, Roberta Aralla, and Helge Ahrens for contributing to the analysis scripts, Paul Hinze for his work on organizing the data structs, and Fiona Teske for reviewing the published code.My gratitude extends to Georg Klump, Rainer Beutelmann, and Jonas Klug for their help in selecting and programming some of the complex acoustic stimuli and to Sonja Standfest, Nadine Thiele, and Jesse Röseler for technical assistance during the experiments.Finally, I thank Piotr Majdak for his help in making these data and software available to the users of the Auditory Modeling Toolbox.English language services were provided by stels-ol (contact address at desmosa@gmx.de).This work was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy -EXC 2177/1 -Project ID 390895286.Part of the data collection was funded by the DFG priority program "PP 1608".
The recordings for characterizing the auditory nerve fibre (recording types: BF [Best Frequency], CF [Characteristic Frequency], PH [PHase locking], CLICK, RLF [Rate-Level Function], and SR [Spontaneous Rate]) have been analysed to give the user a general idea of the auditory nerve fibre type and to search through the data more effectively.The outcomes of these analyses are stored along with the raw data in the data structs ('exp.data.[recordingtype].analysis')and are presented below under the section Technical Validation.
Fig. 2
Fig.2Hierarchical organization of a data structure file.Overview of the location of each data-and metadata type within the data structure.
Fig. 3
Fig. 3 Hearing sensitivity of the gerbils.The auditory brainstem response (ABR) thresholds of females (green markers) and males (purple markers) in response to broadband chirps (0.3-19 kHz) as a function of their age in months.
Fig. 4
Fig. 4 Inter-spike intervals and spike waveforms an example recording.(a) The unfiltered, first data trace of the recording.Spike times are indicated with red asterisks.(b) The inter-spike interval (ISI) histogram, including descriptive statistics and the total number of short ISIs (<1 ms and <0.6 ms).(c) The waveforms of the first 300 spikes that were recorded.Time = 0 ms indicates the peak of the spike, i.e. spike time.Waveforms are plotted between −1.3 ms and +1.3 ms from the spike peak.(d) The median spike waveform (red line) with 95% confidence intervals (CI; shaded red).The number in the plot indicates the total number of spikes in the recording and used for this plot.This figure is the output of the function checkAN.m, of the BF recording of animalID 'G220922' and unit '3p_607' .
Fig. 5
Fig. 5 Criteria for specificity to the auditory nerve bundle.(a) Rate-level functions recorded from one gerbil ('G220922').Average firing rate is plotted as a function of stimulus level.Different colours indicate rate-level functions from different units.(b) An example of a peri-stimulus time histogram (PSTH) derived from tone burst responses at best frequency (animalID = 'G220908' , unit = '3p_181').Responses in the BF recording (level = 30 dB SPL, 5 repetitions) and the RLF recording (level = 40 dB SPL, 10 repetitions) are combined.This fibre had a best frequency of 7.6 kHz and a threshold at 19 dB SPL.
Fig. 6
Fig.6 Physiological properties of the dataset.(a) Threshold plotted as a function of the fibre's best frequency.Data from young-adult, middle-aged, and old gerbils are plotted in blue, yellow, and red markers, respectively.Solid lines represent the moving average for each age group.(b) Distribution of best frequencies of all fibres recorded in young-adult gerbils.(c) Spontaneous rate as a function of best frequency.The blue dashed line indicates the border between fibres with a low-and fibres with a high best frequency at 3.5 kHz.(d) Maximum vector strength in response to a tone at the fibre's best frequency plotted as a function of best frequency for young-adult (blue circles), middle-aged (yellow triangles), and old gerbils (red squares).High-spontaneous rate (high-SR) and low-spontaneous rate (low-SR) fibres are plotted separately with filled and open markers, respectively, with 18 spikes/s as a cut-off rate33 .Only vector strength values that were significant (p < 0.001) are plotted (see Methods).
Fig. 7
Fig. 7 Sampling across the frequency axis.(a) For fibres for which a CF recording was obtained (n = 119), the characteristic frequency derived from the response field is plotted as a function of the fibre's best frequency.A histogram of the characteristic frequencies is shown on the right.The black dashed line indicates y = x.(b) For fibres for which a CLICK recording was obtained (n = 261), the click latency, as determined by the 2-bin method, is plotted as a function of the fibre's best frequency.High-and low-SR fibres are plotted separately in solid and open markers.A histogram of the best frequencies is shown below.(c) For fibres for which a SR recording was obtained (n = 203), the spontaneous rate (SR) derived from this recording is plotted as a function of the fibre's best frequency (BF).The legend of panel (a) also applies here.(d-f) The sampling across BF and threshold for the complex stimuli recordings NOISE (n = 143, panel d), SPS (n = 22, panel e), and CVC (n = 135, panel f) in young-adult, middle-aged, and old gerbils.The legend of panel (a) also applies here.
Table 1 .
Characteristics of the animals and auditory nerve fibres in the dataset.ABR: auditory brainstem response; SD: standard deviation; SPL: sound pressure level. | 9,517.2 | 2024-04-22T00:00:00.000 | [
"Biology",
"Medicine"
] |
Extremal unimodular lattices in dimension 36
In this paper, new extremal odd unimodular lattices in dimension $36$ are constructed. Some new odd unimodular lattices in dimension $36$ with long shadows are also constructed.
Introduction
A (Euclidean) lattice L ⊂ R n in dimension n is unimodular if L = L * , where the dual lattice L * of L is defined as {x ∈ R n | (x, y) ∈ Z for all y ∈ L} under the standard inner product (x, y). A unimodular lattice is called even if the norm (x, x) of every vector x is even. A unimodular lattice, which is not even, is called odd. An even unimodular lattice in dimension n exists if and only if n ≡ 0 (mod 8), while an odd unimodular lattice exists for every dimension. Two lattices L and L ′ are isomorphic, denoted L ∼ = L ′ , if there exists an orthogonal matrix A with L ′ = L · A, where L · A = {xA | x ∈ L}. The automorphism group Aut(L) of L is the group of all orthogonal matrices A with L = L · A.
Rains and Sloane [17] showed that the minimum norm min(L) of a unimodular lattice L in dimension n is bounded by min(L) ≤ 2⌊n/24⌋+ 2 unless n = 23 when min(L) ≤ 3. We say that a unimodular lattice meeting the upper bound is extremal.
The smallest dimension for which there is an odd unimodular lattice with minimum norm (at least) 4 is 32 (see [13]). There are exactly five odd unimodular lattices in dimension 32 having minimum norm 4, up to isomorphism [4]. For dimensions 33, 34 and 35, the minimum norm of an odd unimodular lattice is at most 3 (see [13]). The next dimension for which there is an odd unimodular lattice with minimum norm (at least) 4 is 36. Four extremal odd unimodular lattices in dimension 36 are known, namely, Sp4(4)D8.4 in [13], G 36 in [6,Table 2], N 36 in [7, Section 3] and A 4 (C 36 ) in [8,Section 3]. Recently, one more lattice has been found, namely, A 6 (C 36,6 (D 18 )) in [9, Table II]. This situation motivates us to improve the number of known non-isomorphic extremal odd unimodular lattices in dimension 36. The main aim of this paper is to prove the following: Proposition 1. There are at least 26 non-isomorphic extremal odd unimodular lattices in dimension 36.
The above proposition is established by constructing new extremal odd unimodular lattices in dimension 36 from self-dual Z k -codes, where Z k is the ring of integers modulo k, by using two approaches. One approach is to consider self-dual Z 4 -codes. Let B be a binary doubly even code of length 36 satisfying the following conditions: the minimum weight of B is at least 16, the minimum weight of its dual code B ⊥ is at least 4.
Then a self-dual Z 4 -code with residue code B gives an extremal odd unimodular lattice in dimension 36 by Construction A. We show that a binary doubly even [36,7] code satisfying the conditions (1) and (2) has weight enumerator 1 + 63y 16 + 63y 20 + y 36 (Lemma 2). It was shown in [15] that there are four codes having the weight enumerator, up to equivalence. We construct ten new extremal odd unimodular lattices in dimension 36 from self-dual Z 4 -codes whose residue codes are doubly even [36,7] codes satisfying the conditions (1) and (2) (Lemma 4). New odd unimodular lattices in dimension 36 with minimum norm 3 having shadows of minimum norm 5 are constructed from some of the new lattices (Proposition 7). These are often called unimodular lattices with long shadows (see [14]). The other approach is to consider self-dual Z k -codes (k = 5, 6,7,9,19), which have generator matrices of a special form given in (7). Eleven more new extremal odd unimodular lattices in dimension 36 are constructed by Construction A (Lemma 8). Finally, we give a certain short observation on ternary self-dual codes related to extremal odd unimodular lattices in dimension 36. All computer calculations in this paper were done by Magma [1].
Unimodular lattices
Let L be an odd unimodular lattice and let L 0 denote the even sublattice, that is, the sublattice of vectors of even norms. Then L 0 is a sublattice of L of index 2 [4]. The shadow S(L) of L is defined to be L * 0 \ L. There are cosets Shadows for odd unimodular lattices appeared in [4] and also in [5, p. 440], in order to provide restrictions on the theta series of odd unimodular lattices. Two lattices L and L ′ are neighbors if both lattices contain a sublattice of index 2 in common. If L is an odd unimodular lattice in dimension divisible by 4, then there are two unimodular lattices containing L 0 , which are rather than L, namely, L 0 ∪ L 1 and L 0 ∪ L 3 . Throughout this paper, we denote the two unimodular neighbors by The theta series θ L (q) of L is the formal power series θ L (q) = x∈L q (x,x) . The kissing number of L is the second nonzero coefficient of the theta series of L, that is, the number of vectors of minimum norm in L. Conway and Sloane [4] gave some characterization of theta series of odd unimodular lattices and their shadows. Using [4, (2), (3)], it is easy to determine the possible theta series θ L 36 (q) and θ S(L 36 ) (q) of an extremal odd unimodular lattice L 36 in dimension 36 and its shadow S(L 36 ): respectively, where α is a nonnegative integer. It follows from the coefficients of q and q 3 in θ S(L 36 ) (q) that 0 ≤ α ≤ 16.
Self-dual Z k -codes and Construction A
Let Z k be the ring of integers modulo k, where k is a positive integer greater than 1. A Z k -code C of length n is a Z k -submodule of Z n k . Two Z k -codes are equivalent if one can be obtained from the other by permuting the coordinates and (if necessary) changing the signs of certain coordinates. A code C is selfdual if C = C ⊥ , where the dual code C ⊥ of C is defined as {x ∈ Z n k | x · y = 0 for all y ∈ C}, under the standard inner product x · y.
If C is a self-dual Z k -code of length n, then the following lattice is a unimodular lattice in dimension n. This construction of lattices is called Construction A.
From self-dual Z 4 -codes
From now on, we omit the term odd for odd unimodular lattices in dimension 36, since all unimodular lattices in dimension 36 are odd. In this section, we construct ten new non-isomorphic extremal unimodular lattices in dimension 36 from self-dual Z 4 -codes by Construction A. Five new non-isomorphic unimodular lattices in dimension 36 with minimum norm 3 having shadows of minimum norm 5 are also constructed.
Extremal unimodular lattices
Every Z 4 -code C of length n has two binary codes C (1) and C (2) associated with C: The binary codes C (1) and C (2) are called the residue and torsion codes of C, respectively. If C is a self-dual Z 4 -code, then C (1) is a binary doubly even code with C (2) = C (1) ⊥ [3]. Conversely, starting from a given binary doubly even code B, a method for construction of all self-dual Z 4 -codes C with C (1) = B was given in [16,Section 3]. (2) ) and A 4 (C) has minimum norm min{4, d E (C)/4} (see e.g. [7]). Hence, if there is a binary doubly even code B of length 36 satisfying the conditions (1) and (2), then an extremal unimodular lattice in dimension 36 is constructed as (1) and (2), then k = 7 or 8 (see [2]).
Remark 5. In this way, we have found two more extremal unimodular lattices A 4 (C), where C are self-dual Z 4 -codes with C (1) = B 36,1 . However, we have verified by Magma that the two lattices are isomorphic to N 36 in [7] and A 4 (C 36 ) in [8].
For i = 1, 2, . . . , 10, the code C 36,i is equivalent to some code C 36,i with generator matrix of the form: where A, B 1 , B 2 , D are (1, 0)-matrices, I n denotes the identity matrix of order n and O denotes the 22 × 7 zero matrix. We only list in Figure 1 the 7 × 29 . A generator matrix of A 4 (C 36,i ) is obtained from that of C 36,i .
In this section, we construct more extremal unimodular lattices in dimension 36 from self-dual Z k -codes (k ≥ 5).
Let A T denote the transpose of a matrix A. An n × n matrix is negacir- culant if it has the following form: Let D 36,i (i = 1, 2, . . . , 9) and E 36,i (i = 1, 2) be Z k -codes of length 36 with generator matrices of the following form: where k are listed in Table 3, A and B are 9 × 9 negacirculant matrices with first rows r A and r B listed in Table 3. It is easy to see that these codes are self-dual since AA T + BB T = −I 9 . Thus, A k (D 36,i ) (i = 1, 2, . . . , 9) and A k (E 36,i ) (i = 1, 2) are unimodular lattices, for k given in Table 3. In addition, we have verified by Magma that these lattices are extremal.
Lemma 8 establishes Proposition 1. Remark 9. Similar to Remark 6, it is known [7] that the extremal neighbor is isomorphic to L for the case where L is N 36 in [7], and we have verified by Magma that the extremal neighbor is isomorphic to L for the case where L is A 4 (C 36 ) in [8].
Related ternary self-dual codes
In this section, we give a certain short observation on ternary self-dual codes related to some extremal odd unimodular lattices in dimension 36.
Unimodular lattices from ternary self-dual codes
Let T 36 be a ternary self-dual code of length 36. The two unimodular neighbors Ne 1 (A 3 (T 36 )) and Ne 2 (A 3 (T 36 )) given in (3) are described in [10] as L S (T 36 ) and L T (T 36 ). In this section, we use the notation L S (T 36 ) and L T (T 36 ), instead of Ne 1 (A 3 (T 36 )) and Ne 2 (A 3 (T 36 )), since the explicit constructions and some properties of L S (T 36 ) and L T (T 36 ) are given in [10]. By Theorem 6 in [10] (see also Theorem 3.1 in [6]), L T (T 36 ) is extremal when T 36 satisfies the following condition (a), and both L S (T 36 ) and L T (T 36 ) are extremal when T 36 satisfies the following condition (b): (a) extremal (minimum weight 12) and admissible (the number of 1's in the components of every codeword of weight 36 is even), (b) minimum weight 9 and maximum weight 33.
For each of (a) and (b), no ternary self-dual code satisfying the condition is currently known.
By Theorem 6 in [10] (see also Theorem 3.1 in [6]), L S (T 36 ) and L T (T 36 ) are extremal. Hence, min(A 3 (T 36 )) = 3 and min(S(A 3 (T 36 ))) = 5. Note that a unimodular lattice L contains a 3-frame if and only if L ∼ = A 3 (C) for some ternary self-dual code C. Let L 36 be any of the five lattices given in Table 2. Let L 36 are adjacent if (x, y) = 0. It follows that the 3-frames in L 36 are precisely the 36-cliques in the graph Γ(L 36 ). We have verified by Magma that Γ(L 36 ) are regular graphs with valency 368, and the maximum sizes of cliques in Γ(L 36 ) are 12. Hence, none of these lattices is constructed from some ternary self-dual code by Construction A. | 2,968 | 2014-11-30T00:00:00.000 | [
"Mathematics"
] |
Synchronization of Delayed Neural Networks With Actuator Failure Based on Stochastic Sampled-Data Controller
This paper addresses the master-slave synchronization problems of delayed neural networks with actuator failure based on stochastic sampled-data controller. To simplify the analysis process, only two different sampling periods whose occurrence probabilities follow the Bernoulli distribution are considered. In addition, it can be further extended to cases with multiple random sampling periods. The sampling system with random parameters is transformed into a continuous system through applying the input delay method. The novelty of this article is to consider the problem of actuator failure which may exist in the real world. By constructing a new type of Lyapunov-Krasovskii function (LKF), a sampling controller for neural networks synchronization system is designed. Using Jensens’s inequality, Wirtinger’s inequality and convex optimization methods, the stability criterion of neural networks with low conservativeness is acquired. Meanwhile, the controller gain matrix can be obtained through solving the linear matrix inequalities (LMIs). One numerical example provides feasibility and advantages of theoretical results.
I. INTRODUCTION
Nowadays, there has been a large number of researches on neural networks because they are widely used in various fields such as signal processing, image processing, pattern recognition, optimization, and associative memory design and so on [1], [2]. For the control community, the attractiveness of neural networks is that they can fully approximate complex nonlinear mapping relationships, and they can learn and adapt to the dynamic characteristics of uncertain systems. In this form, the introduction of neural networks into control systems is an inevitable trend in the development of control disciplines. Based on the method of linear matrix inequalities (LMIs), many studies have devoted to the stability analysis of master-slave synchronization problems of neural networks. There are lots of achievements gained in recent years [3]- [7]. However, in reality, the time delays The associate editor coordinating the review of this manuscript and approving it for publication was Liang Ding .
inevitably occur in neural networks, which may result instability, oscillation, poor performance of the system [8]. Using the input delay method, the study of synchronous control on neural networks with delays has become a focus topic [9]. Hence, the stability analysis of master-slave synchronization problems of delayed neural networks has received an increasing attention [10], [11].
Along with the researches on neural networks, the synchronous problem of neural networks system has gradually turned into an indispensable research area. The synchronous control of the neural networks is under the stability for the neural networks. A lot of significant methods for master-slave synchronization have been mentioned in recent years, such as pinning control [12], impulsive control [13], [14], event-triggered control [15] and sampled-data control [16], [17].
Benefit from advancement of computer technology, the sampled-data control systems have been attracted increasing interest in the past decades [18]. For synchronization, VOLUME 8, 2020 This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/ the sampled-data only needs the information about the state of the system at the sampling instants [19]. The characteristic of this method is to reduce the transmission of information and improve the efficiency of control. There is an important issue to choose the sampling period when using sampling data control to achieve neural networks synchronization. A longer sampling interval will bring lower communication channel occupation, less signal transmission, and fewer drive controllers signal [20], [21]. Hence, how to obtain a larger upper bound of the sampling periods is the focus of the method [22]. In the past, many studies have been devoted to sampling all signals at a constant rate on a single-rate digital control system. However, continuous sampling is not applicable because of the human factors and uncertain interference of environmental [23]. Thus, it is especially significant to consider time-varying sampling for synchronous control [24]. It should be noted that in spite of sampled-data control technologies have been developed well in control theory, the synchronization problem of particular sampled-data on networks has so far attracted very little attention because of the random interference and mathematical complexity of the constrained system [25]. It is worth mentioning that, in [26], a new method about stochastic switched sampled-data control with time-varying has been proposed. In [27], the probabilistic sampling H ∞ problem of sampled-data systems with parameter uncertainties based on input delay method has been researched. To simplify the analysis process, only two sampling intervals have been considered. Stimulated by [27], Cao et al. have designed a random variable sampling controller for multiple intervals of delayed neural networks in [2]. In another research field, an inevitable disadvantage that we should not ignore is the sensor or actuator failures in various situations [28]. It is noted that, in [29]- [31], the actuator is assumed that there is not failed. But in actual systems, especially in networks conditions, actuator failures can inevitably occur. This will severely degrade the performance of systems. Even worse, the systems may become unstable. Therefore, in order to determine the reliability of the sampled data, it is meaningful to study the stability of the sampled data on the neural networks in the case of actuator failure. Inspired by the above ideals, the main research contents and results of the paper are generalised as follows: (1) In a set of LMIs, the new stability conditions can be expressed by constructing a suitable Lyapunov functional and using Jensen's inequality, Wirtinger-type inequality and reciprocally convex method; (2) It is assumed that the sampling time varies with time and can be arbitrarily switched between two different values. Moreover, the method can also be applied to multiple random sampling times; (3) Unlike previous studies, the article proposes a reliable control scheme for the delayed neural network withactuator failure via stochastic sampled-data control. Sufficient conditions are presented to guarantee the stability and the desired controller can be obtained.
Assumption 1: The neuron activation function g i (·) is continuous and bounded, if there exist constants where s 1 , s 2 ∈ R n , and s 1 = s 2 .
When the system (1) is the master system in the paper, and a slave system is ascertained as: where the structure is same as (1). u F (t) ∈ R n represents the measurable control with failure described by where F(t) is the time-varying failure level of actuators. Combining (1),(3) and (4) with e(t) = y(t) − x(t), the synchronization error system (SES) can be formulated as: ). Now, the function f i (s) meets the conditions as follow: where s ∈ R and s = 0. In this article, the sampled-data feedback controller u(t) used in synchronizing neural networks is generated by the zero-order holder (ZOH), which is mathematically modeled and includes using conventional digital-to-analog converter to complete practical signal reconstruction, and converting discrete signals into continuous signals. This is to say, it is keep the sampling value unchanged during the sampling interval and continue until the next sampling interval, with a battery of holding time 0 = t 0 ≤ t 1 ≤ · · · ≤ t k ≤ · · · ≤ lim k→+∞ t k = +∞, i.e.. Only the measured discrete 200924 VOLUME 8, 2020 sampled data is available for control purposes. Summarizing the above, the following form is the representation for the sampled-data controller: where K denotes the gain matrix of sampled-data controller to be resolved. t k+1 −t k h is defined as the sampling period, with any k ≥ 0, and h > 0 denotes the upper limit of sampling periods.
In addition, instead of the actuator outage, define the failure level where F α ,F α ∈ (0, 1] are the lower and upper bound of F α (t). Specially, F α =F α = 1 represents that no failures happen on u(t). Here, F(t) is divided into following forms: So Eq.(5) can be expressed aṡ Given the time-varying delay τ (t) = t − t k which is piecewisely linear, and τ (t) ≤ t k+1 − t k . There, we can transform (9) into: . (10) Assuming that there are two sampling intervals, the values are c 1 , c 2 and 0 < c 1 < c 2 whose happening probabilities are known: When the sampling interval is c 1 , τ (t) ∈ [0, c 1 ) with its probability: When the sampling interval is c 2 , the probabilities of τ (t) ∈ [0, c 1 ) and τ (t) ∈ [c 1 , c 2 ) are c 1 c 2 and c 2 −c 1 c 2 , respectively. The corresponding probabilities are as follows: Therefore, the probability of τ (t) can be calculated as: The stochastic variable β(t) that satisfies a Bernoulli distribution is defined as: Then we have: We can easily get Comprehend the above conditions, SES (10) is converted into:ė Next are the lemmas that will be applied in the proof. Lemma 1 [19]: Let x(t) be a differentiable function: ]. Lemma 2 [33]: For given positive integers n, m, a scalar α ∈ (0, 1), a n × n-matrix G > 0 and two matrices M 1 and M 2 in R n×m , for all vector ζ ∈ R m , the following function (α, G) can be got by: if there exist matrix X ∈ R n×n and G X * G > 0, the following inequality holds: Lemma 3 [32]: For given T = 1 , 2 and 3 with appropriate dimensions, if T (t) (t) ≤ I , then It is equivalent that there exists ε > 0 satisfying
Remark 1: Theorem 1 guarantees that the system (1) and (3) are stochastically synchronous based on sampled-data controller with actuator fault. However, in the most of literatures, such as in [25], [27], [2], the problem of actuator failures for neural networks is not taken into account. But in the practical application, the impact of actuator failures and parameters uncertainty can not be ignored. Therefore, we consider the sensor or actuator failures for delayed neural networks.
Remark 2: In [2], an improved FMB integral inequality was introduced. The upper bound provided by the improved FMB is closer than the upper bound taken under Jensen's inequality. Thence, the tighter bounding inequality in Lemma 1 is used to deal with the integral item − t t−dẋ T (s)Z 1ẋ (s)ds, which may bring less conservative result.
Remark 3: The multiple sampling problem can be solved by introducing a random variable β(t) that satisfies the Bernoulli distribution. In order to deal with this problem, the sampling system is transformed into a continuous system with random parameters. In spite of Bernoulli distributions was used to random packet losses, uncertain observation. However, it seems that few attempts have been made to use it to solve problems associated with multiple sampling.
According to the conditions given in Theorem 1, the controller gain matrix K will be derived in the following theorem.
12
, I , I , I , I ) and its transpose, respectively. We can get (38). The corresponding controller gain matrix K can be got by (39). The proof is accomplished. Specifically, it is worth noting that the stability analysis with controller failures at two sampling intervals has been considered in the previous part of this paper. If we extend the sampling interval to three or more, we can get more general analysis results of the stability of random sampling control in systems (10). In general, we can extend the case that there are n sample intervals c 1 , c 2 , . . . , c n with 0 < c 1 < c 2 · · · < c n . The probability of the occurrences is Similar to the subsection of the main method, the probability of τ (t) is calculated as Note that n i=1 β i = 1. Then the indicator function is denoted as: We can figure out In this case, the error system (10) is expressed as follow: where c i−1 < τ i (t) < c i . Then, similar to Theorem 1, for multiple sampling situations, we can get corresponding results. Specifically, when F α =F α = 1, which means actuator works properly, in this case, the SES (20) is transformed into the following form: Corollary 1: For given scalars β, µ, d, 1 , 2 , c 1 , c 2 , the error system (41) is stochastically stable, if there exist positive matrices P, Z 1 , Z 2 , S 1 , S 2 , R 1 , R 2 , Q 1 , Q 2 and positive diagnose matrices V 1 , V 2 , any appropriate dimensional matrices N 1 , N 2 , X 1 , X 2 , G, L, such that the following LMIs holds: The notations are the same as in Theorem 1. Then, the master and slave neural networks realize synchronization under stochastic sampling without actuator failure. The gain matrix K can be obtained by K = G −1 L.
IV. NUMERICAL EXAMPLES
In this section, a persuasive example is presented to verify the feasibility of the method.
Example 1: The systems (1) and (3) are considered with parameters as follows: The neuron activation functions are f i (x i (t)) = e x i −e −x i e x i +e −x i (i = 1, 2). A straightforward calculation yields W 1 = 0.1I and W 2 = 0.3I . Here, we assume that there are no failures happen to u α (t), that is F α =F α = 1. The time-varying delay is d(t) = e t 1+e t , and it can be obtained that d = 1, µ = 0.25. Next, when choosing 1 = 1 and 2 = 0.1, we consider two cases and the results are given in Table 1 in Fig.1 and 2, respectively. If the condition of Theorem 1 is satisfied, we can verify the stability of the error synchronization system. Simultaneously, we readily receive the controller gain matrix K . By handling the LMIs (42)-(44), the value for the controller parameters K could be gained correspondingly as follow: Using the obtained gain K, the correspondence of controller e(t) is displayed in Fig.3, and the error state u(t) is displayed in Fig.4. Then according to Theorem 1, it can be concluded that the synchronization between drive system (1) and response system (3) is achieved. It is shown that the trend of the state variable eventually goes to 0 in Fig.3, which also verifies the effectiveness and the feasibility of our method. Fig.5 shows the stochastic parameter h.
V. CONCLUSION
In this paper, a new way of stochastic sampled-data control for delayed neural networks with actuator failure via stochastic sampling has been researched. The sampling data controller with only two sampling intervals are considered and it can be extended to n sampling intervals. A new LKF has been constructed to hold more information about the characteristics of the actual sampling mode. The conditions established in Theorem1 reduce conservatism and guarantee the synchronization of master-slave systems. Moreover, we can synthesize the sampled-data controller gain matrix trough solving the LMIs, under the allowable maximum sampling period. The effectiveness of our approach has been verified via a illustrative examples given. In our future work, we will consider the effect of parameter variation for the delayed neural networks on the behavior of the stochastic sampled-data controller. Also, we will explore the robustness of the proposed approach. was with the University of Duisburg-Essen, Germany, funded by the Alexander von Humboldt Foundation. He is currently a Full Professor with Yanshan University, China. He is the author or coauthor of more than 110 articles in mathematical, technical journals, and conferences. He was involved in more than ten projects supported by the National Natural Science Foundation of China, the National Education Committee Foundation of China, and other important foundations. His research interests include nonlinear control systems, control systems design over networks, teleoperation systems, and intelligent control. VOLUME 8, 2020 | 3,740.8 | 2020-10-26T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
Real time quantum gravity dynamics from classical statistical Yang-Mills simulations
We perform microcanonical classical statistical lattice simulations of SU(N) Yang-Mills theory with eight scalars on a circle. Measuring the eigenvalue distribution of the spatial Wilson loop we find two distinct phases depending on the total energy and circle radius, which we tentatively interpret as corresponding to black hole and black string phases in a dual gravity picture. We proceed to study quenches by first preparing the system in one phase, rapidly changing the total energy, and monitoring the real-time system response. We observe that the system relaxes to the equilibrium phase corresponding to the new energy, in the process exhibiting characteristic damped oscillations. We interpret this as the topology change from black hole to black string configurations, with damped oscillations corresponding to quasi-normal mode ringing of the black hole/black string final state. This would suggest that α′ corrections alone can resolve the singularity associated with the topology change. We extract the real and imaginary part of the lowest-lying presumptive quasinormal mode as a function of energy and N.
Introduction
General relativity breaks down when curvature singularities appear. How these singularities are resolved in a consistent extension of general relativity is a very important issue.
The topology change from black hole (BH) to black string (BS) [1] is an interesting physical process for which the singularity resolution is essential: although rich dynamics beyond linear perturbation theory is expected based on results in numerical relativity (see e.g. [2]), the change in topology requires physics beyond general relativity.
In string theory, gauge/gravity duality [3] provides us with a well-defined description of this BH/BS transition in terms of a gauge theory dual [4,5] (see also [6][7][8][9]). To review the main aspects of this description, let us consider the example of two-dimensional maximally supersymmetric SU(N ) Yang-Mills (2d maximal SYM) compactified on spatial circle of circumference r x , which is conjectured to possess a dual string theory description. In fact, this gauge theory has two dual description, one being type IIB string theory on R 1,8 ×S 1 with N D1-branes wrapped on the circle, and the other one being type IIA string theory on a circle of circumference r x = (2π) 2 α rx with N D0-branes (known as T-dual). In terms of the 2d maximal SYM gauge theory description, the information about the individual D0-branes on the spatial circle is encoded in the phases of the eigenvalue distribution of the Wilson line winding on circle (referred to as Wilson loop in the following). Depending on the distribution of D0-branes along the circle, there can be various phases, such as a black hole phase (corresponding to a localized, or "gapped", distribution of Wilson loop phases), and a black string phase (corresponding to an "ungapped" distribution of Wilson loop phases). In the gravity dual of 2d maximal SYM gauge theory, one can further distinguish between a uniform black string phase (corresponding to a uniform distribution JHEP01(2019)201 Figure 1. The conjectured correspondence between the distribution of the phases of Wilson loop eigenvalues (top row) and the topology of the black hole/black string dual configurations (bottom row), with black hole, uniform black string and wavy string configurations corresponding to localized ("gapped"), uniform and nonuniform ("ungapped") phase distributions, respectively. Dotted lines indicate periodic boundary conditions on the circle. Figure adapted from ref. [11].
of Wilson loop phases), and a wavy string phase (corresponding to a nonuniform distribution of Wilson loop phases); see figure 1. Note that these phases have been explicitly been constructed in ref. [10].
Equilibrium properties of the 2d SYM gauge theory can be studied by using lattice Monte Carlo simulations, which are non-trivial to set up [12][13][14][15][16][17][18][19][20][21][22][23], but seem to be able to offer fully non-perturbative insights [24][25][26][27][28]. As a result of numerical studies in 2d maximal SYM [28][29][30] combined with numerical and analytical studies of the thermodynamic properties of D-branes [4,5,11,31,32], the conjectured equilibrium phase diagram sketched in figure 2 has started to emerge. Specifically, at low temperature T (large temporal radius β = T −1 ), two phases of localized and uniform eigenvalue phase distributions at small and large spatial radius r x , respectively, are separated by a first order phase transition line. Above a critical temperature a third phase corresponding to a non-uniform eigenvalue distribution arises, while the phase transitions soften to be of second and third order, respectively.
This tremendous progress non-withstanding, the question of topology change unfortunately cannot be answered using simulations of equilibrium quantities, because it requires information about real-time evolution.
Real-time information in quantum field theories are extremely difficult to obtain, because there is no known way to study the real-time dynamics of quantum systems at a reasonable computational cost. Absent any breakthrough development in the numerical treatment of real-time quantum systems, we take a more pedestrian approach in the present work by studying the real-time evolution in the classical statistical approximation, the non-perturbative lattice technology of which has been well developed in the context of relativistic heavy-ion collisions [33][34][35][36][37] (see also [38,39]). From a gauge theory perspective, the real-time dynamics of fully quantum 2d SYM is expected to be well approximated by its classical dynamics for modes which are highly occupied. For bosons in equilibrium, this is the case for the modes with energy ω obeying βω 1, or the low-energy modes. At small temporal circle radius β or high temperature, many bosonic modes can be well approximated by their classical dynamics. Additionally, at high temperature fermionic modes develop a large thermal mass, so they can be expected to decouple and thus not contribute to observables that are insensitive to the number of degrees of freedom of the theory. Thus at high temperature, one can expect the dynamics of 2d SYM to be at least qualitatively well approximated by the classical dynamics of purely bosonic Yang-Mills theory. This expectation has indeed be confirmed in previous numerical studies in other dimensions [40][41][42][43][44] and a comparison of Monte Carlo results between 2d SYM and 2d SU(N) pure gauge theory [11]. (Note that generalizations of the classical statistical framework to include quantum effects to some extent may be possible, cf. [45][46][47][48].)
JHEP01(2019)201
Not surprisingly, some aspects of the 2d SYM dynamics cannot be captured by the purely bosonic classical statistical approximation. For instance, by neglecting the fermions one is studying the phase diagram of the purely bosonic theory shown in figure 2, cf. refs. [5,11,49,50]. However, at high temperature, the 2d SYM and pure Yang-Mills phase diagrams become indistinguishable, such that the classical statistical simulations performed as part of this work could reasonably be expected to offer qualitative insights into the black string phase, and potentially also the deconfined black hole phase.
From string theory point of view, the high temperature (weak coupling) regime should describe a highly stringy system, where the α corrections are large but one can still tune the g s correction by dialing N . By performing direct numerical simulations of the high temperature gauge theory dynamics, one can study whether the α correction is sufficient
JHEP01(2019)201
for resolving the singularity associated with the topology change, and how the g s correction affects the result. A key asset of our classical statistical simulations is that we should be able to see how a perturbation of a black hole/black string rings down as the system (re-)approaches equilibrium. In the context of classical gravity, this ring-down process is encoded in the quasi-normal mode spectrum [51], which we attempt to measure numerically in purely bosonic Yang-Mills.
This paper is organized as follows: section 2 contains background on the simulation method, a particular scaling symmetry and the equilibrium phase diagram. Section 3 discusses rapid quenches of the system energy and corresponding signals of topology change, including quasinormal mode ringing. We summarize and conclude in section 4.
Equilibrium phase diagram
We study SU(N ) Yang-Mills theory with 8 scalars on a spatial circle, in Minkowski signature such that the dynamics is 1+1 dimensional. The theory will be set up using the tools from lattice gauge theory, by discretizing 9-dimensional classical Yang-Mills on a lattice where eight dimensions are toroidally compactified (see ref. [11] for more discussion about this point).
Simulation method
In order to solve the equation of motion while preserving gauge invariance it is useful to make use of standard lattice formulations of gauge fields. Our setup is based on our earlier work in ref. [11] on quantum Monte-Carlo simulations, which we briefly review here in order to keep this work self-contained. 1 We replace continuum 9-dimensional Euclidean space by an isotropic cubic lattice such that x i = ax i , i = 1, 2, . . . 9 withx taking on integer values and a being the (spatial) lattice spacing. With g denoting the strong coupling constant, the continuum gauge field variables A i (x) are replaced by link variables U i (x) = e aA i (x) = e −igaA a i (x)T a which are elements of the SU(N ) Lie group and live on links between lattice sitesx i and obeying U † . This allows to use the standard single plaquette definition for the Hamiltonian density and U is defined through [11,39]: (Note that denotes the sum over all spatial loops on the lattice starting from sitex i with only one orientation, e.g. ≡ 1≤i<j≤d .) The lattice equations of motion are calculated from the Hamiltonian equation and one finds
JHEP01(2019)201
when using the definition E i (x) = −ga 2 T a E a i (x) and discretizing time in units of t = a∆tt witht an integer. The update rule for the electric field is found by requiringḢ ≡ dH(t) dt = 0 [11,36,39]: where the gauge staple S ij is defined as in [39] as Note that for negative values of j, a gauge link is traversed in the opposite direction, e.g.
The Hamiltonian for the system is given by Similarly, the Gauss law constraint on the lattice is given by To prepare initial conditions which satisfy G(t) = 0, we simply take E i = 0 and start with link variables are Gaussian-random with predefined magnitude. Note that this is different than the initial conditions chosen for standard (quantum, Euclidean) lattice Monte Carlo simulations. Once initial conditions are specified, we determine the total system energy from measuring (2.5) and real time evolution on the lattice is then performed by using set of equations (2.3), (2.4) to time-step forward the fields U i , E i . Our evolution scheme is accurate up to O(a 2 ) corrections in time.
One of the main observables in this work will be the Wilson loop defined as a product over link matrices (2.7) We will study the absolute value of the normalized trace of the Wilson loop, It should be pointed out that for spatial directions with only one site and periodic boundary conditions, the corresponding gauge field becomes a scalar, e.g. A i → X i . In the following we consider the situation where eight spatial directions are compactified on a point, while the ninth direction is allowed to be large, so that A i → {X I , A x } with I = 1, 2, . . . 8.
Relation to matrix models and scaling symmetry
In continuum, the lattice-discretized theory described above corresponds to the classical Lagrangian where µ = {t, x} and F µν , D µ are the two-dimensional field strength and gauge covariant derivative, respectively. In the 't Hooft large-N limit, the 't Hooft coupling λ = g 2 N , the compactification radius r x and the energy per d.o.f E/N 2 are fixed to be of order N 0 . Since in two dimension λ is dimensionful, in the following we employ dimensionless units such as In temporal gauge A t = 0, the classical equations of motion in continuum become which obey the following scaling symmetry With this rescaling, we can set r x = 1. Therefore, the energy E and the number of colors N are the only relevant simulation parameters.
UV problem in classical statistical lattice simulations
Classical statistical simulations suffer from a well-known ultraviolet instability (this is the same as the Rayleigh-Jeans law for black body radiation which is cured by quantum mechanics). Given a finite lattice discretization scale a (the lattice spacing), classical dynamics will eventually start to populate modes close to the lattice UV cutoff scale a −1 at late times. The dynamics of these high momentum modes, however, should involve quantum effects which are absent in the classical statistical simulations. Therefore, once a simulation starts to become sensitive to the lattice UV scale, the resulting dynamics can no longer be trusted, and the simulation has to be stopped. Nevertheless, classical lattice statistical simulations with a fixed UV cutoff a −1 can successfully be used for system properties dominated by modes in the IR. In practice, we prepare initial conditions for the simulations which are well localized in the IR, and then run the simulations for sufficiently short times before modes start to pile-up at the UV lattice cutoff scale. To check if pile-up has occurred, we monitor the Fourier transform of the magnetic component of Hamiltonian, which has successfully been used as an indicator for this purpose in the past [35,53]. be seen that the system starts out dominated by IR physics, but eventually flows to the UV. As long as the system is dominated by IR physics, observables such as the expectation value of the Wilson loop (2.7) are unaffected by the classical UV instability, but become sensitive as soon as pile-up of modes in the UV occurs, as shown in the right panel of figure 3. For this reason, the results reported below have been obtained by only using simulation data for times t < t UV , where t UV denotes the time when UV pile-up has first occurred.
Results for equilibrium phase diagram in classical statistical simulations
As a reminder, classical statistical simulations are performed in the microcanonical ensemble (at fixed energy E) instead of the grand-canonical ensemble (fixed temperature T ) employed in quantum simulations of lattice field theory. Real-time information on observables may be obtained by averaging time-dependent results over classical ensembles. In order to connect to equilibrium properties of the system, one would want to measure observables after the system has become time-independent (thermalized) at time t ≥ t therm , but before the simulation becomes contaminated by UV artefacts t ≤ t UV . We find that both t therm and t UV depend on the normalized energy E/N 2 in simulations.
In practice, for fixed E/N 2 we consider the time-evolution of observables such as the ensemble-averaged Wilson loop expectation value shown in figure 3, and perform an additional time-average over t therm ≤ t ≤ t UV to increase statistics. Results from this procedure for the Wilson loop expectation value as a function of energy for various N are shown in figure 4. We find that results for different N ≥ 16 cluster around an apparently universal curve, indicating that |W | (E/N 2 ) is approximately independent of N . At large E/N 2 , where our classical-statistical simulations can reasonably be expected to be a good approximation of the full quantum results, we find that |W | → 0, in accordance with bosonic quantum theory simulations [11]. Conversely, bosonic quantum theory simulations indicate that |W | (E/N 2 ) = 1 for low temperature, whereas from figure 4 it can be seen that classical statistical simulations result in |W | (E/N 2 ) → 1 for E → 0, because all link variables are equal to the identity matrix in this limit. Furthermore, full quantum theory simulations exhibit confinement for low temperatures [11], whereas classical statistical simulations do not show confinement. Therefore, as expected, classical statistical simulations can not be used to study properties of, or the transition to, the confined center broken phase of the full bosonic quantum theory (cf. figure 2).
However, one may ask if classical statistical simulations can be used to probe the properties of the quantum theory in the deconfined phase, notably the region close to the center symmetry phase transition shown in figure 2. In order to study the phase structure more precisely, the distribution of the Wilson line phases ρ(θ) are shown in figure 5 as a function of energy.
At high energy, one finds that the phase distribution ρ(θ) in the classical statistical simulations is ungapped, see figure 5. This is consistent with the full bosonic quantum simulations [11], and as such is consistent with the conjectured gravity picture of a black string phase at high temperature that was eluded to in the introduction.
Conversely, at low energy E/N 2 → 0, we see clear evidence for a gapped distribution of Wilson loop phases in the classical statistical simulations that can be well-fit by a semicircle distribution semi-circle GWW model As indicated in figure 2, a phase transition from ungapped to gapped Wilson line phase is expected as the temperature is lowered at fixed circle length in the full bosonic quantum theory. (In SYM, the transition in the microcanonical ensemble has been found to be first order in ref. [10]). A priori, it is not obvious that such a transition should be accessible using classical statistical simulations. However, the fact that the behavior of the Wilson loop eigenvalues changes qualitatively between low and high energy suggests that classical statistical simulations are able to access both the ungapped and gapped deconfined phase separated by either a phase transition or rapid analytic crossover transition, respectively. Thus, while classical statistical simulations cannot be expected to be quantitatively accurate approximations in the gapped phase, our results strongly suggest that they can be The gapped Wilson loop phase distribution observed at low energy in the classical statistical simulation is consistent with the conjectured gravity dual picture of a localized black hole configuration. As the energy in the classical statistical simulations is increased, the fit of ρ(θ) in terms of the semi-circle distribution (2.13) leads to increasing fit parameter c (see figure 6), where c = c crit = π having the natural interpretation 2 of separating blackhole from black string configurations at a critical normalized energy of where the uncertainty is primarily coming from the residual N -dependence of c, cf. figure 6. Note that at for c crit = π, the expectation value for the Wilson loop becomes We find that for E/N 2 ≥ E/N 2 critical , the Wilson loop distribution is rather well fit by another one-parameter form given by which is reminiscent of the analytically known distribution for the non-uniform string [31,55] in the Gross-Witten-Wadia model [56,57]. Best-fit values for k are indicated in figure 5.
JHEP01(2019)201
Our current results are not precise enough to say anything about the nature of the transition from gapped to ungapped Wilson loop phase in classical statistical simulations, nor can we confirm or rule out the presence of a third phase at high temperature expected from gravity, namely uniform Wilson loop phase distributions (cf. figure 1).
Real-time response to quenches
The results from the previous section strongly suggests that classical statistical simulations may be used to probe the properties of the black-hole/black-string transition that are in qualitative, if not quantitative, agreement with full quantum theory simulations in equilibrium.
Unlike current quantum simulations, no obstacle prevents the application of classical statistical simulations to non-equilibrium problems, suggesting that our method can be used to probe the real-time dynamics of the topology change from black hole to black string phases. Assuming that the real-time evolution of the Wilson loop phase distribution ρ(θ, t) possesses a good large-N limit, then such topology changes may take place within a finite time even at large-N .
Black-hole/black-string topology change
In order to study topology changes in a controlled manner in classical statistical simulations, we employ the following protocol: 1. Generate a classical statistical gauge field configuration with initial energy E/N 2 < 750 (E/N 2 > 750) expected to be in the black hole (black string) phase 2. Evolve the gauge field configuration until t ≥ t therm so that early-time transients have disappeared 3. Rapidly quench the system energy to a fixed final value of E/N 2 > 750 (E/N 2 < 750) without changing the Wilson line phases distribution or violating the Gauss law constraint (2.6). Note that the new energy indicates an equilibrium configuration in the respective other phase. This quench can be achieved by multiplying the electric field by a constant q, 4. Measure the real-time response of observables as the system tries to attain the new equilibrium black string (black hole) configuration Repeating the above protocol for many configuration with fixed initial energy and averaging over this classical ensemble of configurations leads to real-time results for observables shown in the following. We have repeated the above procedure to study the real-time response of other quenches, verifying in particular that it is also possible to observe the inverse process of gapped phase to ungapped phase, which we associate with a topology change process from black-hole to black-string configurations.
Possible observation of quasinormal modes
The real-time evolution of Wilson loop phase distribution ensemble-averages show oscillations around the new equilibrium configuration after a rapid quench, cf. figure 7. These oscillations are particularly visible in the ensemble-averaged Wilson loop expectation value shown in figure 8, and are apparently displaying little sensitivity to the choice of N ≥ 16. Similar oscillations in classical statistical simulations are ubiquitous, with frequency and damping rates associated with the mass and width of quasi-particles that are in good agreement with perturbative quantum field theory, cf. refs. [35,58].
By contrast, in classical gravity, oscillations of excited geometries for instance of black holes are characterized in terms of their quasinormal mode ringdown behavior, cf. ref. [51]. If the conjectured relation between Wilson loop phase distribution and geometry holds, this . Real and imaginary part of lowest-lying presumptive quasinormal mode frequency ν as a function of final system energy for N = 32. Multiple entries correspond to quenching protocols with different initial, but the same final energy, e.g. E/N 2 = 500 → 700 and E/N 2 = 900 → 700, and have been slightly displaced on the plot to increase visibility.
naturally leads to the interpretation of linking quasiparticle oscillations in gauge theory with quasinormal mode oscillations of black holes in string theory.
We are able to obtain estimates for both the real and imaginary part of the lowest-lying mode frequency ν by fitting the location and height of the extrema of |W |(t) to a form |W |(t) ∝ Re e iνt . Results from this fitting procedure for various final energies E/N 2 are shown in figure 9. Also shown in figure 9 are results from the fitting procedure when changing the initial energy but leaving the final energy after the quenched unchanged. Our results seem to imply that the extracted results for ν show little sensitivity to initial system energies, instead only depending on the final E/N 2 . The behavior of ν seems to be smooth and continuous with energy, and as such is apparently insensitive to the phase change near (2.14). Furthermore, our results for ν are consistent with those from matrix model simulations which do not suffer from UV problems [59], to which our simulations reduce to in the zero volume limit.
The smooth analytic behavior of ν on E/N 2 in classical statistical simulations may be obtained from the classical scaling symmetry outlined in section 2.2. Under the scaling symmetry, frequencies are expected to scale as ν → αν, and the energy density scales as → α 4 . At fixed temperature, assuming the energy density in classical statistical simulations to be approximately independent of volume, we therefore expect ν ∝ E N 2 1/4 at fixed volume. Note that this scaling argument would apply to any time-dependent quantity, such as for example for the Lyapunov exponent λ L ∝ (E/N 2 ) 1/4 , cf. ref. [44].
Results shown in figure 9 indicate that ν increases with energy, qualitatively consistent with the finding that the quasinormal mode frequency calculated in type II supergravity on black p-brane background, increases with temperature regardless of the value of p [60]. Quantitatively, the quasinormal mode frequency in type II supergravity for p = 1 differs from the classical statistical results shown in figure 9, which is expected because supergravity results are applicable for small temperatures whereas classical statistical approximations are quantitatively accurate at high temperatures.
JHEP01(2019)201 4 Summary and conclusions
In the present work, we have performed classical statistical simulations of microcanonical ensembles in SU(N ) Yang-Mills theory with eight scalars on a circle. Depending on the energy, we have identified two distinct equilibrium phases of the Wilson loop eigenvalue distribution which qualitatively correspond to those expected for black holes and black strings in the conjectured dual gravity picture. We found that gapped Wilson loop equilibrium phase distributions occurred for energies E < E crit , while ungapped distributions occurred for E > E crit , with our estimate for E crit given in (2.14). Our present results were not precise enough to decide if there is an actual phase transition at E = E crit as opposed to an analytic cross-over in classical statistical simulations, nor can we confirm or rule out the presence of a second transition to a phase of uniform Wilson loop distributions at very high energies.
We were able to perform real-time measurements of Wilson loop distributions following a quench in system energy from one phase to another. We found characteristic oscillations in the phase distribution and the Wilson loop expectation value, which showed very little sensitivity on the number of colors for N ≥ 16 or the state the system was in before the quench. Interpreting these oscillations as the string-theory analogue of quasinormal mode ringdown of black holes in classical gravity, we were able to extract estimates for the real and imaginary part of the lowest-lying presumptive quasinormal mode as a function of energy. Our results were found to be in qualitative, but not quantitative, agreement with analytic calculations of quasi-normal modes in the supergravity approximation.
There are several natural extensions to this work. For instance, one might be able to study Lyapunov exponents in classical statistical simulations similar to refs. [43,44,61], including 1/N effects.
Another natural extension would be to consider simulations of Yang-Mills on a 2dimensional torus rather than a circle, where a much richer phase diagram is expected based on gravity calculations [62]. Within the present simulation environment based on ref. [11], such change is operationally trivial and just corresponds to extending the number of lattice sites along a second direction.
To conclude, based on our results obtained in this work we expect that classical statistical simulations of SU(N ) Yang-Mills theory could become a valuable tool to study real-time phenomena in quantum gravity that are otherwise hard to access.
JHEP01(2019)201
Since then we had several conversations regarding the real-time dynamics of quantum gravitational systems. The classical Yang-Mills simulation is one of the options Joe encouraged us to try. Perhaps we are still at a very primitive stage, but we hope that we can make a steady progress toward Joe's dream! Open Access. This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited. | 6,427.4 | 2019-01-01T00:00:00.000 | [
"Physics"
] |
Preclinical rationale for entinostat in embryonal rhabdomyosarcoma
Background Rhabdomyosarcoma (RMS) is the most common soft tissue sarcoma in the pediatric cancer population. Survival among metastatic RMS patients has remained dismal yet unimproved for years. We previously identified the class I-specific histone deacetylase inhibitor, entinostat (ENT), as a pharmacological agent that transcriptionally suppresses the PAX3:FOXO1 tumor-initiating fusion gene found in alveolar rhabdomyosarcoma (aRMS), and we further investigated the mechanism by which ENT suppresses PAX3:FOXO1 oncogene and demonstrated the preclinical efficacy of ENT in RMS orthotopic allograft and patient-derived xenograft (PDX) models. In this study, we investigated whether ENT also has antitumor activity in fusion-negative eRMS orthotopic allografts and PDX models either as a single agent or in combination with vincristine (VCR). Methods We tested the efficacy of ENT and VCR as single agents and in combination in orthotopic allograft and PDX mouse models of eRMS. We then performed CRISPR screening to identify which HDAC among the class I HDACs is responsible for tumor growth inhibition in eRMS. To analyze whether ENT treatment as a single agent or in combination with VCR induces myogenic differentiation, we performed hematoxylin and eosin (H&E) staining in tumors. Results ENT in combination with the chemotherapy VCR has synergistic antitumor activity in a subset of fusion-negative eRMS in orthotopic “allografts,” although PDX mouse models were too hypersensitive to the VCR dose used to detect synergy. Mechanistic studies involving CRISPR suggest that HDAC3 inhibition is the primary mechanism of cell-autonomous cytoreduction in eRMS. Following cytoreduction in vivo, residual tumor cells in the allograft models treated with chemotherapy undergo a dramatic, entinostat-induced (70–100%) conversion to non-proliferative rhabdomyoblasts. Conclusion Our results suggest that the targeting class I HDACs may provide a therapeutic benefit for selected patients with eRMS. ENT’s preclinical in vivo efficacy makes ENT a rational drug candidate in a phase II clinical trial for eRMS. Electronic supplementary material The online version of this article (10.1186/s13395-019-0198-x) contains supplementary material, which is available to authorized users.
With the addition of a targeted therapy, short-term survival in relapsed RMS is improving measurably; notably, the ARST0921 Children's Oncology Group (COG) clinical trial for relapsed RMS was stopped early. The 6-month event-free survival (EFS) for temsirolimus plus vinorelbine and cyclophosphamide chemotherapy was superior to the 6-month EFS for bevacizumab plus the same chemotherapy (65% vs 50%, two-sided p value = 0.0031) [13]. The next task is to determine whether these stepwise changes in 6-month EFS will result in improvements over chemotherapy-only long-term survival rates for metastatic aRMS and eRMS [1,3]. Nevertheless, other new agents are also needed.
Our recent study established the antitumor efficacy of class I-specific histone deacetylase (HDAC) inhibitor, entinostat (ENT) with the chemotherapy, vincristine (VCR) in preclinical cell and mouse models and found HDAC3 inhibition as the primary mechanism of decreasing PAX3:FOXO1 expression in fusion positive aRMS [14]. Herein, we demonstrate that ENT in combination with the chemotherapy VCR has synergistic antitumor activity in a orthotopic fusion-negative eRMS mouse model although our studies of patient-derived xenograft (PDX) mouse models had too strong of VCR single-agent activity to detect synergy. Mechanistic studies involving CRISPR suggest that HDAC3 inhibition is the primary mechanism of cell-autonomous cytoreduction in eRMS. Our findings demonstrate that targeting class I HDACs using ENT is a novel, clinically feasible epigenetic therapy for children with fusion-negative eRMS-a result which is in line with our parallel studies in fusion-positive RMS [14].
Cell culture
Murine primary tumor cell cultures U57810 and U37125 were generated as described previously [15]. Briefly, for the establishment of murine eRMS primary cell cultures, tumor samples were minced into small fragments followed by collagenase treatment (0.5%) overnight at 4°C . The disassociated cells were incubated in DMEM supplemented with 10% fetal bovine serum (FBS) and penicillin (100 U/mL)/streptomycin (100 μg/mL) in 5% CO 2 in air at 37°C. Human eRMS RD and Rh18 cell lines were cultured in RPMI 1640 growth medium supplemented with 10% FBS and 1% penicillin/streptomycin and incubated at 37°C and 5% CO 2 . Primary Human Skeletal Muscle Myoblasts (HSMM) were cultured in growth medium (Cell Applications) supplemented with 10% FBS and 1% penicillin/streptomycin and incubated at 37°C and 5% CO 2 .
RNA extraction and RT-PCR
For murine eRMS cell treatment studies, U57810 cells were treated with VCR (2.5 nM), ENT (400 nM), or VCR and ENT (2.5 nM and 400 nM, respectively) for 6 days. RT-PCR was performed for Myoglobin, Myh1, MyoD, and Myogenin relative to Gapdh using probes from SYBR green (Thermo Fisher Scientific).
VCR: chemotherapy agent
VCR was obtained from Sigma (V8879).
Orthotopic allograft studies
Allograft studies were conducted with IACUC approval at the Oregon Health & Science University. The orthotopic allograft mouse model of eRMS (U57810, genotype Myf5Cre, p53) was generated by injecting SCID Hairless Outbred (SHO) mice at 8 weeks of age with cardiotoxin in the right gastrocnemius muscle 1 day before injection of 10 6 U57810 cells in the muscle. Treatment was started once the tumors reached 0.25 cm 3 . Mice were treated with ENT 5 mg/kg/day by intraperitoneal (IP) injection, VCR at a dose of 1 mg/kg weekly by IP injection, or in combination at the same dose and treatment ended when the tumors reached 1.5 cm 3 . During treatment, mice with significant body weight loss approaching (10-15%) were euthanized early per protocol.
Patient-derived xenograft models at Champions Oncology
The Champions Personalized TumorgraftTM chemosensitivity test was conducted using a TumorGraft model established from a resection of sarcoma, which was removed from the abdomen. "Chemosensitivity" test is used for the patient drug sensitivity testing where we are guiding patients' clinical therapies by their PDX model responses. In this instance, it is ENT +/− VCR testing. The explant was received and immediately implanted into immunodeficient mice for the purpose of propagating the tumor for the test. All test agents were formulated according to the manufacturer's specifications. Beginning day 0, tumor dimensions were measured twice weekly by a digital caliper and data, including individual and mean estimated tumor volumes (mean TV ± SEM), are recorded for each group. Tumor volume was calculated using the formula: TV = width 2 × length × π/ 2. At study completion, percent tumor growth inhibition (%TGI) values were calculated and reported for each treatment group (T) versus control (C) using initial (i) and final (f ) tumor measurements by the formula: %TGI = [1 − (Tf − Ti)/(Cf − Ci)] × 100. Individual mice reporting a tumor volume > 120% of the day 0 measurement are considered to have progressive disease (PD). Individual mice with neither sufficient shrinkage nor sufficient tumor volume increases are considered to have stable disease (SD). Individual mice reporting a tumor volume ≤ 70% of the day 0 measurement for two consecutive measurements over a 7-day period are considered partial responders (PR). If the PR persisted until study completion, percent tumor regression (%TR) is determined using the formula: %TR = (1 − Tf/Ti) × 100; a mean value is calculated for the entire treatment group. Individual mice lacking palpable tumors for two consecutive measurements over a 7-day period are classified as complete responders (CR). All data collected in this study was managed electronically and stored on a redundant server system. All animal studies were conducted with the approval of Champions Oncology's International Animal Care and Use Committee (IACUC).
Patient-derived xenograft models at The Jackson Laboratory
The Jackson Laboratory established each PDX model using NSG (NOD.Cg-Prkdcscid IL2rgtm1Wjl/SzJ) mice. Tumor explants obtained from the patients were immediately implanted into the rear flanks of recipient female NSG (JAX # 5557) mice using a trochar. For tumors reaching about 2000 mm 3 , they were collected and passaged for serial transplantation in NSG mice to create low-passage fragments or cohort for future studies. The criteria for enrollment was the tumor volume range of 150-250 mm 3 . Mice were treated with vehicle or ENT/ VCR as a single agent or combination at dose and route of administration provided in Additional file 6: Table S1 until tumors reached 2000 mm 3 or reached study day 28. The antitumor activity of ENT and VCR was tested. All compounds were formulated according to the manufacturer's specifications. Beginning day 0, tumor dimensions were measured twice weekly by a digital caliper and data including individual and mean estimated tumor volumes (mean TV ± SEM) recorded for each group; tumor volume was calculated using the formula: TV = (width)2 × length/2. All animal studies were conducted with the approval of The Jackson Laboratory IACUC.
RNAseq
RNA sequencing was performed on four eRMS cultures (human cell lines RD and Rh18, mouse cell cultures U37125 and U57810). To identify transcriptional changes in eRMS following ENT treatment, each sample was treated with ENT for a fixed period of time alongside a paired untreated sample. All samples were treated for with ENT (2 μM) for 72 h. This dose was chosen based on our previous publication [16] where murine rhabdomyosarcoma primary tumor cell cultures were incubated with varying concentration of ENT for 72 h. The cytotoxic effect was then assessed by MTS assay. All cells were cultured on 10-cm dishes, and treatment began when plates were 60% confluent. Passages lower than 7 were used for all mouse cultures. Bioinformatic analysis of RNAseq performed as described in our earlier study [14].
CRISPR screening in eRMS
Cell culture, sgRNA designs, and virus production Murine eRMS tumor cell line (U57810) expressing hCas9 was derived from retroviral transduction of MSCV-hCas9-PGK-Puro vector into a parental U57810 cell line, followed by puromycin selection (1 μg/mL). All sgRNAs were designed using http://crispr.mit.edu/ with high-quality scores (> 70) to minimize off-target effects and were subsequently cloned into U6-sgRNA-EFS-GFP construct. HDAC sgRNAs were designed to specifically target the deacetylase catalytic domains as previously described [17]. Rosa26 and Rpa3 sgRNAs were used as negative and positive controls, respectively. Lentiviral sgRNA constructs were transfected together with viral packaging vectors (pPAX2: VSVG) using the standard protocol for PEI reagent (23966, Polysciences) into HEK293T cells. Viral supernatants were collected between 24 to 48 h post-transfection and passed through a 0.45-μm filter.
sgRNA/GFP competition assays
To evaluate sgRNA effects on eRMS cell proliferation, U57810 expressing Cas9 cells were transduced with sgRNA virus, followed by flow cytometry analysis of GFP/sgRNA+ populations using a Guava Easycyte HT instrument (Millipore) over a course of 16 days after viral infection. GFP percentages at indicated time points on histograms were normalized to day 2 GFP percentages post-infection.
Statistical analysis
The statistical test used has been described in our previous study [14]. Briefly, in the case of the orthotopic mouse model of murine eRMS, failure is defined as an event for tumor size greater than or equal to 1.2 cm 3 . Treatment groups were contrasted on the mean with analysis of variance in log units. Time to event distributions were summarized with Kaplan-Meier curves, and the significance of variation with the treatment group was assessed with log-rank tests. Corrections for multiple comparisons were made with the Dunnett method for ANOVA and with the Bonferroni method for log-rank testing. Statistical testing on means and time to event was two-sided with a nominal significance level of 5% and was carried out with R. The significance of variation intumor volume with treatment was assessed in PDX models using a repeated-measureslinear model with an autoregressive order 1 autocorrelation matrix and a Tukey correction for multiple comparisons in terms of treatment, day, and the treatment × day interaction. All statistical testing was two sidedwith a 5% experiment-wise significance level and all analyses were carried out in log10 units. SAS version 9.4 for Windows (SAS Institute) was used throughout. Statistical significance was set at *P < 0.05, **P < 0.01, and ***P < 0.001. Error bars indicate mean ± SD or SEM.
ENT in combination with vincristine slows eRMS tumor growth and induces myodifferentiation in vivo
We tested the efficacy of ENT and VCR as single agents and in combination in orthotopic allograft mouse models of eRMS. eRMS does not harbor Pax3:Foxo1 fusion, and thus, response was not expected to mimic fusion-positive RMS [14]. The eRMS model was generated by injecting murine eRMS primary cell cultures into the cardiotoxin-preinjured gastrocnemius muscle of SHO mice. ENT as a single agent showed minimal antitumor activity in this embryonal model of RMS; however, in these fusion-negative (Pax3:Foxo1 non-expressing) mice, treatment with the combination of ENT plus VCR reduced tumor volume significantly (Fig. 1a, b).
Since combined treatment with ENT and VCR caused a reduction in volumes of eRMS tumors (despite not carrying Pax3:Foxo1 fusion gene), we investigated whether treatment with ENT and VCR contributed to rhabdomyoblastic differentiation. Residual end-treatment tumors were evaluated histologically, and rhabdomyoblastic differentiation was scored for all the four treatment groups. The percentage of rhabdomyoblasts in the tumors from mice treated with the combination of ENT and VCR was prominent (83% rhabdomyoblasts on average) compared to mice treated with ENT alone (21%) or VCR alone (20%). Representative histology of each treatment group is provided in Fig. 1c. Mice bearing eRMS treated with the combination of VCR and ENT had very few mitotic figures (3.5% mitotic figures on average) in comparison to mice treated with VCR only (12%)-suggesting the rhabdomyoblasts were quiescent or non-proliferative rhabdomyoblasts (Additional file 1: Figure S1a-c). These studies supported that eRMS tumor cells that escaped cytoreduction were subject to rhabdomyoblastic differentiation-but only when ENT is combined with a specific chemotherapeutic agent (VCR). This result is consistent with clinical observations of rhabdomyoblastic differentiation induced by treatment stress [18].
ENT has single-agent activity in eRMS patient-derived xenografts
We investigated the antitumor efficacy of ENT and VCR as single agents and in combination for four biologically independent patient-derived xenograft mouse models of eRMS. The dosing details are given in Additional file 6: Table S1a-b, and the PDX model characteristics are given in Additional file 6: Table S2. Two of the four models were from recurrent and/or metastatic tumors taken from a biopsy or rapid autopsy. All of these contemporary models were established after 2010. In half of the cases, ENT showed single-agent activity in tumor growth inhibition relative to control (Fig. 2a-d). These eRMS models were hypersensitive to the VCR dose used in aRMS PDX models; thus, the synergy between ENT and VCR could not be assessed in this study. Statistical summaries of four different PDX eRMS are given in Additional File 6: Table S2-S6. Residual end-treatment tumors were examined histologically, and rhabdomyoblastic differentiation was scored for the CTG-1213 PDX mouse model. No difference existed between different treatment groups in terms of rhabdomyoblast differentiation except for combination (ENT + VCR) showing moderate differentiation (20%) (Additional file 6: Table S7).
ENT has single-agent activity in PleoRMS patient-derived tumorgraft xenografts
We next investigated the antitumor efficacy of ENT and VCR as single agents and in combination in three biologically independent patient-derived tumorgraft xenograft mouse models of pleoRMS. PDX model characteristics are given in Additional file 6: Table S8. In all the cases, ENT had single-agent activity relative to control (Fig. 2e-g). Statistical summaries of three different PDX pleoRMS models are given in Additional file 6: Table S9-S11. Residual end-treatment tumors were examined histologically, and rhabdomyoblastic differentiation was scored for the CTG-800 PDX mouse model, which showed the best response to treatment. There was no difference seen between different treatment groups in terms of rhabdomyoblast differentiation (Additional file 6: Table S12). Representative histology of each treatment group is provided in Additional file 2: Figure S2.
HDAC3 inhibition is responsible for tumor cell growth inhibition in eRMS
Our recent study showed HDAC3 inhibition by ENT as the primary mechanism in aRMS [14]. To determine whether inhibition of a specific HDAC3 target of ENT was responsible for cytoreduction in eRMS, we performed CRISPR-Cas9-mediated targeting of the deacetylase domains of HDAC3 in the U57810 murine eRMS primary tumor cell culture which were then tracked for cell viability over 16 days. The HDAC3 CRISPR constructs reduced the viability of eRMS cells (Fig. 3a) whereas other HDAC CRISPR did not reduce the cell viability (Additional file 3: Figure S3a-d). These results suggest that in eRMS, HDAC3 is a cell-autonomous survival factor. We have previously performed a cell viability assay in murine eRMS culture, U57810 (in which we carried out CRISPR-mediated depletion of HDAC3), using entinostat. In U57810, a dose-dependent change in cell viability at 72 h is present (IC25 1.2 μM; IC50 3.9 μM) (data not shown). By comparison, CRISPR HDAC3 depletion in the same cell line showed a minimal effect at day 4 and a maximum effect at day 16. These data are consistent and argue for a prolonged time-dependent effect on cell viability in the cell-autonomous experimental context.
To determine whether myogenic differentiation was a cell-autonomous effect of ENT-VCR combination therapy in eRMS, U57810 cells were treated with VCR and ENT individually and in combination for 6 days. RT-PCR measuring the mRNA expression of four different muscle differentiation markers showed no apparent differentiation effect in vitro (Additional file 3: Figure S3e). In addition, we performed siRNA-mediated knockdown of HDAC3 in eRMS cell line RD and analyzed for Figure S4). This is in contradistinction to the observed significant in vivo change in rhabdomyoblast counts for this murine eRMS model. This difference between in vitro and in vivo results raises the possibility that species-specific tumor microenvironment factors are critical to the myodifferentiation effect observed in vivo factors that may have been present in allografts but not xenografts. Table S2 & S8 respectively. Mice were treated with vehicle, ENT (4 mg/kg or 5 mg/kg), and vincristine (0.75 mg/kg) as single agents and in combination in each treatment groups and average tumor volume were plotted until the end-point of the experiment. Detailed treatment schedules are given in Additional file 6: Table S1. Statistical summary for the response to treatment is given in Additional file 6: Table S4-S6 for eRMS PDX models and in Additional file 6: Table S9-S11 for pleoRMS. Effectiveness of VCR precluded drawing any conclusions of ENT-VCR combination therapy in eRMS models
Transcriptional reprogramming by ENT differs for aRMS versus eRMS
We next turned to short-term RNAseq in order to explore whether ENT induced tumor cell production of factors related to remodeling the tumor microenvironment. We have previously shown that tumor cells can interact with muscle stem cells (i.e., satellite cells) in an IL-4R-dependent manner to accelerate tumor progression [19]. An integrated dataset was constructed for eRMS consisting of differential expression data and HDAC1, 2, 3, or 11 binding data (Fig. 3b). This merged dataset identified several genes of interest. To understand the broad changes in the transcriptional program for eRMS due to ENT treatment, Gene Ontology analysis [20] was carried out using PANTHER version 14.0 to identify key upregulated or downregulated processes. This ontology analysis of eRMS identified several key upregulated muscular programs: muscle contraction, muscle system process, muscle organ development, muscle structure development, and cell differentiation (Additional file 7: Table S14), which surprisingly does not translate into in vitro myodifferentiation as seen in vivo with the possibility that tumor microenvironment factors are critical to the myodifferentiation effect observed in vivo in response to ENT treatment.
However, more prominent than any other feature of ENT-regulated genes was the cohort of cytokine (CCL2, CSF1, CXCL1), growth factor (IGF2), and extracellular matrix genes (ECM1, SERPINE1) (Fig. 3b). These factors are associated with both myoblast/myofiber communication and differentiation [21] and macrophage interactions [22]. However, when re-examining ENT-VCR-treated mice, macrophage infiltration was increased only in areas of necrosis, not elsewhere. No difference in macrophage infiltration was observed between treatment groups (Additional file 5: Figure S5). Taken together, these results suggest that following cell-autonomous tumor cell death caused by HDAC3 inhibition, residual tumor cells under chemotherapeutic stress are induced to express cytokines and growth factors. These secreted factors may in turn lead to non-macrophage tumor microenvironment interactions facilitating rhabdomyoblastic differentiation of adjacent tumor cells.
Discussion
Our previous study uncovered that cell of origin epigenetically influenced transcription of the PAX3:FOXO1 oncogene in fusion-positive RMS [16]. This observation led to the investigation of pharmacological modifiers of ENT, a potent and selective Class I HDAC inhibitor, reduced the abundance of the tumor-driving PAX3:FOXO1 mRNA and protein expression [14]. Further investigation revealed HDAC3-SMARCA4-miR-27a-PAX3:FOXO1 circuit as a critical driver of chemo-resistant fusion-positive RMS [14]. The purpose of the current study was to investigate the preclinical efficacy of ENT in the other and more common subtype, eRMS [23]. ENT in combination with the chemotherapy agent VCR showed strong antitumor activity in eRMS orthotopic mouse models, antitumor activity in eRMS PDX models as a single agent in half of the cases, and a range of single-agent activity in pleoRMS PDX models. Although our in vitro concentration of ENT was higher at 400 nM and 2000 nM, our preclinical dose in eRMS and pleoRMS PDX models were clinically comparable (or slightly under-dosed) and similar to our recent study of ENT in aRMS [14]. In addition, the concentration of 400 nM used in vitro was less than the reported highest achievable maximum drug serum concentration (C max ) for ENT of 1000 nM [24], and the 2000 nM concentration was comparable to the very maximum dose of ENT used in our previous study [14] to see a dose-dependent effect of ENT in fusion-positive RMS.
Our CRISPR studies of the HDAC targets of ENT suggest that HDAC3 is a key factor to eRMS cell-autonomous tumor cell survival. Following cytoreduction in vivo in orthotopic allografts, residual tumor cells treated with chemotherapy undergo a dramatic, ENT-induced (70-100%) conversion to non-proliferative rhabdomyoblasts, raising the possibility that myogenic differentiation could be a true therapeutic goal [25]. Rhabdomyoblasts remaining at the end of multimodal therapy rarely if ever lead to disease recurrence and may truly represent a differentiated state [26]. Our studies further narrow the mechanism of this effect to the potential interplay of tumor cells secreting cytokines and non-malignant cells and the tumor microenvironment to cause this tumor cell differentiation. In eRMS PDX models, cytoreduction was a consistent feature, but myogenic differentiation was noticeably absent. This result is in contrast to the recently published work, which showed that HDAC3 knockout arrests tumor growth and induces myogenic differentiation both in vitro and in vivo [27].
Conclusion
In summary, our data suggest that targeting Class I HDACs may provide a therapeutic benefit for patients with eRMS. The single agent preclinical in vivo efficacy of ENT begets exploration of a combination with chemotherapy in a clinical trial for eRMS-although our studies of PDX models had too strong of VCR single-agent activity to detect synergy. The synergy in vivo of ENT in combination with chemotherapy for fusion-negative eRMS orthotopic allograft models and unexpected myodifferentiation effect brings new possibilities that epigenetic modifiers can not only cytoreduce tumors but also reprogram cancer cells towards a non-tumorigenic cell fate as a desired therapeutic outcome.
Additional files
Additional file 1: Figure S1. Phosphohistone H3 (pHH3) expression and rhabdomyoblast count in VCR and ENT + VCR-treated eRMS. Additional file 5: Figure S5. Representative immunohistochemistry for CD68 of mouse eRMS tissue and primary cells. Necrotic tissue (a) showed few macrophages present, while viable tumor (b) showed a collection of many macrophages. Macrophage presence was observed as being the same for all treatment groups. (TIF 2929 kb) Additional file 6: Table S1. Treatment schedule for PDX models. Table S2. Patient history of PDX eRMS models. | 5,408.2 | 2019-05-21T00:00:00.000 | [
"Medicine",
"Biology"
] |
Phenomenology of Baryogenesis from Lepton-Doublet Mixing
Mixing lepton doublets of the Standard Model can lead to lepton flavour asymmetries in the Early Universe. We present a diagrammatic representation of this recently identified source of $CP$ violation and elaborate in detail on the correlations between the lepton flavours at different temperatures. For a model where two sterile right-handed neutrinos generate the light neutrino masses through the see-saw mechanism, the lower bound on reheat temperatures in accordance with the observed baryon asymmetry turns out to be $\gsim 1.2\times 10^9\,{\rm GeV}$. With three right-handed neutrinos, substantially smaller values are viable. This requires however a tuning of the Yukawa couplings, such that there are cancellations between the individual contributions to the masses of the light neutrinos.
Introduction
Observational and theoretical studies of mixing and oscillations are typically concerned with neutral particle states. Important examples are neutral meson mixing, the oscillations of Standard Model (SM) neutrinos [1] and Leptogenesis through the mixing of sterile right-handed neutrinos (RHNs) in the early Universe [2][3][4][5]. In contrast, for charged particles in the SM at vanishing temperature, mass degeneracies between different states are not strong enough to produce observable phenomena of mixing and oscillations. This does however not preclude the fact that these effects are present in principle. Moreover, it has been demonstrated that the mixing of lepton doublets (which are gauged) can be of importance for Leptogenesis [6][7][8][9][10]: At high temperatures, the asymmetries are in general produced as superpositions of the lepton doublet flavour eigenstates of the SM. In the SM flavour basis, this can be described in terms of off-diagonal correlations in the two-point functions, or alternatively in effective density-matrix formulations in terms of correlations of charge densities of different flavours. At smaller temperatures, interactions mediated by SM Yukawa couplings become faster than the Hubble expansion, such that the flavour correlations decohere. In particular, the SM leptons receive thermal mass corrections as well as damping rates that lift the flavour degeneracy. By now, these effects have been investigated in detail. It turns out that due to the interplay with gauge interactions, the flavour oscillations that may be anticipated from the thermal masses are effectively frozen, while the decoherence proceeds mainly through the damping effects, i.e. the production and the decay of leptons in the plasma [9,10]. The appropriate treatment of these flavour correlations turns out to be of leading importance for the washout of the asymmetries from the out-of-equilibrium decays and inverse decays of the RHNs.
The origin of the charge-parity (CP ) asymmetry for Leptogenesis is usually attributed to the RHNs and their couplings [11]. In the standard calculation, when describing the production and the decay of the RHNs through S-matrix elements, one can diagrammatically distinguish between vertex and wave-function terms. The presence of finitetemperature effects as well as the notorious problem of correctly counting real intermediate states in the Boltzmann equations [12] have motivated the use of techniques other than the S-matrix approach: It has been demonstrated that the wave-function contribution can alternatively be calculated by solving kinetic equations (that are Kadanoff-Baym type equations which descend from Schwinger-Dyson equations, see Refs. [13][14][15][16][17] on the underlying formalism) for the RHNs and their correlations, or equivalently, by solving for the evolution of their density matrix [18][19][20][21][22][23][24][25]. The vertex contributions to the decay asymmetry can be obtained within the Kadanoff-Baym framework as well, as it is shown in Refs. [26][27][28][29][30][31][32]. We note at this point that it has more recently been argued that the asymmetry from the wave-function correction and the contribution from the kinetic equation are distinct contributions that should be added together [33,34]. However, it is shown in Refs. [19][20][21] that the kinetic equations derived from the two-particle irreducible effective action capture all contributions of relevance for the CP asymmetry at leading order, which also encompasses the wave-function corrections.
The calculations for Leptogenesis based on Schwinger-Dyson equations on the Closed-Time-Path (CTP) can also be applied to Leptogenesis from oscillations of light (masses much below the temperature) RHNs [35], also known as the ARS scenario after the authors of Ref. [36]. In this approach, we can interpret the CP violation as originating from cuts of the one-loop self energy of the RHNs, that are dominantly thermal. It can be concluded that thermal effects can largely open the phase-space for CP -violating cuts that are strongly suppressed for kinematic reasons at vanishing temperature.
Putting together the elements of flavour correlations for charged particles and of thermal cuts, we can identify new sources for the lepton asymmetry, in addition to the one from cuts in the RHN propagator. In models with multiple Higgs doublets, Higgs bosons may be the mixing particles [37], whereas in minimal type-I see-saw scenarios (with one Higgs doublet), this role can be played by mixing SM lepton doublets [38]. Yet, the RHNs remain of pivotal importance because due to their weak coupling, they provide the deviation from thermal equilibrium that is necessary for any scenario of baryogenesis.
While the set of free parameters of the type-I see-saw model will remain underconstrained by present observations and those of the foreseeable future, the parameter space in that scenario is still much smaller than in models with multiple Higgs doublets. For our phenomenological study, we therefore choose to consider the mixing of lepton doublets in the see-saw scenario, that is given by the Lagrangian In short, the scenario of baryogenesis from mixing lepton doublets can be described as follows [38]: Flavour-off diagonal correlations from the mixing of active leptons ℓ a (where a is the flavour index) can induce the production of lepton flavour asymmetries, corresponding to diagonal entries of a traceless charge density matrix in flavour space. Different washout rates for the particular flavours may then lead to a net asymmetry in total lepton number, i.e, a non-vanishing trace of the charge density matrix. Now, since off-diagonal correlations due to mixing vanish in thermal equilibrium, the mixing of lepton doublets that we aim to describe consequently is an out-of-equilibrium phenomenon. It is thus natural to assume that initially, when the primordial plasma is close to thermal equilibrium, all correlations between the SM lepton flavours vanish. Therefore, we are interested in possibilities of generating these dynamically. Due to gauge interactions, the distribution functions of the SM particles should track their equilibrium forms very closely. Moreover, gauge interactions are flavour-blind, so they can neither generate flavour correlations nor destroy these (up to the indirect effects that we discuss below). Sizeable off-diagonal correlations can however be induced through couplings to the RHNs N, the distributions of which can substantially deviate from equilibrium. The flavour correlations in the doublet leptons ℓ are suppressed however due to the SM Yukawa couplings h with the charged singlets e R and the Higgs field φ, whereφ = (ǫφ) † , and where ǫ is the totally antisymmetric SU(2) tensor. By field redefinitions, we can impose that h and M N are diagonal, which is a common and convenient choice of basis that we adapt throughout this present paper. For simplicity, we therefore write M N i ≡ M N ii .
In this paper, within Section 2, we first review the scenario of Ref. [38]. We improve on the previous discussion by introducing a diagrammatic representation of the mechanism. Moreover, we carefully discuss the generation and the decoherence of lepton flavour correlations at different temperatures, paying particular attention to the fact that both effects take a finite time to fully establish. Section 3 contains a survey of the parameter space of baryogenesis from mixing lepton doublets based on the Lagrangian (1). Under the assumption that only two RHNs are present, we perform a comprehensive scan, given the present best-fit values on the light neutrino mass differences and mixing angles, such that we can identify the point in parameter space that allows for the lowest reheat temperature for which an asymmetry in accordance with observation can result. In addition, we show that for three RHNs, substantially smaller temperatures can be viable, what requires however anomalously large Yukawa couplings of the µ-and the τ -leptons and a cancellation in their contributions to the mass matrix of the light neutrinos. The analysis is however restricted to the strong washout regime, such that it remains an open question of interest whether favourable parametric regions also exist when at least one of the RHNs induces only a weak washout. The concluding remarks are given in Section 4.
2 Generation and Freeze-Out of the Lepton Asymmetry
Diagrammatic Representation of the CP -Violating Source Terms
A detailed derivation of the source term for the asymmetries of the individual lepton flavours is presented in Ref. [38], where the CTP method is employed. Here, we do not reiterate these technical details, but we explain the qualitative form of the main results with the help of a diagrammatic representation of the Kadanoff-Baym equations that arise from the CTP approach. In particular, we express the perturbative approximations to the solutions of these equations diagrammatically. Moreover, we discuss how the mixing of the SM lepton doublets in the CTP formalism can be related to a density matrix formulation of flavour oscillations, that should be familiar e.g. from the problem of oscillations of active neutrinos [39][40][41][42][43].
The CTP formulation of the problem leads to Kadanoff-Baym equations, that we show here in a diagrammatic form in Figure 1(A). One may interpret the Kadanoff-Baym equations as (a subset of) exact Schwinger-Dyson equations, that can only be solved approximately in practice. Since couplings can be assumed to be weak, a perturbative one-loop expansion, that is indicated by Figures 1(B) and (C), amounts to a valid approximation.
We can assume that kinetic equilibrium is established by fast gauge interactions. The distribution functions and the propagators of the SM leptons ℓ are therefore effectively determined by the matrix q ℓab of the charge densities of ℓ and their flavour correlations [9]. The perturbation expansion then explicitly reads q ℓ = q where the superscript (m) stands for mixing and (f) for flavoured asymmetries. The zeroth order term is given by the equilibrium distribution for a vanishing charge density, and therefore q In order to clarify the relation of the present notation with the one used in the derivation of Ref. [38], we make the following remarks: In the present context, the interesting contributions within q (m) ℓ are the off-diagonal correlations of lepton doublets, as these are referred to in Ref. [38]. The leading CP -violating lepton-flavour asymmetries are then contained in q (f) ℓ . Note also that the first term on the right-hand-side of the equation in Figure 1(C) is called the source term in Ref. [38]. . . indicate extra diagrams of different topology than those drawn explicitly and that can be derived from the 2-particle irreducible effective action. Figures (B) and (C), illustrate the scheme that is used in Ref. [38] to obtain approximate solutions, which we also apply in this work. The full propagators for the doublets ℓ are now approximated by the results including flavour correlations. The loops are understood to include gauge-mediated processes and top loops that open up the phase space for the reactions between the particles, that are approximated to be massless, cf. e.g. Ref. [44]. The superscripts (0, m, f) indicate that the charge density matrix that can be computed from the corresponding propagators for ℓ yields the non-zero entries of q (0,m,f) ℓ .
Equations for Mixing Correlations
The first order approximation to the Kadanoff-Baym equations, that is given in Figure 1(B), has the main qualitative features of a density-matrix equation where t denotes time, M is a mass matrix, Γ a matrix that describes relaxation toward equilibrium and Θ a matrix-valued inhomogeneous term. This equation is of a form that is familiar from many applications, among which are neutrino oscillations [39][40][41][42][43] and ARS Leptogenesis [20,36,[45][46][47][48][49]. In particular, it is the commutator term [M, ̺] that induces flavour oscillations among the off-diagonal components of ̺. Now for the present application, we replace the density matrix ρ by the deviations of the lepton and anti-lepton number densities from their equilibrium values δn The off-diagonal components of δn ±(m) ℓ describe the flavour correlations of these nonequilibrium densities. The matrices δn ±(m) ℓ then evolve according to the following equations [38]: Here, η is the conformal time, which is suitable for performing calculations in the background of the expanding Universe. It is determined up to redefinitions of the scale factor, and we make the choice that the physical temperature in the radiation-dominated Universe is T = 1/η, what determines the comoving temperature used in above equations to be where m Pl is the Planck mass and g ⋆ the number of relativistic degrees of freedom. The Eqs.
(3) are derived in Ref. [38] by integrating the corresponding Kadanoff-Baym equations for distribution functions of lepton and antileptons and their flavour correlations over the spatial momentum p, and they are given diagrammatically in Figure 1(B). For the distribution functions, we have assumed that gauge interactions maintain these to be of the Fermi-Dirac form with a matrix-valued chemical potential, as explained in Ref. [9]. This allows to uniquely relate the momentum-space distributions to the number densities δn ℓ in Eq. (3). We now give explaining remarks on the individual terms in Eqs. (3): • The terms ∝ (h 2 aa − h 2 bb ) are present due to thermal masses and correspond to the commutator term in the density-matrix equation (2). Notice the different sign of these terms in the equations for lepton and anti-lepton densities, that was first noted in Ref. [9]. This is what makes it necessary to treat the lepton and antilepton densities differently, rather than considering the matrix of charge densities and their and correlations q • The terms involving B Y i describe the decays and inverse decays of sterile neutrinos into the active leptons. Provided the distribution of the sterile neutrinos N deviates from thermal equilibrium, these processes induce the flavour correlations of active leptons through the off-diagonal components (a = b) of Y † ai Y ib in first place. In this work, we restrict ourselves to the parametric regime where the freeze-out value of the lepton asymmetry is determined at times when the the distribution of the N i is dominated by non-relativistic particles, commonly referred to as strong washout. For this situation, we can assume that M N i ≫ T , and approximate the rate for decays and inverse decays using [38] Here, µ N i denotes a pseudochemical potential that can be employed in order to describe the deviation of f N i , which is the distribution function of the sterile neutrino N i , from its equilibrium form f eq N i . For standard Leptogenesis, results for the lepton asymmetry obtained when using a distribution function numerically derived from the Boltzmann equations before momentum-averaging are compared to the results obtained when using the approximation with a pseudo-chemical potential in Refs. [30,50], and the discrepancy between the two methods is found to be small because for strong washout, the RHNs are non-relativistic, such that they mostly populate modes with momenta that are small compared to the temperature. Within the density-matrix equation (2), the B Y i terms correspond to the inhomogeneous term Θ. In the diagrammatic equation Figure 1(B), this is the first term on the righthand side.
• The terms with B / fl ℓ describe the decay of the flavour correlations due to the SM Yukawa interactions, that discriminate between the different lepton flavours. In the density-matrix equation (2), they correspond to the anticommutator term involving the relaxation rate Γ, and in Figure 1(B) to the second term on the right-hand side. The relevant processes involve the radiation of extra gauge bosons or the decay and inverse decay of a virtual Higgs boson into a pair of top quarks, which are understood to be contained in the loops. (Otherwise, the 1 ↔ 2 processes between approximately massles particles would be strongly suppressed kinematically.) The rates for these processes are calculated to LO in Ref. [44], where it is found that γ fl = 5 × 10 −3 (see also Ref. [51] for an earlier estimate that leads to a similar quantitative conclusion and Refs. [52,53] for a recent LO calculation for the production of massless sterile neutrinos, that is closely related). Taking the momentum average of the kinetic equations of Ref. [38], we find that B / fl what differs from the value used in Ref. [38] due to the updated result of Ref. [44]. We note that the averaging implies the assumption that flavour correlations in all momentum modes decay at the same rate, which is not the case in reality. This procedure should therefore incur an order one inaccuracy that may be removed in the future by extra numerical efforts.
• Finally, the contribution with B g ℓ describes pair creation and annihilation processes that drive δn + ab + δn − ab toward zero. It thus forces an alignment between the correlations among the different flavours of leptons and of anti-leptons, that would otherwise perform oscillations with opposite angular frequency, and as a consequence, the evolution of the off-diagonal correlations is overdamped [9,10]. Gauge interactions thus contribute indirectly to the decay of flavour correlations in addition to the direct damping through the Yukawa interactions. In Ref. [38], the relevant momentum average of the pair creation and annihilation rate is estimated as B g ℓ = 1.7×10 −3 T 2 , based on the thermal rates for s-channel mediated processes, that should yield the dominant contribution due to the large number of degrees of freedom in the SM.
Ignoring the derivative with respect to the conformal time η, the solution to Eqs. (3) is given by where and where Ξ is a matrix specified in Eq. (14) below, that selects only those correlations that have enough time to be built up at the temperatures of interest. The flavour matrix Q ℓ can be viewed as the quantity that multiplies the CP -violation originating from the Yukawa couplings Y of the sterile neutrinos. By comparing the powers of h in the numerator and the denominator, we explicitly see that the q (m) ℓab become suppressed if h aa or h bb become large. This is because processes mediated the SM lepton Yukawa interactions lead to decoherence these off-diagonal correlations, as it is familiar from flavoured Leptogenesis [7][8][9]. Note that these decoherence effects also avoid the resonance catastrophe that would occur for h aa → h bb in their absence.
Neglecting the time derivatives is justified provided that there is enough time for the flavour correlations to adapt to the change in the non-equilibrium density of the sterile neutrinos. From Eqs. (3), it can be seen that rate for the flavour correlations to build up is given by This implies that the correlations do not build up in case the entries of h are so small that Γ − q ℓab < ∼ H when the right-handed neutrinos go out of equilibrium. One should expect this, because in the limit h → 0, the system is flavour blind and there should be no dynamical generation of correlations. For the subsequent discussion, it is useful to compare the rate for the build up of correlations (9) with the more commonly employed rate of flavour equilibration which is valid for |h xx | ≫ |h yy | and for the size of the particular lepton Yukawa couplings as in the SM.
Equations for the Flavoured Asymmetries
In deriving Eq. (8), we have accurately taken account of the impact of the gauge and the Yukawa couplings on generating and also damping off-diagonal correlations in q (m) ℓ . In the equation represented by Figure 1(C), the same couplings enter once more. At this level, we adapt from the usual calculations on flavoured Leptogenesis [7,8] the simplifying approximation that flavour correlations in q (f) ℓab are either unaffected (unflavoured regime) or completely erased (fully flavoured regime). We note that a detailed calculation as for q (m) ℓ should have the result that in the fully flavoured regime, the according components of q (f) ℓab are suppressed rather than fully erased, what would lead to sub-leading corrections to the present calculations. These flavour effects suggest to distinguish between the following regimes: Leptogenesis from mixing lepton doublets should therefore be inefficient at these temperatures.
ℓ , the correlations involving τ are erased due to decohering scattering. In summary, the non-zero entries of the charge-density matrices are given by When above constraints on the rates for the creation and for the decay of flavour correlations are not sufficiently saturated, a treatment of incomplete flavour decoherence is in order. This has been put forward in Ref. [9], but it is yet numerically challenging and requires further developments. For the numerical examples presented in this paper, it turns out however that the approximations as described for regime B1 should be appropriate.
In view of these considerations for the generation of flavour correlations, we can now write down the expressions for the matrix Ξ introduced in Eq. (7), that depend on the temperature regime in which Leptogenesis takes place: Note that the diagonal components are actually irrelevant, because Q ℓaa = 0. From above explicit expressions that indicate the non-zero entries of q (m) ℓ and q (f) ℓ , we see that these charge-density matrices are complementary, what justifies the decomposition of the Kadanoff-Baym equations done in Ref. [38], which is represented here in the Figures 1(B) and (C). We should still make a remark though on the fact why we do not consider terms of order Y 2 that multiply q (m,f) ℓ on the right-hand side of Figure 1(B). The reason is that by virtue of the requirement Γ − qℓab > H for the non-zero correlations q (m) ℓab , the relation (h 2 aa + h 2 bb )γ fl > H should be amply fulfilled as well. Therefore, the flavour damping mediated by the SM Yukawa couplings is more efficient in suppressing the off-diagonal correlations in q (m) ℓab than the damping induced by the couplings Y , that we therefore neglect. Note however, that in the equation represented by Figure 1(C), rates of order Y 2 multiply q (f) ℓ and q (m) ℓ . In particular, the second term on the right hand side is the washout term that suppresses the diagonal charge densities and those of the off-diagonal components, that are unaffected by flavour effects. The second term on the right-hand side is the source term, that we discuss next.
The charge correlations q (m) ℓ , as given by Eq. (7), are of an out-of-equilibrium form, which can simply be inferred from the fact that they are purely off-diagonal. Therefore, they give rise to a non-vanishing source of correlations for entries of the matrix q (f) ℓ , which may be diagonal or non-diagonal. The first term on the right-hand side of Figure 1(C) is the CP violating source that creates flavour asymmetries at the rate Note that the flavour structure of this equation takes the anticommutator form present in Eq. (2). When compared with the corresponding result of Ref. [38], we have generalised this source of lepton flavour asymmetries such that it also includes the off-diagonal correlations that are generated and that should be relevant for the regime A2.
Comparision with the Source Term for Conventional Leptogenesis
It is of course of interest to compare the asymmetry from lepton mixing with the standard asymmetry from the decays and the mixings of sterile neutrinos. For this purpose, we make use of the standard decay asymmetry including flavour correlations, that is given by [2,9,54] where We should compare the function ξ(x) with Q ℓab given in Eq. (8), since both play the role of loop factors, that multiply the CP asymmetry that is present in the Yukawa couplings Y . Inspecting Q ℓab and ignoring first those of the denominator terms that involve B / fl , we observe an enhancement ∼ 1/(h 2 aa − h 2 bb ), which one might naively guess when expecting that the source from lepton mixing is enhanced by the difference of the square of the thermal masses of the leptons. Within ξ(x j ), the corresponding enhancement is explicitly present in the terms ∼ 1/(M 2 N i −M 2 N j ). However, within Q ℓab , the denominator terms that involve B / fl indicate that this enhancement is limited by the damping of the flavour correlations, which induce CP violation from mixing. We observe two different types of damping: First, the terms ∼ B / fl are due to the decoherence of correlations due to scatterings mediated by the SM-lepton Yukawa couplings h. Second, the terms ∼ B / fl B g originate from the effect that leptons and anti-leptons oscillate with opposite frequencies. In conjunction with pair creation and annihilation processes, that mediate between leptons and anti-leptons, this leads to flavour decoherence from overdamped oscillations, as it was first described in Ref. [9] and as it is confirmed in Ref. [10]. An appropriate treatment of the resonant limit x → 1 reveals a similar regulating behaviour also for standard Leptogenesis [20][21][22][23][24].
Flavour Correlations and Spectator Effects
When Leptogenesis occurs at high temperatures, where flavour effects are not important, and when the production and the washout of the asymmetry results from decays and inverse decays of the lightest sterile neutrino, which we call N 1 for now in order to be definite, it is convenient to perform the single-flavour or vanilla approximation. It is based on a unitary flavour transformation (of the left-handed SM leptons) such that N 1 only couples to one linear combination ℓ of the left-handed leptons. On the other hand, when Leptogenesis occurs at temperatures below 1.3 × 10 9 GeV, it is advantageous to remain in the basis where the SM Yukawa-couplings h are diagonal. Interactions mediated by these couplings then rapidly destroy off-diagonal flavour correlations. In a practical calculation, we may then just delete the off-diagonal correlations that are induced by the terms (15) and (16) [7,8]. In the range between 1.3 × 10 9 GeV and 2.7 × 10 11 GeV, it is often convenient to perform a two flavour-approximation, where lepton asymmetries are deposited in the flavour τ and in a linear combination σ of e and µ. Correlations between τ and σ are then erased by h τ τ -mediated interactions. In effect, only diagonal correlations in the flavours τ and σ need to be calculated.
However the reduction to a single flavour at high temperatures or to two uncorrelated flavours between 1.3 × 10 9 GeV and 2.7 × 10 11 GeV only works when N 1 is the only righthanded neutrino that is effectively produced or destroyed in decay and inverse decay processes at times relevant for Leptogenesis, which is typical for hierarchical scenarios where M 1 ≪ M 2,3 , and therefore, the heavier of the N i are strongly Maxwell suppressed. Once more than one of the right-handed neutrinos is effectively produced or destroyed, flavour correlations of the active leptons must be taken into account [10,55], because the combinations ℓ or σ are in general different for the individual N i . Now, because for Leptogenesis from mixing lepton doublets, the relevant CP -violating cut is purely thermal and it involves a right-handed neutrino that must be different from the decaying neutrino, we must require that there are at least two sterile neutrinos N 1,2 with masses M N 1,2 that are not hierarchical. The latter condition is imposed in order to avoid a Maxwell suppression of the CP asymmetry 1 . This implies that for regime A2, we must account for lepton flavour correlations, which in general cannot be avoided by a basis transformation due to the dynamical importance of two sterile neutrinos. In regime B, we may proceed by remaining in the basis where the SM-lepton Yukawa-couplings h are diagonal and simply delete the off-diagonal correlations in the source terms (15) and (16).
In the CTP approach, the collision terms including flavour correlations can be derived in a systematic and straightforward manner. The flavour structure turns out to be in accordance with the anticommutator term in the density matrix equation (2). In order to further improve on the accuracy of the present analysis compared to the one presented in Ref. [38], we include the partial redistribution and equilibration of the flavoured asymmetry within the SM particles present in the plasma, the so-called spectator effects [56][57][58]. The relevant processes are mediated by Yukawa interactions as well as by the strong and weak sphalerons. It is then useful to track within the Boltzmann equations those asymmetries and correlations for which the diagonal parts are only violated by the decays and the inverse decays of the sterile neutrinos. These are given by which we have formulated as a matrix-valued quantity in view of the Boltzmann equations including flavour coherence, that we formulate below. Here, the number density of baryons is given by B, and the diagonal number density of leptons of the flavour a by L a , i.e. it accounts for left-and right-handed SM leptons. In addition to the interactions with sterile neutrinos, the flavour correlations in q (f) ℓ are also altered by processes that are mediated by the SM-lepton Yukawa interactions. However, according to our above discussion, we assume that these are either negligible (unflavoured regime) or lead to a complete decoherence (fully flavoured regime). We also note that the quantities denoted by q X are defined here as the charge densities within a single component of the gauge multiplet X. In contrast, ∆ accounts for a the total charge density that is summed over all gauge multiplicities. This implies that if q ℓaa changes by two units, ∆ aa does so by minus one.
In order to obtain the washout rates, we must reexpress the q ℓaa in terms of the ∆ aa . Moreover, there is also an asymmetry in Higgs bosons that depends on the ∆ aa . We obtain these densities through the relations As explained above, the redistribution of the asymmetries due to the spectators only afflicts the diagonal components of q ℓ . For regime A, we take strong sphalerons, weak sphalerons (that couple to the trace of q ℓ ), interactions mediated by Yukawa couplings of the t, b, c quarks and τ leptons to be in equilibrium. This leads to Note that the factors of 1/2 in Eqs. (19) are due to SU(2) doublet nature of ℓ and φ and to our convention of q ℓaa and q φ to account for one component of the SU(2) doublet only.
Boltzmann Equations
With above explanations and remarks, and with the calculational details given in Ref. [38], we put together Boltzmann equations that describe the freeze-out of the lepton asymmetry. In contrast to the most commonly studied scenarios, we now have more than one sterile neutrino in the game. Therefore, we distinguish the asymmetries that are created through the decays and inverse decays of the individual N i as q (m,f)N i ℓ and ∆ N i , such that Eq. (7) is decomposed as Furthermore, we follow the common procedure of expressing the Boltzmann equations in terms of the ratios It is convenient to parametrise the time evolution through variables z i = M N i /T , in terms of which the equations that describe the freeze-out of the asymmetry can be expressed in the following approximate form: where Y eq N i is the equilibrium value of Y N i . Through Eqs. (5,6,7,15), we can identify the source term for Leptogenesis from mixing leptons In the diagrammatic representation, this source corresponds to the first term on the right-hand side of Figure 1(C). According to what we state above regarding the effect of the SM lepton-Yukawa couplings, we should choose such that correlations that are suppressed by Yukawa-mediated interactions faster than the Hubble rate are deleted from the outset. We re-emphasise that a transformation to an effective two-flavour basis is not possible in Regime A when more than one of the sterile neutrinos is involved in the washout [10,55]. The Boltzmann equations (23) apply to standard Leptogenesis as well. In that case, the source is given byS We have defined here theS ℓab in Eqs. (24,26) such that these comply for a = b with the expressions given in Ref. [38], where these are introduced as a source for q ℓ rather than ∆. The various factors −1/2 and −2 therefore account for the ℓ being SU(2) doublets. The decay rate of the sterile neutrino N k in terms of the variable z i is In the non-relativistic approximation, this agrees with its thermal average up to relative corrections of order T /M N k . In the strong washout regime, a substantial simplification arises from the fact that the deviation of the sterile neutrinos from equilibrium is small, such that we can approximate We use this relation for both scenarios, Leptogenesis from mixing leptons as well as from the decay and mixing of sterile neutrinos. The error incurred through this standard approximation is investigated in Refs. [30,50]. Finally, we need to obtain an expression for the washout rateW [Y N i ℓ ], that is of the anticommutator form indicated in the density matrix equation (2) and that should account for the spectator effects. In terms of Feynman diagrams, the washout term corresponds to the second graph on the right-hand side of Figure 1(C). As explained above, washout affects the flavour-diagonal lepton charges and the off diagonal correlations in a different manner, such that it is useful to define where the superscript t indicates a transposition. The matrix components of the washout rate are given by Putting together the washout terms induced by lepton and by Higgs charge densities, we eventually obtain for the washout term The factor of 1/2 in front of the Higgs-induced term can be understood when noting that q ℓ = µ ℓ T 2 /6, whereas q φ = µ φ T 2 /3, where µ ℓ,φ are chemical potentials and the factor two is due to the difference between Fermi and Bose statistics.
Parametrisation of the Yukawa Couplings
Taking diagonal matrices for the sterile neutrino masses M N , the neutrino sector of the model (1) yet encompasses 18 parameters: 3 sterile neutrino masses and 15 parameters in the Yukawa coupling Y . (While the complex 3 × 3 matrix Y has eighteen degrees of freedom, three of these can be absorbed by phase rotations of the SM leptons ℓ.) This large number of parameters is a typical obstacle to comprehensive studies of the parameter space in type-I seesaw models. The Casas-Ibarra parametrisation [59] facilitates to impose the observational constraints from neutrino oscillations by rearranging the Lagrangian parameters into low and high energy categories. The nine high energy parameters are given by M N 1,2,3 as well as three complex angles ̺ 12 , ̺ 13 , ̺ 23 , in terms of which one defines the complex orthogonal matrix where s ij = sin ̺ ij and c ij = cos ̺ ij . The nine low-energy parameters are given in terms of the diagonal mass-matrix of the active neutrinos m ν = diag(m 1 , m 2 , m 3 ) and the six real angles and phases of the PMNS matrix and U ±δ = diag(e ∓iδ/2 , 1, e ±iδ/2 ). In terms of this parametrisation, the Yukawa couplings of the sterile neutrinos are obtained as A considerable, yet generic simplification occurs when one of the three sterile neutrinos decouples, say N 3 for definiteness. This can happen when the Yukawa couplings Y 3a are very small, when M 3 is very large or when we only assume the existence of two sterile neutrinos to start with. Note that such a configuration requires one of the light neutrinos to be massless. If we therefore take m 1 = 0, we imply that Moreover, the Yukawa couplings as given by Eq. (35) then turn out to be independent of α 1 , as an immediate consequence of m 1 = 0. Altogether, in the decoupling scenario, there are 11 Lagrangian parameters (9 parameters in the Yukawa couplings Y after rephasings and two Majorana masses for the sterile neutrinos). These decompose into 4 high energy parameters (M N 1,2 and ̺ 12 ) and 7 low-energy parameters (m 2 , m 3 , three angles ϑ ij and the two phases δ and α 2 ). Out of the latter, 5 have been measured experimentally (∆m 2 , δm 2 ≈ m 2 2 and the three PMNS mixing angles ϑ ij ). The free parameters of the model are therefore M N 1,2 , ̺, α and δ, while for the PMNS mixing angles in our numerical examples, we choose sin ϑ 12 = 0.55, sin ϑ 23 = 0.63 and sin ϑ 13 = 0.16, which are close to the best-fit values determined by current observations [60,61].
The Parameter Space in the Decoupling Scenario
The production rates of the N i , the washout rates of the asymmetries as well as the CP cuts entering the production rates of the lepton asymmetries that are presented in Section 2 apply all to the non-relativistic regime, i.e. when M N i ≫ T for all N i that are involved in a certain rate. In order to employ these results consistently, we should then avoid situations when at times relevant for the freeze-out value of the asymmetry relativistic N i are present. For the purpose of the present analysis, we therefore choose RHN masses that are not hierarchical, while not necessarily degenerate. Besides, for the source from mixing SM leptons, such parametric configurations are also favoured by the fact that the asymmetry from the decay of the lightest of the N i is exponentially suppressed in the case of hierarchical M i , cf. Eq. (24) above and Figure 4 below. In future work, it may be of interest though to consider relativistic RHNs when the asymmetry does not result from the decay of the lightest RHN, as certain flavour correlations may generically survive the washout from the lighter RHNs [55,62,63]. Now, as we observe below, the dependence of the final asymmetry on the relative size of M 1 and M 2 turns out to be mild (cf. Figure 4). Besides, given the relation (35) and the Boltzmann equations from Section 2.3, we see that the value of the freeze-out asymmetry scales proportional to the M N i when keeping the mass ratios fixed. Therefore, a scan over the four-dimensional parameter space defined by ̺ 12 , δ and α 2 yields comprehensive information on the model in the decoupling scenario [with ̺ 23 and ̺ 13 as in Eqs. (36)], given the constraints mentioned above. For the purpose of the scan, we choose the masses of the two RHNs as these are given in Table 1. The remaining values specified in Table 1 correspond to the point that we find in parameter space for which the maximal asymmetry occurs. We use the flavour approximations as specified for Regime B 1 in Section 2.1.2. Table 1: Set of parameters that yields the largest asymmetry for Leptogenesis from mixing lepton doublets in the decoupling scenario (effectively two RHNs only). Table 1.
In Figure 2, we show the freeze-out asymmetry normalised to the observed baryon-minus-lepton asymmetry [64,65] Y obs = 28 79 where the first factor accounts for the conversion to the final baryon asymmetry via sphalerons [66]. We vary parameters in the planes ̺ 12 and δ vs. α 2 , where we fix the remaining parameters as in Table 1. The alignment of some of the contours along δ + α 2 = const. in Figure 1(B) can be attributed to a constant washout of the e-flavour, Table 1. Figure 4: Dependence of the freeze-out asymmetry from lepton mixing on the parameter M N 2 , with the remaining parameters as specified in Table 1 (solid blue). For comparison, we show the asymmetry from standard Leptogenesis for the same parameters (dashed red).
since we find for the Yukawa couplings that (cf. Refs. [49,67]) √ m 2 cos ϑ 13 sin ϑ 12 sin ̺ 12 − e −iδ √ m 3 sin ϑ 13 cos ̺ 12 , (39a) √ m 2 cos ϑ 13 sin ϑ 12 cos ̺ 12 − e iδ √ m 3 sin ϑ 13 sin ̺ 12 . (39b) Next, we validate the assumption of strong washout and non-relativistic RHNs by considering the evolution of the individual flavour asymmetries Y ∆i = Y N 1 ∆ii + Y N 2 ∆ii over the parameter z = M N 1 /T , that is commonly used as the time variable when studying Leptogenesis from massive neutrinos. From Figure 3 (A), we observe that, as it is typical for strong washout scenarios, the freeze-out value of the asymmetry settles when z 10. In order to assess further the validity of the non-relativistic approximation for the RHNs as well as in order to determine the minimum value of the required reheat temperature, we start the integration of the Boltzmann equations at some value z = z ini with vanishing asymmetries as boundary conditions (while for all other numerical results, we start the integration at z = z ini = 0). From Figure 3 (B) we see that the result changes by less than 10% as long as z ini < ∼ 3. This independence of the details of the initial evolution of the asymmetries is a typical feature of the strong washout regime. Given the value of M N 1 from Table 1, we may therefore conclude that the minimum required reheat temperature T reh in the decoupling scenario is T reh > ∼ 1.2×10 9 GeV. Due to the order one uncertainties incurred through the estimate of B g ℓ and the momentum averaging leading to B / fl ℓ , this should be considered as coincident with the bound of T reh > ∼ 2 × 10 9 GeV for standard Leptogenesis [58,68,69]. However, the present optimal (by the criterion of minimising the lower bound on T reh ) point given in Table 1 is clearly distinct from the optimal parametric configurations in standard Leptogenesis because here, we are in the strong washout regime, while for the standard source, the lowest viable reheat temperatures occur in between the strong and the weak washout regimes.
The analysis presented in Figure 3 (B) also justifies the use of the flavour approximations for Regime B 1, valid for temperatures roughly below 1.3 × 10 9 GeV. While in fact, for z < ∼ 3, Our scenario falls into Regime A 2, the final prediction for the asymmetry should not be substantially affected by an inaccurate treatment of the flavour effects at early times.
Finally, we vary M N 2 while keeping the remaining parameters fixed as in Table 1. The resulting normalised freeze-out asymmetry is shown in Figure 4. We thus indeed verify that the ratio of M N 1 to M N 2 has no dramatic influence on the freeze-out asymmetry as long it remains of order one. For comparison, we also show in Figure 4 the asymmetry that arises for the same parameters from standard Leptogenesis. While we clearly see the resonance for M N 2 → M N 1 , away from this narrow enhanced region, the result is small compared to the asymmetry arising from lepton mixing. One should note however that there exist parametric configurations that are more favourable for standard Leptogenesis, in particular when saturating the bound on the reheat temperature from Refs. [58,68,69].
Three Sterile Neutrinos
Adding a third RHN N 3 implies that compared to the case with two RHNs only, the resulting asymmetry depends in addition on M N 3 , α 1 , ̺ 23 , ̺ 13 and the absolute mass scale of the light neutrinos, i.e. there are seven extra parameters. This appears to prohibit a comprehensive analysis of the parameter space in practice. Nonetheless, it is interesting to evaluate the asymmetries for an example point, that would be consistent with a smaller reheat temperature. We discuss how an enhanced asymmetry becomes possible even if generated at lower temperatures and how parameters need to be tweaked in order to arrange for such a situation. Table 2.
In Table 2, we present the point in parameter space for which we determine the freeze-out lepton asymmetry, where it can be seen from Figure 5 that an asymmetry in accordance with the observed value is obtained. Moreover, as exhibited in Figure 5(A), the final asymmetry is dominated by the e-flavour. This can be understood by an inspection of the matrix of Yukawa couplings that satisfies |Y ie | ≪ |Y iµ |, |Y iτ |, such that there is a substantially smaller washout rate for ℓ e than for ℓ µ and ℓ τ . On the other hand, larger Y iµ and Y iτ enhance the asymmetry q ℓee , cf. Eq. (15).
Turning to the parameters in Table 2, we observe the large imaginary part for ̺ 13 , which implies that the Yukawa couplings have a larger magnitude than for configurations with smaller imaginary parts of the ̺ ij . This implies that there is a cancellation in individual terms contributing to the masses of the light neutrinos in the see-saw mechanism, which may be interpreted as parametric tuning. It is noteworthy that situations with large couplings of ℓ µ,τ and relatively small couplings of ℓ e to the RHNs are also favoured in scenarios of Leptogenesis where the CP -violating source arises from the oscillations of relativistic RHNs [35,36,49,63,70], (the so-called ARS scenarios, after the authors of Ref. [36]). We emphasise that however, the source from active lepton mixing is different from the source from RHN oscillations, and while the favoured parametric configurations bear similarities in the pattern of Yukawa couplings, for given masses of the RHNs, the main contributions to the asymmetries are generated in both scenarios at very different temperatures, cf. Ref. [63].
Conclusions
In this work, we have investigated in some detail the possibility of generating the baryon asymmetry of the Universe from the mixing of lepton doublets within the SM extended by two or three RHNs. For this purpose, we have introduced a diagrammatic representation of the underlying mechanism, and we have discussed the dynamics of the flavour correlations of SM leptons at various temperatures. We then have performed a comprehensive parametric study in the setup with two RHNs in the type-I see-saw mechanism. For the case with three RHNs, we have identified a way to achieve lower reheat temperatures that are consistent with the observed baryon asymmetry.
We find that baryogenesis from mixing lepton doublets is a generically viable scenario in the type-I see-saw framework, provided • there are RHNs present in the mass range between 10 9 GeV and 10 11 GeV (cf. the discussion of Section 2.1.1 concerning the upper bound and the numerical findings of Section 3 regarding the lower bound) • and these RHNs are of the same mass-scale, while they do not need to be degenerate.
The lower mass bound on the RHNs and consequently on the reheat temperature can be evaded through a certain alignment of the Yukawa couplings Y , that allows these to be relatively large while the masses of the light active neutrinos remain small, which is possible in the presence of three RHNs, cf. Section 3.3. Methodically, the present calculation draws from formulations of Leptogenesis in the CTP approach that have been applied to the resonant regime [20], to oscillations of relativistic RHNs [35] as well as to the decoherence of active lepton flavours [9]. To this end, we identify the quantities B / fl ℓ and B g ℓ as the main contributors to the theoretical uncertainty. In introducing these, we average over the lepton momentum-modes under the simplifying assumption of identical reaction rates. While such a procedure is common practice in similar calculations for Leptogenesis from oscillations of RHNs (cf. e.g. Ref. [35,45,48]), it would nonetheless be desirable to improve on this approximation in the future by resolving the different reaction rates for each momentum mode.
It is interesting to observe that the minimal reheat temperatures for standard Leptogenesis and for baryogenesis from mixing lepton doublets appear to coincide. For couplings as in the SM, the term (h 2 aa + h 2 bb )B / fl ℓ B g ℓ in the enhancement factor, Eq. (8), numerically dominates in the denominator. Smaller gauge couplings would therefore lead to a larger asymmetry. On the other hand, the size of the gauge couplings does not have a leading influence on the asymmetry for standard Leptogenesis [58,68,69]. Therefore, the similar bound on the reheat temperatures can be attributed to a parametric coincidence.
We also note that the analysis in the present study is valid for the strong washout regime, i.e. the situation where the RHNs can be approximated as non-relativistic during the creation of the asymmetry. It would be interesting to relax this assumption (which should be possible for at least one of the RHNs when three or more RHNs are present altogether) because one may then anticipate substantially larger deviations from equilibrium. In that case, a calculation would however not enjoy the considerable simplifications that arise from treating the RHNs as non-relativistic.
While we have considered here the somewhat minimal framework of the SM augmented by RHNs, our analysis implies that new gauged particles that share the same quantum numbers and that are nearly degenerate or effectively become degenerate at higher temperatures are generic candidates for being involved in creating the matterantimatter asymmetry. This opens new prospects for scenarios of baryogenesis from out-of-equilibrium reactions in the expanding Universe. | 11,692.8 | 2014-11-11T00:00:00.000 | [
"Physics"
] |
Anti-predator behavior in two brown frogs: differences in the mean behaviors and in the structure of animal personality variation
Predation is a major source of selection and prey are known to modify their behavior depending on their past experiences and the current perceived risk. Within a species, variation in experience and in the response to perceived risk combine to explain variation in personality and individual plasticity. Between species, variation in personality and plasticity might also be the evolutionary consequence of different selective regimes. In this study, we describe the anti-predator behavior of two closely related brown frogs, Rana dalmatina and Rana latastei, and compare their structures of personality variation. We raised tadpoles in a common garden experiment with either fish, dragonfly larvae, or no predators. Tadpoles were then repeatedly tested in the presence of the three acute stimuli and their behavioral variation was described in terms of quantity and quality of movements and of path sinuosity. In these tests, tadpoles of both species and ontogenetic treatments responded flexibly to predators by moving less, faster, and with more tortuous movements, and tadpoles raised with predators tended to move even faster. Independent of the acute treatment, R. dalmatina moved more and faster than R. latastei and the differences were larger without than with predators, demonstrating its higher plasticity. At the individual level, the two species showed qualitatively similar but quantitatively different structures of personality variation. R. dalmatina, more active, faster, and more plastic than R. latastei, showed also higher repeatability and a larger behavioral variation both among and within individuals. Predators are a major source of selection and preys have evolved the ability to flexibly respond to them. These responses often vary among species, because of their different evolutionary histories, and among individuals, because of their different experiences. We analyzed both these sources of behavioral variation in two closely related brown frogs, Rana dalmatina and R. latastei. We raised tadpoles either with or without predators and tested them in open field trials both with and without predators. The effects of the raising environment were similar in the two species, whereas the effects of the testing arena differed. Both species decreased activity and increased speed and sinuosity with predators, but R. dalmatina moved always more and faster than R. latastei and it showed higher plasticity, larger variation among and within individuals, and relatively higher values in repeatability.
Introduction
Behaviors are labile phenotypic traits that individuals can flexibly vary, often over short temporal scales. Historically, in the attempt to unravel the causes of behavioral evolution, behavioral ecologists have focused on individual means, considering among-individual variation as the raw material for natural selection, and within-individual variation as a noising factor to be controlled either experimentally or statistically (Wilson 1998). In the last two decades, however, behavioral ecology has been witnessing a shift, with an explosion of studies that take directly in account within-individual variation (Stamps and Groothuis 2010;Wolf and Weissing 2012). These studies measure the behaviors of several individuals, multiple times under different environmental conditions, and, with statistical methods derived from quantitative genetics (Roff 1997;Brommer 2013), they decompose the total behavioral variation in its three main components: variation among individuals ( V I ), variation among environments ( V E ), and variation due to the interaction between individuals and environments ( V I×E ). The first component is called "animal personality" (Réale et al. 2007;Dingemanse et al. 2010) and is mathematically defined in terms of repeatability (Nakagawa and Schielzeth 2010). The second and third components describe individual plasticity (or "contextual plasticity," Stamps 2016), considering, respectively, variation in the average behavior of an individual in different contexts (i.e., "individual plasticity" sensu stricto), and among-individual variation in their plastic response (Dingemanse and Dochtermann 2013;Stamps 2016;Houslay et al. 2018).
All three components of behavioral variation are important to understand the ecology and the evolution of a species (Roche et al. 2016). From the one hand, plasticity allows individuals to respond to changes in the external conditions on short timescales and it may influence the ecological success of a species (Réale et al. 2007Wolf and Weissing 2012). From the other hand, variation in personality and in individual plasticity influences the strength of selection and the evolution of a species (Réale et al. 2007Wolf and Weissing 2012). In this case, however, the effects depend on the heritability of flexible behavior, because genetically identical individuals might develop a different personality and a different plasticity if they have experienced different environments (Urszan et al. 2018;. For example, in many anurans, tadpoles' feeding activity tends to decrease in the presence of predators (Relyea 2001;Van Buskirk 2001;, although this does not necessarily result in a decrease in the amount of food ingested (Steiner 2007). This flexible response is adaptive, because, as predation risk increases, the costs of moving may increase much more than its benefits (Van Buskirk and McCollum 2000); and it is largely innate, because the decrease in activity is shown to be independent of tadpoles' experiences with predators. The response to predators, however, is not fully immune to experience. In the Italian tree frog, Hyla intermedia, for example, tadpoles raised with predators were always less active (shier) and less plastic than their siblings raised without predators, providing evidence for environmental effects on the development of tadpole personality ). In the wood frog, Lithobates sylvaticus, tadpoles raised with predators were able to learn new predator cues more effectively and retain their memory for longer than their conspecifics raised free of predators (Ferrari 2014).
If variation in personality and individual plasticity affected the adaptive evolution of a species, then differences in personality and plasticity between closely related species might provide important insights into their adaptive meaning. This comparative approach has been rarely adopted in studies of animal personality (Michelangeli et al. 2020;White et al. 2020) and most comparative studies on behavioral plasticity were conducted at the individual-mean level. For example, in a seminal work on predator-induced behavioral plasticity in tadpoles of two North-American frogs, Relyea (2000) showed that, in both species, the proportion of active tadpoles decreased with predators and that the decrease differed between species. In this way, he provided convincing evidence for adaptive plasticity within species and for adaptive differences in plasticity between species. However, since this study described plasticity at the population level only, it could not explain whether these differences arose as the effect of variation in either personality, plasticity, or both.
In this paper, we compare, from an individual perspective, the anti-predator behavior of tadpoles of two closely related brown frogs, Rana dalmatina and Rana latastei. Several aspects of the ecology and the evolutionary history of these species make them a suitable model for this type of studies. R. dalmatina and R. latastei are sister species (Veith et al. 2003;Yuan et al. 2016). They show low genetic variation, which suggests that they survived the Pleistocene glaciations in single refugia in southern Europe (Ficetola et al. 2007;Vences et al. 2013). Post-Pleistocene expansions, however, has had markedly different effects on the two species. Despite their similar ecology, R. dalmatina has succeeded in colonizing the low-plain territories of much of Central and Western Europe, whereas R. latastei has survived only in a restricted area in Northern Italy (Sillero et al. 2014). In a previous study (Castellano et al. 2022), we provided evidence that these differences in the distribution range might explain differences in the plastic behavioral response to heterospecific presence or cues. In fact, R. latastei, which is sympatric to R. dalmatina in most of its range, sensibly increased activity in the presence of the other species, whereas R. dalmatina, which is sympatric to R. latastei only in the periphery of its range, did not. We suggested as a plausible explanation the source-sink hypothesis (Kirkpatrick and Barton 1997;Galipaud and Kokko 2020), according to which local adaptations in the periphery is prevented by gene flow from central regions of the species' range. In both species, we found evidence for animal personality, but no evidence that the plastic responses differed among individuals (i.e., no evidence for a significant V I×E ). Indeed, we found a good correspondence between the patterns of behavioral variation at the individual and species levels.
In the present study, we continue this line of research and show results of an experiment on anti-predator behavior, in which tadpoles of the two species have been raised either with or without predators (fish or dragonfly larvae) and repeatedly tested in open-field trials in the presence of a fish lure, a caged dragonfly larva, or an empty cage. We analyze their behavior at both the species and the individual levels. At the species level, we look for differences that are either consistent or variable across time and contexts (type of predators). At the individual level, we analyze how predators affect tadpoles' personality and individual plasticity. Specifically, we ask four main questions and make the following predictions: (i) How do tadpoles flexibly change their behavior with predators? Our adaptive hypothesis predicts tadpoles to plastically adjust their behavior to reduce predation risk. (ii) Do these changes depend on experience, that is, on the environment where tadpoles are raised? Our hypothesis predicts tadpoles raised with predators to behave more cautiously than those raised without predators, independent of the context. (iii) Do these changes differ between species? As mentioned above, our previous experiment showed that R. latastei did respond plastically to the presence of heterospecific competitors, whereas R. dalmatina did not. If we observe a similar pattern in response to predators then we should conclude that R. dalmatina is less plastic than R. latastei, in general, and not only in response to R. latastei tadpoles. This result would weaken the source-sink hypothesis and support alternative explanations, such as the "pace-of-life" hypothesis (Castellano et al. 2022). In contrast, we predict that the antipredator plastic responses of the two species do not differ or, if they do, that R. dalmatina responds more plastically than R. latastei. (iv) Since behavioral differences between species ultimately depend on differences among individuals, are there differences in the structure of animal personality? If the two species show different developmental plasticities (see question ii), then we predict a large variation in personality and/or in individual plasticity in the more plastic species. However, we acknowledge that the raising environment is just one of many factors responsible for behavioral variation and thus, we consider this question largely explorative.
Materials and methods
On March 2, 2021, we collected eight freshly laid clutches (Gosner stages 1-3) in two breeding sites located in Special Areas of Conservation of the Po-river basin, in Northwestern Italy: Four clutches were of R. dalmatina and were collected in the site "Po morto di Carignano" (IT1110025 SAC). The other four were of R. latastei and were collected in the site "Confluenza Po-Varaita" (IT1160013 SAC). The clutches were transported to our field research station and placed outdoor, in separate 60-l tanks until hatching. Ten days after all clutches have hatched, on April 2, 2021, from each clutch, we haphazardly collected with a dip net 30 tadpoles and placed them, in group of 10, into plastic tanks ( 40 × 34 × 17 cm) in about 12 l of water. The 24 tanks were placed, in groups of four (two of R. dalmatina and two of R. latastei), into 6 fiberglass troughs (217 × 40 × 15 cm) (Lamar, Udine s.r.l.). All troughs were in a lawn under a shelter of 50% knitted shade cloth material, to avoid full-sun exposition.
To allow homogeneous water flow through the containers within a trough, we cut two windows (25 × 10 cm) into the large sides of the containers and sealed them with 1-mm plastic mesh. The 6 troughs were arranged in two blocks and each trough within a block replicated one of three ontogenetic treatments. One trough contained four dragonfly larvae (genus Aeshna); one contained two young specimens of the common rutt (Scardinius erythrophthalmus); and one was used as control. Each dragonfly larva was kept into a perforated plastic cage (base diameter = 15 cm), placed in the trough outside the containers but close to their windows, so that tadpoles could sense the predator presence. Dragonfly larvae were fed twice a week with small tadpoles, to produce digestionreleased alarm cues (Hettyey et al. 2015;. In the fish treatment, predators were free to swim within the trough, but without physical contact with tadpoles. Since in this treatment, predators were not fed with tadpoles, but with dried chironomids, we exposed tadpoles of this treatment to artificial alarm cues. Previously euthanized tadpoles were placed in a mortar and their body grinded to a paste, which was suspended in water. Small (0.5 cm 3 ) pieces of synthetic sponge, soaked with this suspension, were placed in the tadpole tanks and replaced twice a week. On April 28, when tadpoles reached Gosner stages 26-27, from each tank, we haphazardly chose four tadpoles, which were transferred into separate, smaller containers ( 33.5 × 19 × 12 cm) in 5.5 l of water, and raised individually to keep track of their identity. The 96 containers were arranged into four blocks, each with three troughs, one used as a control and two assigned to the predator treatments, as described above; and each container within a trough hosted a single tadpole of one of the eight families. To allow homogeneous water flow, the containers were provided with two windows ( 25 × 10 cm) cut into their larger side and sealed with 1-mm plastic mesh. All tadpoles were fed fish vegetable flakes ad libitum until the end of the experiment, when they were returned to their native ponds.
On May 4, we started the video-recording trials, which terminated on May 15, for a total of 9 daily recording sessions. In a daily session, we carried out six trials, and in a trial we simultaneously recorded the activity of 16 tadpoles, so that all 96 tadpoles were tested once in a daily session. We used 16 arena tanks ( 60 × 40 × 15 cm), half filled with well water. Eighty centimeters above each tank, we placed a Raspberry 3 single board B3 + computer with a Raspberry Pi v2.1 8 MP camera. The Raspberries were connected via internet to a laptop computer, which used a custom-designed software written in Python 3 (https:// github. com/ olivi erfri ard/ raspb erry_ video-recor ding_ coord inator) to control for the recording activity of the 16 cameras. Tadpoles were tested under three acute treatments: (i) the empty-cage treatment (C, control treatment); (ii) the caged-dragonfly treatment (D acute treatment), and (iii) the caged-fish treatment (F acute treatment). In the predator acute treatments, tadpoles were exposed to three types of cues. The predator chemical cues were obtained by letting predators free to move inside the experimental tanks during the night before the recording session. The visual cues are known to play a role in tadpoles' anti-predator response (Hettyey et al. 2012) and were obtained by placing either a living dragonfly larva or a fish lure (i.e., a realistic ribbon perched lure used for fishing trout) inside the cage into the experimental arenas. The conspecific alarm cues were released by a small piece of synthetic sponge soaked with a suspension of smashed conspecifics (see above) placed inside the predator's cage. In both predator treatments, tadpoles were not exposed to digestion-derived cues but only to conspecific alarm cues. Water was not changed during a daily session.
In a recording trial, tadpoles were first let to acclimatize inside a plastic cage for about 5 min; then the cage was lifted and tadpoles were free to move. Recordings were carried out at a 1280 × 720 resolution and a 10-Hz frame rate and lasted for 40 min. The recording sessions were divided into three rounds. In the first round, at day 1, all 96 tadpoles were recorded in the C control acute treatment; at day 2, 48 tadpoles were recorded in the dragonfly acute treatment (D) and 48 in the fish acute treatment (F); at day 3, those previously tested in D were tested in F and vice versa. The same procedure was followed in the second and third rounds.
We analyzed the recorded videos with the semi-automatic tracking software DORIS v.0.0.19 (https:// github.com/ olivierfriard/DORIS), an open-source program in Python, which uses the OpenCV library for image processing and a user-friendly graphical interface (GUI) to set the input parameters of the analysis. To minimize observer bias, blinded methods were adopted during video analyses.
The DORIS program saves, for each video, a table with frame-by-frame Cartesian coordinates of the tracked objects. From the entire set of coordinates, we computed two new variables: the inter-frame speed, which is the Euclidean distance between the tadpole positions in frames f and frame f + 1, multiplied by the video frame rate; and the activity state, a binary variable that scores "1" ("moving state") if the inter-frame speed is greater than or equal to 2 cm/s and "0" ("resting state") otherwise. We used this binary variable to compute movement-bout durations. In this case, we considered a bout of movement only when the tadpole was in a moving state in at least five consecutive frames (i.e., we considered only bouts longer than 0.5 s). From these variables, we derived the eight descriptors of tadpole activity. The first three descriptors were computed on the entire sample of frames and were (i) the mean speed (mSPEED), (ii) its standard deviation (sdSPEED), and (iii) the activity index (IND), defined as the proportion of frames with tadpoles in a "moving state." The remaining five descriptors were computed on the subsample of frames that described the bouts of movements and included (iv) the number of bouts (nBOUTS), (v) their mean duration (mD_BOUT), (vi) the mean speed within a bout (mS_BOUT), (vii) the mean acceleration (mA_BOUT), and (viii) the mean change in direction (MCD_BOUT). To calculate MCD_BOUT, for each frame (with coordinates x and y), we first computed the angular direction as where i indicates the bout and f the frame within that bout. We then computed the absolute values of the differences in direction between successive frames and defined SINUOS-ITY as the mean of these differences.
where B is the total number of bouts and N i is the total number of frames within the bout i.
Statistical analyses
We used Pearson's bivariate correlation analyses to describe the pattern of associations between the eight descriptors of tadpole movements, separately in R. dalmatina and in R. latastei. Since the parameters were often highly inter-correlated, we performed a principal component analysis on the sample that included both species and used the first three components as the dependent behavioral variables in successive analyses. We carried out two series of general linear mixed effect models (Castellano et al. , 2022. In the first, we wanted to investigate the effects of the species and of both the acute and the ontogenetic treatments (first three questions, see the "Introduction" section). We thus used, as fixed factors, the predictors responsible for both inter-individual variation (the species identity and the ontogenetic treatment) and intraindividual variation (the acute treatment and trial order), whereas we included, in the random part of these models, the tadpole identity, the family, and the troughs (to account for uncontrolled differences between the experimental units). Because the "family" factor had too few replicates within species (N = 4), its effects could have not been accurately assessed for each species and/or treatment. We thus introduced this factor in the models to statistically control for it, rather than to accurately estimate its effects. In these analyses, we run the full models with all the two-and three-way interactions. Successively, we run the reduced model with all the fixed and random factors, but with only the statistically significant interactions. Visual inspection of the residuals from all models was conformed to the assumption of residual normality.
The second series of analyses was planned to answer to our fourth question (see the "Introduction" section), which focused on the structure of animal personality. To this purpose, for each behavioral variable, we performed two multivariate mixed models, separately for the two species. By adopting the trial order as a pairing criterion, we split each behavioral variable in three (one for each acute treatment) and used the resulting matrix as the set of dependent variables (Houslay et al. 2018). These models used trial order as a covariate and tadpole identity as a random factor. The ontogenetic treatment and the family factors were excluded, because of their potential effects on among-individual variation and, thus, on personality. In this second series of tests, we adopted a "character state" approach (Houslay et al. 2018), because the three acute treatments could not have been a priori aligned along an ordinal axis (Castellano et al. 2022). From these models, we measured among-individual variances and all cross-treatment correlations. Variances were used to compute behavioral repeatability, as a proxy of animal personality (Nakagawa and Schielzeth 2010). Correlations were used to test for variation in behavioral plasticity ( I × E ). Under the "character-state" approach, the null hypothesis of no variation in individual plasticity is rejected if across-treatment correlations are statistically lower than 1 and/or among-individual variances are statistically different between treatments (Mitchell and Houslay 2021).
To test for between-species differences in the amount and structure of variance, we adopted three approaches. First, for each behavioral trait and statistical parameter, we compared the posterior distributions in the two species and rejected the null hypothesis of no differences if their credible intervals (95% CI, see below) did not overlap. We adopted this over-conservative criterion to minimize the risk of a type I error, which was inevitably high due to the large number of comparisons. The other approaches aimed at testing for more general differences in the patterns of trait (co)variation. To test for between-species differences in repeatability, we carried out a paired t-test, between the posterior modes of the three behavioral traits in the three acute treatments. Finally, to test for between-species differences in the structure of the correlation matrices, we followed White et al. (2020) and calculated the main eigenvector from each correlation matrix and the amount of total variance it explained. As mentioned above, under the null hypothesis of no variation in individual plasticity, all correlations are expected to be close to 1. This means that the main eigenvector is expected to show positive coefficients and the percentage of variance to be close to 100%. Low values of variance indicate high variation in individual plasticity. Moreover, for each behavioral trait, we measured the angles between the main eigenvectors of the two correlation matrices. Angles may vary from 0 to 90°; the more similar the patterns of correlations, the lower the angles.
All mixed models were fitted using the brms package in R (Burkner 2017(Burkner , 2018, which adopts a Bayesian inference based on STAN. In all models, we used the default noninformative priors, and we run 4 chains of 4000 iterations each, with warmups of 1000 iterations. Trace and distribution of all models were checked visually for autocorrelation and sampling stationary (Faraway 2016). Rhat values were used to check for chain convergence (Burkner 2017). The posterior distributions of both fixed and random factors were used to estimate their expected values, with their 95% credible interval (CI). A fixed factor was assumed to have a statistically significant effect on the dependent variable if its credible interval did not include 0.
Results
In Table S1, we show the descriptive statistics of the eight behavioral variables in R. dalmatina and R. latastei tadpoles in the control, fish, and dragonfly acute treatments. In both species, variables are highly inter-correlated (Table S2) and the correlation coefficients of R. dalmatina regress positively against those of R. latastei (b = 0.949; SE = 0.073; P < 0.001), suggesting a similar pattern of multivariate associations among the behavioral variables in the two species. The first three principal components of the correlation matrix explained 83.7% of the total variance (Table 1). The first component is largely a size factor that describes the amount of movements (ACTIVITY): tadpoles that 98 Page 6 of 15 move more score higher on it. The second component is a shape factor that describes variation in speed and acceleration within bouts of movement (SWIFTNESS): tadpoles that swim faster score higher on it. The third component is mainly affected by MCD (SINUOSITY): tadpoles with highly twisted movement score higher on it. These components are used as dependent variables in the successive analyses with general mixed-effect models.
Factors affecting tadpoles' behavior
All three behavioral variables were affected by the acute treatments. In the presence of a caged dragonfly or a fish lure, tadpoles decreased their overall activity (Table 2 and Fig. 1) and moved faster (Table 3 and Fig. 2) with more tortuous movements (Table 4 and Fig. 3) than under the emptycage control condition.
The ontogenetic treatment showed no significant effects on sinuosity (Table 4), but it did affect swiftness (Table 3 and Fig. 2) and, marginally, activity. Independent of the acute treatment, tadpoles raised with dragonfly larvae moved faster than those raised with no predators, whereas tadpoles raised with fish moved slower, but only in the presence of a fish lure (Table 3). Tadpoles raised with fish showed a slightly higher activity than tadpoles raised either with dragonflies or without predators.
Tadpoles of the two species showed significant differences in activity and swiftness, but not in sinuosity. Independent of the acute and ontogenetic treatments, R. dalmatina tadpoles moved more (Table 2 and Fig. 1) and faster (Table 3 and Fig. 2) than R. latastei tadpoles. Moreover, these between-species differences were often context dependent. In the presence of dragonfly larvae, R. dalmatina tadpoles decreased activity and increased swiftness more than R. latastei tadpoles. In contrast, in the presence of a lure fish, the increase in swiftness was higher in R. latastei than in R. dalmatina.
All models included the recording day as a covariate and the family as a random factor. Tadpoles increased the amount (Table 2) and the sinuosity (Table 4) of their movements with time, but not their swiftness (Table 3). The family explained a not-significant portion of variation in swiftness (SD = 0.07, CI = 0.00-0.23), and a low, but significant portion of variation in both activity (SD = 0.17, CI = 0.02-0.41) and sinuosity (SD = 0.27, CI = 0.05-0.65).
Among-individual variation
To analyze individual behavioral differences, we used mixed models, separately for the two species (see the "Methods" section). In Table 5, we show the among-individual standard deviations, the residual standard deviations, and the between-treatment correlation coefficients of the three behavioral variables in the three acute treatments. The (co)variance matrices provide evidence for a significant, though weak, variation in individual plasticity. In fact, the credibility intervals of the correlation coefficients, though positive, do not include 1, and the standard deviations differ between acute treatments in at least some comparisons. Specifically, in the fish acute treatment, tadpoles of R. dalmatina showed a larger among-individual variation in activity than in the control, possibly because tadpoles raised with fish decreased their activity less than those raised without fish (see Table 4). Similarly, tadpoles of R. latastei in the presence of dragonfly predators showed larger variation in sinuosity than in the control, possibly because tadpoles raised with dragonflies tended to increase sinuosity less than those raised without dragonflies. In both cases, differences in among-individual variation and, thus, variation in individual plasticity ( V I×E ) could be interpreted as the effect of developmental plasticity.
In R. dalmatina, the residual variation in activity was significantly higher in the control than in both the fish and dragonfly acute treatments. In R. latastei, the pattern was similar, but the differences were not statistically significant. Unlike activity, in both species, residual variation in swiftness increased in the presence of predators (in particular, with dragonfly larvae), and in R. latastei, differences were statistically significant. Residual variation in swiftness was similar in the three acute treatments.
These results clearly show that the acute treatments affect both among-and within-individual behavioral variations. In Table 6, we show how they affect behavioral repeatability. In general, repeatability was low, but statistically significant in 8 out of the 9 estimates in R. dalmatina, and in 5 out of 9 estimates in R. latastei. With only one exception Fig. 1 Individual variation in the amount of movements (ACTIVITY) as a function of the three acute stimuli (C, control; F, fish; D, dragonfly larvae). In each panel, small solid circles connected by thin transparent lines indicate context-dependent individual means. Large solid circles, connected by the thick colored lines, are the sample means within species (Rana dalmatina and Rana latastei) and ontogenetic treatment (raised without predators, raised with fish, raised with dragonfly larvae). To facilitate comparisons between the control and the two predator ontogenetic treatments, the control mean values (green solid lines) are shown in all panels (activity in the control treatment), the repeatability values of R. dalmatina were higher than those of R. latastei and a paired t-test suggests that, overall, the effect of personality was stronger in R. dalmatina than in R. latastei (t = 2.597; df = 8; P = 0.032).
In Table 7, we show the leading eigenvectors of the between-treatment correlation matrices. These components explain a large portion of the total among-individual variation (range: 45.38%, 76.09%) and the estimates of their angles suggest that the correlation structure of the three behavioral traits (in particular, swiftness and sinuosity) was similar in the two species.
Discussion
In this study, we ask four main questions about the flexible anti-predator behavior of tadpoles. The first is how tadpoles plastically adjust their behavior in the presence of predator cues. In the open field tests, tadpoles of both species move less, faster, and with more tortuous movements when the arenas contained cues of predators than when it did not. Although we did not evaluate the effects of these flexible responses on predation risk, they are likely to decrease both detection and encounter rates with predators (Werner and Anholt 1993). The second question is about the effects of the rearing environment on tadpole behavior. Our hypothesis was that predators in the raising environment increased behavioral flexibility in the direction that reduces predation risk. Results support this hypothesis only in part. In fact, the rearing environment shows weak effects on tadpole activity and no effects on sinuosity, but a stronger effect on swiftness in the predicted direction. The third question is about differences between species. It is at this level of analysis that behavioral differences become more evident. Independent of the acute and ontogenetic treatments, R. dalmatina moves more and faster than R. latastei, and differences are greater when there are no predators in the recording arena than when there is a fish lure or a caged dragonfly larva. From a population point of view, our results thus suggest that R. dalmatina is behaviorally more plastic than R. latastei, with interesting evolutionary consequences that we will discuss below. Finally, the fourth question focuses on between-species differences in the pattern of individual behavioral (co) variation. Results support the expectation that the amongindividual (co)variance structure differs between species, and they provide evidence that behavioral repeatability is higher in the more plastic species, R. dalmatina, than in R. latastei. Below, we discuss in more details these results.
The observed behavioral responses to chemical and visual cues of predators are those predicted by natural selection. A reduction of activity increases survival by reducing detection probability. The effect has been observed in tadpoles of many species (Relyea 2000(Relyea , 2001Benard 2004;Gazzola et al. 2021), and it is stronger when predator cues are associated with prey-borne and/or digestion-released cues (Hettyey et al. 2015). Tadpoles are known to be able to modulate their response to different types of predators (Relyea 2001), and we found that, in both species, they responded more strongly to the presence of dragonfly larvae than of fish lures. These differences might be biologically relevant and reflect the longer co-evolutionary history shared by tadpoles and dragonfly larvae (Polo-Cavia et al. 2020) or they might be an experimental artifact. In fact, both treatments used preyalarm cues, but in the dragonfly acute treatments, they were associated with living predators, whereas in the fish acute treatments they were with predator lures, which might have been less effective in eliciting anti-predator behaviors (Hettyey et al. 2015). Independent of the amount of movements, tadpoles of both species swam faster in the presence of predators. A similar response (an increase in the number of rapid burst) was also observed in these species in response to the invasive crawfish Procambarus clarkii (Melotto et al. 2021), suggesting that it might be a general reaction to a perceived increase in predation risk rather than the response to specific predator strategies. Tadpoles, once detected, might increase their survival by fleeing not only fast, but also unpredictably, that is, by performing what is called a "protean behavior," which prevents predators to anticipate the future position (or action) of their pursuing prey (Richardson et al. 2018). A recent study on tadpoles of R. latastei (Gazzola et al. 2021) shows that path complexity increases with predators, proportionally to the intensity of perceived risk: the increase is weak if tadpoles are exposed to chemical cues of an alien predator (P. clarkii), but stronger if they are exposed to cues of dragonfly larvae. Our study confirms and extends these results to R. dalmatina. Unexpectedly, however, we find that path sinuosity increases in a similar way in the two predator treatments, although the strategy is expected to be more effective against fish, which pursue their prey, than against dragonfly larvae, which wait in ambush. As for the increase in swiftness, also the increase in path sinuosity appears to serve as a general defense against both types of predators, and we find no evidence for predator specificity responses (but see (Relyea 2001). These anti-predator responses are a form of flexible behaviors, which tadpoles perform independent of the type of the environment they previously experienced. As observed in many anurans, however, the raising environment does affect tadpoles' behaviors by modulating their flexibility. For example, in the Italian tree frog, tadpoles raised with dragonfly larvae were less active than those raised without predators, independent of the type of acute stimulus they were exposed to (control vs. caged dragonfly larvae) ). In the Neotropical tree frog Dendropsophus ebraccatus, tadpoles raised with dragonfly larvae moved consistently less than those raised without predators, but those raised with fish showed the opposite trend. In both cases, the induced changes were similar in chronic and acute treatments (Reuben and Touchon 2021). Our study provides no evidence for an effect of the ontogenetic treatment on either tadpoles' activity or sinuosity, but it shows evidence for an effect on swiftness. In fact, tadpoles raised with dragonfly larvae and, to a lesser extent, those raised with fish swim, on average, faster than tadpoles raised without predators. Since the chronic exposure to predators is known to elicit morphological changes with consequences on tadpole swimming performance (Van Buskirk and McCollum 2000;Benard 2004;Fraker et al. 2021), we cannot exclude that the increase in swiftness might be related to predator-induced changes in tadpoles' morphology rather than in the neuronal circuits controlling motor responses. However, if morphological changes are the only cause, we should expect to observe consistent changes across acute treatments. In contrast, we find that the increase in swiftness in predator-raised tadpoles is context dependent: it is high when in the arena there are no predator cues and it decreases with fish cues.
Although qualitatively similar, anti-predator behavioral responses differ quantitatively between species. Independent of the context, R. dalmatina tadpoles are more active and swim faster than R. latastei tadpoles. Similar differences were also observed in a previous study on the behavioral responses to inter-and intra-specific competitors (Castellano et al. 2022). In that study, tadpoles of R. latastei were found to stay longer at the bottom of the tank and to make shorter, more intermittent movements than R. dalmatina tadpoles, which, in contrast, tended to spend more time swimming through the water column. Since, in both experiments, tadpoles are raised in a common environment, differences are likely to be genetically based and, thus, they might reflect some fine-scale differences in the ecology and life history of the two species (Castellano et al. 2022). We suggest that the low activity and swiftness of R. latastei might be an adaptation to benthic habitats of shallow water, where insect predators are more abundant. Stronger and more predictable selection by predators might have favored the evolution of cautious, less plastic behaviors. In contrast, the higher activity and swiftness of R. dalmatina might be an adaptation to open water. In this micro-habitat, the presence of predators might be more unpredictable and selection might have favored bolder, more plastic individuals that escape predators by rapid flees rather than effective hiding places. Interestingly, similar results were found in a study by Semlitsch and Reyer (1992), which compared the anti-predator responses of tadpoles of two closely related pool frogs (Pelophylax lessonae and P. esculentus). Due to their hybridogenetic mating system (Berger 1977), the two species are forced to syntopy, but they show different genetically based anti-predator behaviors, which were interpreted as adaptations to different ecological niches within natural ponds. It is intriguing to notice that the high plasticity that characterizes the behavior, the morphology, and the life history of tadpoles seems to have had no limiting effects on natural selection to promote fine-scale adaptations to different aquatic micro-habitats.
Behavioral differences between species are context dependent. Differences in activity are larger when the arena contains no predators and decrease in the presence of fish and, even more, in the presence of caged dragonfly larvae, providing evidence that tadpoles of R. dalmatina adjust their activity more flexibly than those of R. latastei. These results contrast with those from a previous study on the plastic response to intra-and inter-specific competitors, which showed flexibility in R. latastei, but not in R. dalmatina (Castellano et al. 2022). In that study, we formulated two alternative hypotheses that interpreted differences in flexibility as either adaptive or non-adaptive. According to the adaptive hypothesis (the pace-of-life hypothesis (Wright et al. 2019)), the low sensitivity of R. dalmatina to interspecific competitors reflects a general low plasticity of the species, and it is the effect of natural selection favoring a fast life strategy which combines high metabolic rates and high activity levels with risk-prone behaviors and a general low sensitivity to environmental cues (Wright et al. 2019). According to the non-adaptive hypothesis (the source-sink hypothesis (Kirkpatrick and Barton 1997)), in contrast, the low sensitivity is evidence that natural selection failed to promote adaptive flexibility in the most peripheral populations of R. dalmatina. In fact, while most of the populations of R. latastei are sympatric to R. dalmatina, only the most peripheral populations of R. dalmatina are sympatric to R. latastei (Ficetola et al. 2007). In these peripheral populations, gene flow from the most central populations of the range might have reduced the effects of natural selection in promoting sensitivity to R. latastei competitors. The present
Declarations
Ethics approval The experiment followed ASAB (2020) guidelines for the ethical treatment of animals in behavioral research and complied with Italian national and Piedmont regional laws. Approval by ethics committee was not required. The permit to collect eggs was given by the Italian Ministry of Environment, Land and Sea (U.0031391-15.11.2019-PNM). During the experiment, 50% of water was changed twice a week, and food in excess was removed to optimize rearing conditions. Before video recording and measurement sessions, tadpoles were captured using hand nets and moved using water filled containers.
To produce alarm cues, we adopted two procedures depending on the type of predators. In the dragonfly treatment, alarm cues were produced by feeding dragonfly larvae with small tadpoles (Gosner stage 26-30) twice a week. At this feeding rate, predation occurred shortly after prey introduction, so that prey suffering was minimized. In the fish treatment, alarm cues were produced by euthanizing small tadpoles and grinding their body to a paste (see "Methods" section). For each trough, we used two tadpoles for each species twice a week. At the end of the experiment, all tadpoles were returned to the ponds where eggs had been collected.
Conflict of interest
The authors declare no competing interests.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http:// creat iveco mmons. org/ licen ses/ by/4. 0/. | 9,577.4 | 2023-08-01T00:00:00.000 | [
"Biology",
"Environmental Science"
] |
Elasticity Tensors in Nematic Liquid Crystals
Abstract: The paper is discussing the contributions to the free energy of elastic distortions in a nematic liquid crystal. These contributions are here given by tensors, which are represented by means of the components of the director, the unit vector indicating the local average alignment of molecules, and by Kronecker and Levi-Civita symbols. The paper is also discussing the elasticity of the second order and its contribution in threshold phenomena.
Introduction
The equation of Oseen-Frank, which is providing the free energy density in nematic liquid crystals [1,2], is representing the part of that energy density coming from an elastic deformation of the bulk of the material. In such approach, the terms depending on the deformations are multiplied by the three elastic constants (K11, K22, K33) of "splay", "twist" and "bend". After, to this energy density, Nehring and Saupe added two terms having elastic constants of "mixed splay-bend" (K13) and of "saddlesplay" (K24) [3] . These terms, which have K13, K24 as elastic constants, are equivalent to contributions to the surface free energy of the material, because they are divergences of some vector functions of the director. These contributions to surface energy are relevant for nematic cells subjected to weak anchoring conditions [4][5][6].
The elastic constants previously mentioned are constants that multiply some scalar functions obtained from the director, the unit vector giving the local average alignment of the molecules, and from its derivatives, in the form of rotors and divergences. In this paper, we will discuss how the terms of the free energy density in nematic liquid crystals can be described by tensors, and how these tensors can be represented using the director components and the Kronecker and Levi-Civita symbols. This paper will also discuss the terms of the second order and their role in threshold phenomena [7,8].
The elastic deformation of a nematic
In an ideal nematic, the molecules are oriented, in average, along the director, the macroscopic unit vector n [9]. This alignment of molecules along a common direction is proper of the nematic phase, which, in this manner, is characterized by an orientational order. This order disappears in the isotropic phase. To describe the nematic phase, a tensor order parameter is given as: In (1), i and j are indices of the Cartesian frame of reference; ni are the director components and δij the Kronecker symbol. Scalar Q depends on the temperature and goes to zero in the isotropic phase. Tensor Qij can be a function of the position vector r. Let us remember that in the macroscopic approach of the continuum mechanics, this position vector does not describe the positions of the single microscopic molecules.
If the order parameter is depending on position r, then we need to assume this dependence through the director as follow: r r r (2) In the continuum approach, the deformation of the bulk of the material is described by means of three fundamental deformations [9]. They are the deformations of splay, twist and bend. The features of them are the following. In the splay, we have div n ≠ 0. In the bend, we have that rot n is perpendicular to n. Then, in the twist deformation, rot n is parallel to n.
Oseen and Frank demonstrated that the density of the free energy given by distortions in the second order of n is represented by the three deformations: In (3), K1 is the elastic constant of splay, K2 of twist and K3 of bend. Since it is possible to have deformations of pure splay, or of twist or bend, each of these elastic constants must be positive. If it were not so, the undistorted deformation would not be that having the minimum energy.
To determine the contribution to the free energy coming from the elastic deformation, Oseen and Frank used the following approach [9]. Let us assume the z-axis along the direction of director n, the x-axis perpendicular to the director and y-axis perpendicular to x-and z-axes, oriented according the right-hand rule.
We can distinguish six elemental distortions, linked to the director variation. Therefore, the deformations of splay, bend and twist can be described by means of six derivatives: For small deformations, we have: z a y a x a n z a y a x a n n y x z 6 5 4 Then, we can write the density of the free energy coming from the distortion of the nematic according to the Hooke law, as a quadratic function of the deformation: In (8), there are six elastic parameters Ki and 36 elastic parameters Kij. Anyway, using symmetries these numbers of parameter are reduced. For instance, the cylindrical symmetry of the nematic liquid crystal implies invariance for rotation about z-axis. So the energy is invariant during such rotation. Then, because of invariance for transformation x'= y, y'=x, z'= z, we have only two independent Ki parameters (K1, K2) and five independent Kij parameters (K11 ,K22 , K33, K 24, K 12 ).
Moreover, we have the condition n = -n, which is telling that the director direction does not influence the features of the medium. In this case, it is necessary that transformation z' =-z, y' =-y, x' =-x, does not change the deformation of the threedimensional structure. So we need K1 and K13 being zero. Also the mirror symmetry exists, and then, according to transformation x' = x, y' = -y, z' = z, we find K2 and K12 being zero.
In this manner, in the elastic energy density of nematics we have just four coefficients: K11, K22, K33 and K24 . However, the term multiplied by coefficient K24 is not proper of the bulk energy, because it is a contribution to the surface energy of the material [4]. Consequently, the Frank equation for the density of the elastic energy is given by the following elemental deformations [ In fact, squaring the second term, the contribution of (K2 /K22) 2 is not relevant, because it does not depend on the deformation. The ratio q0 = K2 /K22 is the modulus of the wave-vector of the cholesteric helical structure, having pitch P0 =2π/q0. In the case 0 2 K , the equilibrium configuration of the cholesteric nematic is that having a spontaneous twisted deformation.
ni,j = ∂ni / ∂xj
Following the Oseen -Frank approach, let us being more general. If the director n does not depend on position, the liquid crystal is undistorted and the free energy density is supposed equal to a certain quantity represented by f0, which is a quantity that does not change if the liquid crystals is subjected to a deformation.
If n = n(r), the nematic is deformed. Let f being the density of elastic energy which is created in the material. We have then that ni,j=∂ni /∂xj are different from zero. Let us suppose that these derivatives of n are enough for describing the distorted nematic, and then: If these derivatives are small, it is possible to consider a series in ni,j, so that: In (129, the components of tensors E and K are given by: ; (12') These quantities are evaluated with respect the undistorted configuration, which is also that having the minimum energy. In (12), we have used Einstein notation which implies summation on a set of indices; Let us write E and K, using combinations made by n, and the Kronecker δij and Levi-Civita εijk symbols.
Since in a nematic, n and ─n are equivalent, each term in (2) must be even in n. Let us remember that, for what is concerning the Kronecker symbol, it is Let us consider tensor E. We can represent its components as: But E1 = E2 = 0, because the nematic is not polar. Moreover: Elasticity Tensors in Nematic Liquid Crystals http://www.ijSciences.com Volume 5 -July 2016 (07) This is a pseudoscalar, being the nematic helicity [11,12]. Since the energy is a scalar, n n rot can be present in it, when this contribution is squared or if the nematic is a cholesteric nematic. In this case, the material is spontaneously showing a deformed configuration of the fundamental state having the minimum energy. For tensor K we have Kijkl=Kklij, because it is present in the term ; so we can write: In (16), we have: We used the fact that the director is a unit vector and then the derivative of its modulus is zero. (II) The surviving terms which are contributing to the elastic energy are: We have [7]: We can change the symbols for the coefficients, to see that this expression is that of the free energy density proposed by Oseen -Frank, with the saddle-splay term too.
As previously told, K11, K22, K33 e K24 are the elastic constants of splay, twist, bend, and saddle-splay. The last term in (21), if we consider the Gauss theorem, is a contribution to the surface energy. Then, the density energy of the bulk is depending on the three elastic constants of splay, twist and bend. If we consider also the helicity, the free energy becomes: If we have not splay and bend, and only twist is present, Equation (22) is minimized by: Then, in the case we have helicity (E ≠ 0), the configuration of the director which is minimizing the energy is a distorted one. As previously told, this happens in the cholesteric liquid crystals. From now on, we assume a nematic having E = 0.
The analysis of Nehring and Saupe
Let n be the director and ni,j , ni,jk the derivatives of first and second order. Let us suppose a free energy density depending on these derivatives Being tensor L multiplying derivatives of the second order, it must be: As we have done before, let us represent the components of tensors L, M and N, using n, δij and εijk. Let us start from L and determine its contribution to energy. Being n and ─n equivalent, the terms different from zero are: Then: We have already found the first three terms of (28), when we have discussed the tensor K. These terms can be added to those previously given. But in (28) we have a new term which is contributing to the surface term, like that having as elastic constant K24. This term is usually written with the constant K13, defined as the elastic constant of splaybend [13,14]. Supposing M = 0 and N = 0 , (24) becomes: Therefore, after renormalizing the constants: This is the free energy density of Nehring-Saupe [3]. However, we have also the terms coming from tensors M and N , to consider as sources of elastic constants, it they are different from zero. To dealt with them we need a second order analysis.
Second order analysis
We have seen in the previous section, that the free energy density, as given by Nehring and Saupe, including the term with elastic constant K13, is originated from tensors K and L. However, other terms exist that we have not yet discussed. To analyse them let us follow the approach given in [7]. In this reference, the free energy density is supposed a function of deformations as: That is, we assume as deformation sources, the firstand second-order derivatives of the director, generalizing the approach of the previous section.
The virtual variation δf of the density of the free energy f , close to equilibrium, can be described to the second order as: In (31), we find the variations of the first-and second-order derivatives of director n. In the linear theory of elasticity, in (31) we have to consider just the derivatives of the first-order, neglecting those of higher order. If the second-order is involved too, tensors λ, μ must be expanded in terms of the sources of deformation, ni,j and ni,jk. Therefore : mp l jk i ijklmp r q p m l k j i ijklmpqr q p lm k j i ijklmpq l k j i ijkl jk i ijk n n N n n n n H n n n D n n A n f , , , , , Let us expand tensors in (35'), using (ni,δij,εijk). After expanding the tensors, the terms that survive are (given in covariant form, that in using divergence and rotor): If we have only ni,j as source of deformation, (35') becomes: In this case, the term with K13 does not appear, in agreement with the elastic theory of the first order as given by Oseen-Frank. Therefore, if we consider K13, and also K24, we can ask ourselves what are the other terms we need to consider too, to have the expression of f coherent to the second order approach given in [7]. In we assume (36a) and (36b), we have the wellknown splay, twist and bend distortions coming from terms due to tensors μ° and A. However, we have many terms from tensors N, D and H (many of them are null because of the parity of n and pseudoscalarity of (n · rot n)). Let us note also that some terms are coming from both N and D, or from D and H, or from the three tensors at the same time.
In the three Tables I-III given after the References we show the contributions to the density of the free energy of a nematic liquid crystal coming from N, D and H. In the first of the three tables, the terms are coming from N. In the second table we have contributions coming from D, and those marked by * are shared with N. In the third table we have the contributions to the density of the free energy coming from H (terms marked by * are shared with D, whereas those marked by ** are shared with N and D).
Second-order elasticity close to a threshold
If we use for a nematic liquid crystal the theory of the first-order elasticity, we have only three constants to deal with. Whereas, if we use the theory of the second order, we have a quite larger number of elastic constant. Then, the use of such theory turns out to be unworkable. In fact, the elasticity of the second order can become quite simple, when it is involved in a threshold phenomenon, when the nematic is changing its configuration from an undistorted to a distorted one.
Let us remember that the most common experimental condition to study the nematic is that of creating a cell composed by two plane-parallel glass slides, spaced a few microns, in which the nematic is inserted by capillarity. The two inner surfaces of the cell may have the same or different treatments, in order to induce the desired nematic configuration (planar, homeotropic or hybrid). The cell can be placed in a thermostat with appropriate optical windows for the observation under polarized microscopy of the phases or of the material configurations. Let us consider a flat deformation of the nematic, that is, one which is obtained for example in a cell with opposite conditions, homeotropic on one of the walls and planar on to the other (this is the so-called hybrid HAN-cell. We can use a frame of reference (x,z) with the origin on the wall with homeotropic anchoring, the x-axis parallel to that wall, and the z-axis perpendicular to it. Locally, the director n is given by: In (37), Ө(z) is the angle shown in the Figure 1.
Vectors i and k are the unit vectors of axes.
If we imagine to decrease the cell thickness d, the deformed configuration becomes unstable and the cell, if we suppose that the planar anchoring is stronger than the homeotropic one, assumes the planar configuration. Or, if we assume the homeotropic anchoring stronger, the cell assumes the homeotropic configuration. The transition that occurs in the HAN cell, when we are reducing the thickness of the liquid crystal, is from a distorted configuration to the undistorted planar (P) or homeotropic (H) configuration (see Figure 2). In order to describe the occurrence of a limit for the mechanical stability of the HAN cell, due to the decrease of the cell thickness d, the free energy density must be given as a function of angle Ө. Close the threshold of the passage from the HAN configuration to the undistorted P or H configurations, we have Ө changing its value. However, the value of this angle depends on the positon of the director in the cell, that is, on z. So let us take as the parameter for studying the problem, the maximum angle assumed by the director with respect to z-axis and call it Өmax. If the planar anchoring is strong, this angle can also be of 90°. In fact, we can think the angle Ө as a function of z, and therefore as a function Ө = Өmax g(z/d).
Now, let us consider some terms from Tables I-III, and is of the second order in Өmax, whereas the other contributions vanish more rapidly, when Өmax goes to zero. As a consequence, we have an additional term to the free energy density which can be written as: In the framework of the theory of elasticity here discussed, only K* is the new elastic constant of the second order which survives in a threshold phenomenon. In this case, the free energy density becomes: Let us note that the first five elastic constants have dimensions of energy divided by length; the elastic constant K* is an energy times a length. Let us evaluate its order as made for (4) Tables I-III: These three Tables are giving the contributions to the density of the free energy of a nematic liquid crystal coming from tensors N, D and H, respectively. In the second Table, contributions coming from D marked by * are those shared with N. In the third Table, contributions to the free energy density coming from H, marked by *, are shared with D, whereas those marked by ** are shared with N and D. | 4,193 | 2016-08-03T00:00:00.000 | [
"Physics"
] |
Construction of nonuniform periodic wavelet frames on non-Archimedean fields
In real life applications not all signals are obtained by uniform shifts; so there is a natural question regarding analysis and decompositions of these types of signals by a stable mathematical tool. Gabardo and Nashed, and Gabardo and Yu filled this gap by the concept of nonuniform multiresolution analysis and nonuniform wavelets based on the theory of spectral pairs for which the associated translation set Λ = {0, r/N}+ 2Z is no longer a discrete subgroup of R but a spectrum associated with a certain one-dimensional spectral pair and the associated dilation is an even positive integer related to the given spectral pair. In this paper, we introduce a notion of nonuniform periodic wavelet frame on non-Archimedean field. Using the Fourier transform technique and the unitary extension principle, we propose an approach for the construction of nonuniform periodic wavelet frames on non-Archimedean fields.
Introduction. The notion of frames was first introduced by Duffin and
Shaeffer [10] in connection with some deep problems in nonharmonic Fourier series and more particularly, with the question of determining when a family of exponentials e iαnt : n ∈ Z is complete for L 2 [a, b]. Frames did not generate much interest outside nonharmonic Fourier series until the seminal work by Daubechies et al. [9]. After their pioneer work, the theory of frames began to be studied widely and deeply, particularly in the more specialized context of wavelet frames and Gabor frames. Frames provide a useful model to obtain signal decompositions in cases where redundancy, robustness, oversampling and irregular sampling occur.
Today, the theory of frames has become an interesting and fruitful field of mathematics with abundant applications in signal processing, image processing, harmonic analysis, Banach space theory, sampling theory, wireless sensor networks, optics, filter banks, quantum computing, and medicine. An important example of frames is a wavelet frame, which is obtained by translating and dilating a finite family of functions. One of the most useful methods to construct wavelet frames is through the concept of unitary extension principle (UEP) introduced by Ron and Shen [17] and subsequently extended by Daubechies et al. [8] in the form of the Oblique Extension Principle (OEP). They gave sufficient conditions for constructing tight and dual wavelet frames for any given refinable function φ(x) which generates a multiresolution analysis. Gabardo and Nashed [11], and Gabardo and Yu [12] introduced the notion of nonuniform multiresolution analysis and nonuniform wavelets based on the theory of spectral pairs for which the associated translation set Λ = {0, r/N } + 2 Z is no longer a discrete subgroup of R but a spectrum associated with a certain one-dimensional spectral pair and the associated dilation is an even positive integer related to the given spectral pair.
In recent years, there has been a considerable interest in the problem of constructing periodic wavelet bases and frames in Hilbert spaces as most of the signals of practical interest are periodic in nature. Apart from signals that are inherently periodic, all signals resulting from experiments with a finite duration can in principle be modeled as periodic signals [14]. The setup of tight wavelet frames provides great flexibility in approximating and representing periodic functions. Using periodization techniques, Zhang [25] constructed a dual pair of periodic wavelet frames for L 2 [0, 1] under the assumption that the support of the wavelet function ψ in the frequency domain is contained in [−π, −ε] ∪ [ε, π], ε > 0. Zhang and Saito [26] have constructed general periodic wavelet frames for L 2 [0, 1] using extension principles. Later on, Lu and Li [15] constructed periodic wavelet frames with dilation matrix.
On the other hand, the past decade has also witnessed a tremendous interest in the problem of constructing wavelet bases and frames on various spaces other than R. For example, R. L. Benedetto and J. J. Benedetto [6] developed a wavelet theory for local fields and related groups. They did not develop the multiresolution analysis (MRA) approach, their method is based on the theory of wavelet sets and only allows the construction of wavelet functions whose Fourier transforms are characteristic functions of some sets. Jiang et al. [13] pointed out a method for constructing orthogonal wavelets on a local field K with a constant generating sequence and derived necessary and sufficient conditions for a solution of the refinement equation to generate a multiresolution analysis of L 2 (K). Subsequently, tight wavelet frames on local fields of the positive characteristic were constructed by Shah and Debnath [23] using extension principles. In the series of papers [1,2,3,4,5,19,20,21,22], we have obtained various results related to wavelet and Gabor frames on non-Archimedean local fields.
Drawing inspiration from the above work, our aim is to extend the notion of wavelet frames to nonuniform periodic wavelet frames on non-Archimedean fields via extension principles. More precisely, we prove that under some mild conditions, the periodization of any nonuniform wavelet frame constructed by the unitary extension principle is a nonuniform periodic wavelet frame on non-Archimedean fields.
The layout of this paper is as follows. In Section 2, we discuss some preliminary facts about non-Archimedean fields and also some results which are required in the subsequent sections. Sections 3 is devoted to our main results about nonuniform periodic wavelet frames.
Preliminaries and nonuniform periodic wavelet system on non-Archimedean fields.
A non-Archimedean field K is a locally compact, nondiscrete and totally disconnected field. If it is of the characteristic zero, then it is a field of p-adic numbers Q p or its finite extension. If K is of the positive characteristic, then K is a field of formal Laurent series over a finite field GF (p c ). If c = 1, it is a p-series field, while for c = 1, it is an algebraic extension of degree c of a p-series field. Let K be a fixed non-Archimedean field with the ring of integers D = {x ∈ K : |x| ≤ 1}. Since K + is a locally compact Abelian group, we choose a Haar measure dx for K + . The field K is a locally compact, nontrivial, totally disconnected and complete topological field endowed with non-Archimedean norm | · | : K → R + satisfying (a) |x| = 0 if and only if x = 0; (b) |x y| = |x||y| for all x, y ∈ K; (c) |x + y| ≤ max {|x|, |y|} for all x, y ∈ K.
Property (c) is called the ultrametric inequality. Let B = {x ∈ K : |x| < 1} be the prime ideal of the ring of integers D in K. Then, the residue space D/B is isomorphic to a finite field GF (q), where q = p c for some prime p and c ∈ N. Since K is totally disconnected and B is both prime and principal ideal, there exists a prime element p of K such that B = p = pD. Let D * = D \ B = {x ∈ K : |x| = 1}. Clearly, D * is a group of units in K * and if x = 0, then we can write x = p n y, y ∈ D * . Moreover, if U = {a m : m = 0, 1, . . . , q − 1} denotes the fixed full set of coset representatives of B in D, then every element x ∈ K can be expressed uniquely as Recall that B is compact and open, so each fractional ideal B k = p k D = x ∈ K : |x| < q −k is also compact and open and is a subgroup of K + . We use the notation from Taibleson's book [13]. In the rest of this paper, we use the symbols N, N 0 and Z to denote the sets of natural, nonnegative integers and integers, respectively.
Let χ be a fixed character on K + that is trivial on D but nontrivial on B −1 . Therefore, χ is constant on cosets of D, so if y ∈ B k , then χ y (x) = χ(y, x), x ∈ K. Suppose that χ u is any character on K + , then the restriction χ u |D is a character on D. Moreover, as characters on D, then, as it was proved in [13], the set χ u(n) : n ∈ N 0 of distinct characters on D is a complete orthonormal system on D.
We now impose a natural order on the sequence {u(n)} ∞ n=0 . We have This defines u(n) for all n ∈ N 0 . In general, it is not true that u(m + n) = u(m) + u(n). But, if r, k ∈ N 0 and 0 ≤ s < q k , then u(rq k + s) = u(r)p −k + u(s). Further, it is also easy to verify that u(n) = 0 if and only if n = 0 and {u( ) + u(k) : k ∈ N 0 } = {u(k) : k ∈ N 0 } for a fixed ∈ N 0 . Hereafter we use the notation χ n = χ u(n) , n ≥ 0.
Let the local field K be of the characteristic p > 0 and ζ 0 , ζ 1 , ζ 2 , . . . , ζ c−1 be as above. We define a character χ on K as follows: The Fourier transform of f ∈ L 1 (K) is denoted byf (ξ) and defined by It is noted that The properties of Fourier transforms on non-Archimedean field K are much similar to those of on the classical field R. In fact, the Fourier transform on non-Archimedean fields of the positive characteristic have the following properties: • The map f →f is a bounded linear transformation of , then we define the Fourier coefficients of f as The series n∈N 0f u(n) χ u(n) (x) is called the Fourier series of f . From the standard L 2 -theory for compact Abelian groups, we conclude that the Fourier series of f converges to f in L 2 (D) and Parseval's identity holds: We also denote the test function space on K by Ω(K), that is, each function f in Ω(K) is a finite linear combination of functions of the form 1 k (x − h), h ∈ K, k ∈ Z, where 1 k is the characteristic function of B k . This class of functions can also be described in the following way. A function g ∈ Ω(K) if and only if there exist integers k, such that g is constant on cosets of B k and is supported on B . It follows that Ω is closed under the Fourier transform and is an algebra of continuous functions with compact supports, which is dense in C 0 (K) as well as in L p (K), 1 ≤ p < ∞. For more details we refer to [16,24].
For an integer N ≥ 1 and an odd integer r with 1 ≤ r ≤ qN − 1 such that r and N are relatively prime, we define where Z = {u(n) : n ∈ N 0 }. It is easy to verify that Λ is not a group on non-Archimedean field K, but is the union of Z and a translate of Z. As in the standard scheme, one expects existence of qN − 1 functions so that their translation by elements of Λ and dilations by the integral powers of p −1 N form an orthonormal basis for L 2 (K).
For j ∈ N 0 let N j denote a full collection of coset representatives of Λ/(qN ) j Λ, i.e., Then, Λ = n∈N j n + (qN ) j Λ and for any distinct n 1 , n 2 ∈ N j , we have n 1 + (qN ) j Λ ∩ n 2 + (qN ) j Λ = ∅. Thus, every nonnegative integer k can uniquely be written as k = r(qN ) j + s, where r ∈ Λ, s ∈ N j . Further, a bounded function W : K → K is said to be a radial decreasing Let a and b be any two fixed elements in K. Then, for any prime p and m, n ∈ N 0 , let D p , T u(n)a and E u(m)b be the unitary operators acting on f ∈ L 2 (K) defined by: For given Ψ := {ψ 1 , . . . , ψ qN −1 } ⊂ L 2 (K), define the nonuniform wavelet system: The nonuniform wavelet system W(Ψ, λ) is called a nonuniform wavelet frame, if there exist positive numbers 0 < A ≤ B < ∞ such that for all f ∈ L 2 (K) The largest A and the smallest B for which (2.8) holds are called nonuniform wavelet frame bounds. A wavelet frame is a tight non uniform wavelet frame if A and B are chosen such that A = B and then the generators {ψ 1 , ψ 2 , . . . , ψ qN −1 } are often referred to as tight nonuniform framelets. If only the right-hand inequality in (2.8) holds, then W(Ψ, λ) is called a Bessel sequence.
Next, we give a brief account of the MRA-based wavelet frames generated by the wavelet masks on non-Archimedean local fields. Following the unitary extension principle, one often starts with a refinable function or even with a refinement mask to construct desired wavelet frames. A function ϕ ∈ L 2 (K) is called a nonuniform refinable function, if it satisfies an equation of the type where a λ : λ ∈ Λ ∈ l 2 (N 0 ). In the frequency domain, equation (2.9) can be written as is an integral periodic function in L 2 (D) and is often called the refinement mask of ϕ. Observe that χ k (0) =φ(0) = 1. By letting ξ = 0 in equations (2.10) and (2.11), we obtain λ∈Λ a λ = 1. The so-called unitary extension principle (UEP) provides a sufficient condition on Ψ = {ψ 1 , ψ 2 . . . . , ψ qN −1 } such that the nonuniform wavelet system W(Ψ, λ) given by (2.7) constitutes a tight frame for L 2 (K). It is well known that in order to apply the UEP to derive wavelet tight frame from a given refinable function, the corresponding refinement mask must satisfy In this connection, Shah and Debnath [23] gave an explicit construction scheme for the construction of tight wavelet frames on local fields of the positive characteristic using unitary extension principles. The following is the fundamental tool they gave to construct tight wavelet frames on local fields. Theorem 2.1. Let ϕ(x) be a compactly supported refinable function and φ(0) = 1. Then, the nonuniform wavelet system W(Ψ, λ) given by (2.7) constitutes a normalized tight wavelet frame in L 2 (K) provided the matrix M(ξ) defined in (2.15) satisfies where σ(V 0 ) := ξ ∈ D : λ∈Λ |φ ξ + λ | 2 = 0 .
Nonuniform periodic wavelet frames on non-Archimedean fields.
In this section, we present an approach for constructing nonuniform periodic wavelet frames on non-Archimedean fields by virtue of the unitary extension principle (UEP). For any f ∈ L 1 (K), we define the periodic version of f as It is easy to see that f per is well defined and it is an N-periodic local integrable function. With same dilation and translation operators as defined in Section 2, we define the nonuniform periodic wavelet system as (3.1) W per (Ψ, λ) := ϕ per , ψ per ,j,λ : 1 ≤ ≤ qN − 1, j ∈ N 0 , λ ∈ N j . In order to establish the main result of this section, we first state and prove the following lemmas. Lemma 3.1. Suppose that the nonuniform periodic wavelet system W per (Ψ, λ) is defined by (3.1). Then, for any periodic function f and given ε > 0, there exists a positive integer J ∈ N such that Proof. Let Γ denote the support of the Fourier coefficients f u r N r∈N 0 . Then, we have Assume that The Fourier coefficients in the above expression can be written as Parseval's formula of the above Fourier series gives . Since Γ is a finite set, there exists a positive number N such that E ⊆ D(N ) := λ ∈ Λ : |λ| ≤ N . Hence, there exists J 1 ≥ 0 such that for all j ≥ J 1 , the elements of D(N ) lie in different cosets of Λ/(qN ) j Λ. Thus, the cardinality of Γ ∩ (λ + (qN ) j Λ) is at most one for each j ≥ J 1 , λ ∈ N j . Consequently, we have Sinceφ(0) = lim ξ→0φ (ξ) = 1, therefore there exists a nonnegative integer J 2 such that Let J = max {J 1 , J 2 }, then with this choice of j ≥ J, we obtain which implies that This completes the proof of Lemma 3.1.
Lemma 3.2.
Let ϕ defined by (2.11) be the nonuniform refinable function with m 0 (ξ) as its refinement mask and let m (ξ), 1 ≤ ≤ qN − 1, be the wavelet masks. The nonuniform wavelet system W(Ψ, λ) given by (2.7) form a normalized tight frame for L 2 (K). Then, for any function f ∈ L 2 (K) we have Proof. For any f ∈ L 2 (K) and j ∈ N 0 , define the linear operators P j and Q j as: Since Ω(K) is dense in L 2 (K) and closed under the Fourier transform, it is sufficient to prove that holds for all the functions f in Ω(K). Therefore, for all f ∈ Ω(K) and j ∈ Z, k ∈ N 0 , using Parseval's identity, we obtain (3.8) Since m 0 (ξ) is an integral-periodic function, equation (3.8) yields Proceeding in the similar manner as above, we can obtain Therefore, we have The unitary extension principle condition (2.17) is equivalent to Therefore, we have and hence, we get the desired result (3.5).
Now we state and prove the main result of this section.
Proof.
For any periodic function f ∈ Ω(D) and ε > 0, we can choose J > 0 by Lemma 3.1 such that for all j > J, we have Also for any j ∈ Z, Lemma 3.3 implies that Repeating the above argument on Letting j → ∞, we obtain This completes the proof of Theorem 3.1. | 4,394.6 | 2020-12-28T00:00:00.000 | [
"Mathematics"
] |
Resonant Cavity Antennas for 5G Communication Systems: A Review
Resonant cavity antennas (RCAs) are suitable candidates to achieve high-directivity with a low-cost and easy fabrication process. The stable functionality of the RCAs over different frequency bands, as well as, their pattern reconfigurability make them an attractive antenna structure for the next generation wireless communication systems, i.e., fifth generation (5G). The variety of designs and analytical techniques regarding the main radiator and partially reflective surface (PRS) configurations allow dramatic progress and advances in the area of RCAs. Adding different functionalities in a single structure by using additional layers is another appealing feature of the RCA structures, which has opened the various fields of studies toward 5G applications. This paper reviews the recent advances on the RCAs along with the analytical methods, and various capabilities that make them suitable to be used in 5G communication systems. To discuss different capabilities of RCA structures, some applicable fields of studies are followed in different sections of this paper. To indicate different techniques in achieving various capabilities, some recent state-of-the-art designs are demonstrated and investigated. Since wideband high-gain antennas with different functionalities are highly required for the next generation of wireless communication, the main focus of this paper is to discuss primarily the antenna gain and bandwidth. Finally, a brief conclusion is drawn to have a quick overview of the content of this paper.
Introduction
Demand of high traffic capacity and speed in wireless communication systems led to the fifth-generation (5G) technologies [1]. The upcoming 5G technologies provide a multitude of advantages including high data rate, high reliability, and low power consumption. More important, it brings new born technologies to have smart cities and factories based on the industry 4.0 [2]. The millimeter-wave (MMW) frequency band has attracted significant attention among academic and industrial sectors, since it has enormous unlicensed bandwidth in comparison with other frequency bands [3]. Thus, MMW band can take an integral role in 5G communication systems. The MMW spectrum brings about compact structures and higher data rate. However, many concerns are remained, which should be addressed in the future communication technologies. One of these concerns is the high cost and complexity of fabrication processes within the MMW band. Another concern is the high energy loss of the MMW spectrum in comparison with the other frequency bands, which can be addressed by increasing the antenna gain.
New research directions have been done to find the effective solutions to address the aforementioned concerns over the MMW frequency band. Different antenna types with a variety of configurations have been proposed to compensate the high loss and propagation issues such as element inside the cavity to excite the entire structure [21]. Open-ended waveguide, patch antenna, stacked antenna, dielectric resonant antenna (DRA), dipole antenna, and crossed bowtie dipole can be used as the main radiating element inside the cavity. The PRS layer might have different configurations as will be discussed later. It can be a full dielectric structure or a periodic structure composed of an array of metallic unit cells. Due to having a cavity with reflective surfaces, multiple reflections of electromagnetic wave happen. A proper cavity thickness (the distance between the PRS and the ground plane) can superimpose in-phase transmitted waves, which enhances the antenna gain significantly. The phase and magnitude of the PRS reflection behaviour have remarkable impact on the performance of the RCAs in terms of gain, bandwidth, beam angle and aperture efficiency. Therefore, designing the PRS structures has an imperative role in the design of the RCAs to achieve the desired performance.
Analytical Methods
The design of RCA structures for desired radiation performances, requires an appropriate theoretical analysis. Many studies have been focused on how these structures can be analyzed, which led to a variety of analytical methods such as ray tracing, transmission line (TL), LW, EBG, and the principle of reciprocity methods. Among the mentioned analytical methods, the ray tracing, TL, and LW methods are more used in the literature. In this subsection, a brief review of these three methods is prepared, and a short comparison between them is drown.
RCAs are parallel-plate waveguides leading to the leakage of the ray and known as 2-D periodic leaky-wave antennas (LWAs) [34]. Leaky-wave antennas are considered as antennas with a directive beam, scanning the space as a function of frequency. Leaky-wave antennas are a kind of phased array antennas without phase shifters, which leads to a compact structure with a low energy consumption. There are many studies in the literature in which the functionality of the resonant cavity is discussed by LW method [25,30,31,[35][36][37][38]. Since comparing to other methods, this method is more efficient and accurate for different configurations of the RCAs; recent advanced studies are carried out using LW models, especially those with steering beam functionalities [34,[39][40][41]. The transverse equivalent network model might be used to derive proper formulas in terms of the angle of the beam, gain, beamwidth, leaky-wave phase and attenuation constants of the structure. The propagation constant is dependent on the PRS reflection coeficient and the distance between the ground plane and the PRS structure placed above the main radiating element.
Another analysis technique for the RCA structures is the ray tracing method. The ray tracing method was first introduced by Terentini [21], in which the RCA is analyzed as a resonant microwave cavity structure known as a Fabry-Pérot cavity. The resonance condition, which is dependent on the phase and magnitude of the reflection coefficient of the PRS needs to be satisfied in order to improve the radiation characteristics of the antenna. According to the resonance condition, the distance between the PRS and the ground plane layers are adjusted to create in-phase superposition of waves leaking out from the structure, which leads to high directive radiation pattern. In the ray tracing model, the diffracted rays are not considered since the structure size is assumed infinite. Consequently, this model gives initial design values, which facilitates the designing process; however, it is not as accurate and general as leaky-wave analyses, due to having some approximations assumed. In fact, this method is more applicable when the goal is to increase the antenna gain and bandwidth, and even to change the antenna polarization [29,42].
The transmission-line (TL) method, as the ray tracing method, gives the initial values of antenna design, making it straightforward and time efficient. Many studies have been carried out to demonstrate the functionality of the resonant cavity structure by the TL model [22,23,30,35,38,43,44]. In [22], the TL analysis is used to derive some formulas related to the bandwidth, gain, and beamwidth of the RCAs. For this purpose, the entire RCA structure is modeled by TLs with different characteristics, and some lumped elements. As a result, the thickness of different parts and the properties of the PRS in terms of refection coefficient are calculated in order to improve radiation performance of the RCA structure. Using this method resulted in better evaluation of the directivity of RCAs [44].
Research Directions
The demand for compact high-gain antennas with simple feeding networks has highly increased for the next generation of communication systems. The RCAs have attracted the interest of antenna designers for a variety of study directions. The latest studies on the resonant cavity structures combined with other newly PRSs and main radiating structures demonstrate the flexibility of RCAs to be designed over MMW spectrum. In this section, the study directions of the RCAs with the recent investigations and applicable examples, specially over the MMW frequency band, are reviewed.
3-dB Gain Bandwidth Improvement
RCAs are regarded as resonant structures with the drawback of narrow 3-dB gain bandwidth [36,37,45,46]. In Reference [45], the inverse proportionality between the maximum gain and 3-dB gain bandwidth of the RCAs is discussed and proved theoretically by the ray tracing analytical method. Higher gain contributes to narrower bandwidth, which has been considered as one of the concerns that has got the attention of researchers for many years. Consequently, many studies with different methods have been introduced to provide a solution to tackle the mentioned shortcomings of the RCAs. This section deals with presenting different methods carried out to increase the bandwidth of the RCAs.
Positive Reflection Phase Gradient
Inverting the reflection phase gradient of the PRS unit cell to achieve a positive slope is among the most famous and applicable methods in order to obtain wider 3-dB gain bandwidth in the design of the RCAs. References [47,48] are among the first studies conducted to demonstrate the possibility of achieving a positive reflection phase gradient to increase the 3-dB gain bandwidth. Changing the phase gradient behaviour can be achieved by using multi-layer PRS structures [49][50][51][52][53][54][55][56], thick full-dielectric PRSs [57], and thin one-layer metallo-dielectric PRSs [29,48,[58][59][60][61][62]. One reason behind using multi-layer PRS structures is creating multiple resonances at different frequencies, which can satisfy the multi resonance conditions over a desired bandwidth. Permitivity and thicknesses of the dielectric slabs, the distances between layers, and other parameters have impact on creating multiple resonant frequencies. It is worth noting that using multi-layer PRSs make the RCAs thicker which might be one of the concerns in some applications.
Several studies have been focused on achieving wider 3-dB gain bandwidth using PRS structures with positive phase slope over millimeter wave spectrum [40,[63][64][65]. As an example, in Reference [65], a wideband high-gain MMW FPCA with the operating frequency of 60 GHz is introduced. A printed ridge-gap waveguide (PRGW) technology is used as the slot antenna feed, because it is a proper candidate to suppress the surface waves with a good functionality over the MMW spectrum. The PRS is composed of gridded square patch (GSP) and square slot-loaded patch (SSLP) structures etched on two different dielectric layers. The configuration of the PRS unit cell, PRGW, and FPCA structures are demonstrated in Figure 1. The wideband characteristic is achieved by using a double-layer PRS unit cell with positive reflection phase gradient as demonstrated in Figure 2a. The "a" is used as a scale factor to control the phase of the unit cell. A maximum gain of 16.8 dBi and 3-dB gain bandwidth of 12.5% are achieved as shown in Figure 2b.
PRS Unit Cell with Sharp Resonance
In the studies reviewed in the previous subsection, it was indicated that having a resonant frequency at the middle of a desired frequency band with a positive phase gradient leads to a wider 3-dB gain bandwidth. In some studies, it is proved that having a sharp resonance at the centre of a frequency band results in 3-dB gain improvement [56,[66][67][68]. The phase variation can get a 180 degree jump at a resonant frequency to achieve a wide 3-dB gain bandwidth. In Reference [67], as demonstrated in Figure 3, a dual-layer full-dielectric PRS structure with a high permitivity laminate substrate is proposed to provide a sharp resonant frequency. A crossed dipole is used as the main radiator inside the RCA structure which can result in a wide impedance bandwidth and suitable CP characteristic. Figure 4a demonstrates the behaviour of reflection coefficient of the proposed PRS with a sharp resonant frequency. As can be seen, the sharp resonance creates extra frequencies (except the center frequency) to satisfy the resonance conditions which is the reason to make the antenna wideband. The incorporation of the PRS and crossed dipole contributes to a 3-dB gain bandwidth of 50.9% with a maximum gain of 15 dBic, as can be seen in Figure 4b. It should be noted that the optimum results are provided by optimizing the distance between the PRS layers along with their size, and the distance between the ground plane and the PRS layers. The proposed antenna possess simple and compact geometry while providing a high gain and circular polarization, which makes it a suitable candidate to be used in base stations.
New Configuration of PRS Structures: Nonuniform PRS Structures
Recent non-uniform configurations of superstrates mostly presented by Baba and Hashmi et al., have been designed to provide a significant increment in 3-dB gain bandwidth of the RCAs [28,[69][70][71][72]. The proposed PRS structures have taken advantage of the integration of different dielectric substrate slabs with different either thickness or permitivity. Basically, such structures are mainly used to compensate the non-uniform phase distribution of the RCA aperture as will be discussed later in the next section. In Reference [69], a planar PRS layer consisted of dielectric slabs with different permitivities is proposed. The proposed PRS named as transverse permitivity gradient (TPG) is a single-layer planar structure with the capability of the aperture phase correction using different sections with different permittivities. Next, Hashmi et al. demonstrated the possibility of a PRS structure composed of multiple dielectric slabs with different permitivity and thickness in order to improve the 3-dB gain bandwidth of a RCA [28,70]. They also investigated the PRS structures with stepped configurations and indicated how these stepped configurations can increase the antenna gain over a wide bandwidth. In Reference [71], the PRS structure is a stepped configuration with laminate substrates of different permitivity and thickness, whereas in [72], the PRS is the similar stepped configuration with just different thicknesses. A slot radiator fed by a waveguide is used as the main radiator for these structures.
Recent investigations with stepped configurations have been done in millimetre wave frequency band to increase the antenna bandwidth [32,73,74], which shows the flexibility of RCAs for different frequency bands. In Reference [74], the application of two different non-uniform PRS structures to enhance the 3-dB gain bandwidth and maximum gain of the RCA are investigated over the MMW spectrum. The first PRS is composed of a four concentric full dielectric rings of different permitivity and thickness, whereas the second PRS is made of a single laminate substrate with the same permitivity and different thicknesses. Both of the PRS structures have an stepped configuration, and an open-ended WR-15 waveguide is used as the main radiator inside the cavity to feed the antenna. The antenna structure without PRSs and PRS prototypes are demonstrated in Figure 5. The antenna with the second PRS has a maximum measured gain of 19.5 dBi with a proper matching from 55.2 GHz to 65 GHz. The simulated and measured results are displayed in Figure 6. These kinds of non-uniform PRS structures achieve remarkable gain-bandwidth product (GBP), which is a true merit used in the comparison between different antennas. The proposed antenna in [74] has a simple structure with a high gain and low cross-polarization, which makes it beneficial for base stations, point-to-point communication systems, autonomous radars, remote sensing satellites, and Internet of things (IoT).
Shape Manipulation of the Conventional RCA Configuration
The demand for wideband high-gain antennas have led researchers to seek different and novel methods to efficiently enhance the antenna bandwidth without sacrificing the antenna performance. It is proved that the performance of FPCAs can be improved by curving the ground plane or PRS architecture. In Reference [75][76][77], manipulating the configuration of the ground plane and PRS structures so that the distance between the PRS layer and the ground plane gradually gets unequal values for different parts of the RCA structures are considered by different methods. These manipulated structures are capable to compensate and correct the phase and magnitude distribution far from the center of the PRS structure that results in a broader 3-dB gain bandwidth. In Reference [75], a shaped ground plane with a semi-spherical configuration is used to make the 3-dB gain bandwidth wider. The configuration of the proposed RCA structure is shown in Figure 7a. The antenna performance is compared with the performance of a conventional RCA with a flat ground plane, and the results are shown in Figure 7b. The RCA structure reported in [75] provides a measured 3-dB gain bandwidth of 25% with a maximum gain of 17.7 dBi as demonstrated in Figure 8. Similar works have been done for MMW spectrum by using unconventional RCA structures [78,79].
Array Feed
Using array antennas instead of single main radiator inside the cavity structure is another conventional method to increase the 3-dB gain bandwidth. This idea has been investigated in many studies [27,33,43,80], which mostly used complicated feeding networks. In Reference [43], an array of patch elements are used as the radiators. Besides, by adding two PRS layers above the array antennas, the radiation performance of the antenna is improved. The antenna configuration and the results of the antenna gain are demonstrated in Figure 9. As can be seen, the maximum gain of a 2 × 2 array patch antenna without any PRS layer is almost the same as a RCA with a single patch as the main radiator. Similarly, the results are same for a 4 × 4 patch array without PRS and a RCA with a 2 × 2 patch array as the main radiator.
Although using array source inside the cavity results in reasonable improvement of the gain and 3-dB gain bandwidth, it is not a good idea for millimeter wave spectrum. Difficult fabrication process and high loss caused by feed networks are two main issues that reduce the tendency to use array structures at those frequencies.
Directivity Enhancement: Aperture Efficiency Improvement
There is another group of studies which have been conducted to compensate the non-uniform aperture magnitude and phase distribution of the conventional RCA structure [66,[81][82][83][84][85][86][87][88] and as a result to enhance the antenna directivity. Conventional and classical structures of the RCAs have non-uniform aperture phase and magnitude distribution, which result in the reduction of the antenna radiation performance. Compensating the non-uniform field distribution of a RCA aperture is an effective attempt to increase the antenna directivity and decrease side lobe level. Some of the studies have focused on the phase compensation [66,[81][82][83][84][85], whereas the others have investigated the aperture magnitude compensation [86,87] for enhancing the antenna performance. In a few studies such as [88], both phase and magnitude are compensated to create remarkable aperture efficiency. Using a non-uniform PRS structure is the most common method to compensate the non-uniform magnitude and phase distribution of electric filed over the antenna aperture. Using phase compensation method is more applicable, because it leads to greater improvement in comparison with the magnitude compensation.
In Reference [82], an stepped superstrate is used to compensate the non-uniform aperture phase distribution of a RCA, which leads to a higher directivity and aperture efficiency. The structure is a phase rectifying transparent superstrate (PRTS), which was fabricated by 3-D printing technology and the main radiating element is a microstrip patch antenna with a full-dielectric PRS structure illustrated in Figure 10. Figure 11 shows the phase distribution and the directivity of the antenna, before and after applying PRTS structure. As can be seen, the phase distribution becomes almost uniform all over the aperture, since the PRTS placed above the PRS compensates the phase delay. According to the Figure 11c, the directivity gets a significant improvement when that the PRTS is placed above the entire RCA structure.
Additionally, a few other methods, such as using different geometries and sizes for the PRS structures located at different distances from the center of the aperture, have been used in the literature to improve the gain and aperture efficiency of the RCAs [79,89,90].
Usually two different methods are used to achieve a high-gain CP RCA. The first one is to use a CP main radiator element and the radiation improvement of the entire structure is obtained through utilizing a PRS structure whose behaviour is independent of the polarization [67,68,[91][92][93][94][95][96][97].
The other method is known as self-polarizing RCA. In this method, the linear-to-circular polarization conversion is the subject that is studied. A linearly polarized (LP) antenna is used as the main radiating feed and its polarization is converted to circular through using a proper PRS placed above the RCA [98][99][100][101][102][103][104]. This method is preferred, since it does not need any feeding networks to make circular polarization; however it might not obtain a wide bandwidth.
Many studies have been reported for designing a high-gain CP RCA over the MMW spectrum in the literature [42,[105][106][107]. Some of them present a practical application of the CP RCAs for the 5G communication systems. In Reference [42], a CP high-gain RCA is designed for 5G multiple-input multiple output (MIMO) applications over 26 GHz to 31 GHz. A single-layer full dielectric PRS structure with a sharp resonant frequency is used to enhance the antenna radiation performance over a wide frequency band. A CP truncated patch antenna with a proper slot is placed inside the cavity to illuminate the entire structure as demonstrated in Figure 12. Figure 13
Reconfigurable RCAs
Reconfigurability of the RCAs is the capability of their structures to alter their radiation features by electrical elements or mechanical mechanism. A reconfigurable antenna might use different methods to alter operation frequency, radiation pattern, polarization, beamwidth, or even a combination of these variables.
Polarization reconfigurability can be achieved by using either a reconfigurable main radiating element [116][117][118][119][120][121][122] or a dependent polarization PRS [123][124][125][126][127]. In Reference [140], a four-polarizationreconfigurable aperture-coupled patch antenna is designed as the main radiator of a RCA structure, which works with two different linear polarizations. The PRS structure is designed with adjustable reflection phase consisting of four PIN diodes to steer the RCA beam. The entire RCA configuration is shown in Figure 14. By a proper arrangement of the main radiating elements and PRS structure, a dual-polarized 2-D beam-steering functionality is achieved as shown in Figure 15. Beam steering property is one of the demands of the new generation of wireless communications that enables the wide coverage. Using phased array antennas is the conventional solution for the communication coverage, in which the beam steering is realized by carefully adjusting the phase difference between the array elements. However, conventional phased array antennas suffer from high-cost fabrication process and complicated feeding networks with significant loss.
A variety of investigations have been carried out to demonstrate the capability of the RCAs to achieve reconfigured pattern in the millimeter wave frequency band. Many studies proposed that the main radiator placed inside the RCA structure takes the responsibility of the pattern reconfiurability [141][142][143] and the antenna radiation performance can be improved by using PRS structures. Other studies have proposed suitable PRS structures to manipulate the RCA radiation pattern instead of using a fed antenna with complicated feed networks [128][129][130][131][132][133][134][135][136][137][138][139]. PRS structures provide more degree of freedom to control the beam of the main radiator with other functionalities simultaneously without applying extra equipment. Therefore, many RCA structures have been proposed with either both pattern and polarization reconfigurability or pattern and beamwidth reconfigurability [115,140,[144][145][146].
In References [139,146], the possibility of using RCA with reconfigurable pattern characteristic for 5G is illustrated. In Reference [139], the beam tilting of a RCA is investigated through four different techniques at 60 GHz. The techniques used for tilting the beam include wedge-shaped dielectric lens (WSDL), discrete multilevel grating dielectric (DMGD), printed gradient surface (PGS), and perforated dielectric gradient surface (PDGS), which are based on the phase gradient surface method. Among the mentioned techniques, PGS and PDGS have better performance inorder to provide reconfigurability feature. Figure 16a shows a printed ridge gap waveguide (PRGW) used as the main radiating element in [139]. Six different structures of the PGS and PDGS were proposed and their functionality is investigated while they are placed above the main radiator. Three designs #1, #2, #3 for PGS and PDGS structures were done for different tilt angles. A maximum gain of around 22 dBi is achieved when these structures are used to tilt the beam of the RCA. The simulated and measured results achieved by using these six superstrates are illustrated in Figure 17. As shown, the tilted beam angles of θ = 14 0 , θ = 27 0 and θ = 44 0 are achieved by placing every design of PGS or PDGS above the main radiator. The proposed antenna in [139] is a potential candidate for narrow-band communication systems. It can be used in mobile devices due to its compact configuration with reconfigurable radiation pattern.
Further Fields of Study: Low Profile and Multi-Band RCAs
Reducing the total size of the RCAs, especially, the cavity height makes them more appealing for the future communication systems. Many works have been done to reduce the profile of RCAs [52,100,101,[147][148][149][150][151][152][153][154][155][156][157]. Minimizing the cavity height so that the antenna performance does not have any degradation has been challenging. Based on the analytical models, the height of RCA structure is dependent on the sum of the reflection phase of the ground plane and PRS layer, which is approximately half of the wavelength. Consequently, a ground plane with zero reflection phase, which can be realized by employing high impedance surfaces (HIS) called artificial magnetic conductors (AMC) can minimize the cavity thickness. A compact multi-band high-gain RCA is presented in [156] using a FSS structure as ground plane with zero reflection phase. In some cases, using a ground plane with an associate reflection phase opposite to the PRS reflection phase sign shrinks the thickness of the cavity a lot [150]. In Reference [153], a miniaturized-element frequency selective surface (MEFSS) cover consisted of a PRS and HIS is proposed to improve the radiation gain of the antenna in chip while having a small cavity height. The MEFSS cover is designed to be placed on the chip to increase the antenna radiation performance in RFICs. As shown in Figure 18, the antenna radiation gain can be increased by 9 dB by placing the MEFSS cover on top of the antenna, while the cover is designed to have a very small cavity height by designing the PRS and HIS layers. A maximum gain of 14 dBi with a cavity height of λ/30 is achieved through using the proposed MEFSS. This antenna is applicable for radar and communication systems, as well as wireless sensor networks for the future 5G systems. Multi-band high-gain antennas are highly required for the next communication systems. The capability of the RCA structures to possess the multi-band characteristic has been extensively investigated in the literature [156,[158][159][160][161][162][163]. Generally, creating different resonances that satisfy the resonance conditions is the way to design multi-band RCAs. In Reference [159], a dual-band high-gain RCA with two vertical and horizontal polarizations is proposed. A dual-feed microstrip patch antenna is used as the main radiator, which has dual-band and dual-polarization characteristics. A PRS composed of double-layer orthogonal dipole arrays to create V-pol and H-pol is placed above the main radiator. The RPS unit cell is designed so that the resonance condition is satisfied over two different frequency bands with orthogonal polarizations in order to improve the radiation characteristic of the RCA. The configuration of the main radiator and PRS are shown in Figure 19. The lower band and upper band have a maximum gain of 19.6 dBi and 18 dBi at 10 GHz and 11.6 GHz, respectively, as shown in Figure 20. The polarization of the RCA is vertical and horizontal at lower and upper frequency bands, respectively. Finaly, Table 1 lists some of the RCAs discussed in this paper to comprehensively summarize different works. The performance of the antennas in terms of maximum gain, overlapped bandwidth, operating frequency, polarization, reconfigurability, entire size, the height of cavity, and the number of PRSs used is demonstrated.
Conclusions
Since there is a high interest both in academia and industry for the high-gain antennas, this paper provides a comprehensive discussion on the RCAs in terms of their challenges, applications, and research trend. The RCAs owing to their advantages such as simple feed structure, planar configuration, high-gain, low-cost fabrication, and ease of integration with other systems are promising candidates to be used in the next communication systems.
The mechanism of the RCAs has been presented by different techniques; among them are ray tracing, TL, and LW models. The research fields of the RCAs were briefly discussed through demonstrating the practical examples and recent studies covering their low-profile, high aperture efficiency, wide 3-dB bandwidth, beam-steering and CP features and multi-beam and reconfigurability capabilities, especially in the millimeter-wave frequencies for future developments.
Having wider 3-dB gain bandwidth is still challenging in the implementation of the RCAs. Efficient designs and techniques to make wider 3-dB gain bandwidth including using PRS unit cells with positive reflection phase gradient, sharp resonance, and non-uniform configurations were reviewed. Besides, the possibility to generate a wider 3-dB gain bandwidth by manipulating the configuration of the ground plane and PRS structures was briefly discussed.
Multi functional antennas are the desired solution for the 5G millimeter wave frequencies to avoid no extra equipment, which lead to bulky structures. Reconfigurability of RCAs offer altering operation frequency, polarization, radiation pattern, and beamwidth to address the potential problems of the next communication systems. Beam-steering and CP characteristics add more flexibility to the antennas to tolerate the environment issues and to have stable performance, as explained in this paper.
Multi-band and low-profile RCAs are attractive topics covered in this review paper. In order to decrease the cavity height, decreasing the sum value of reflection phase of the PRS unit cell and ground plane is required, which can be obtained by using AMC or HIS structures as the ground plane. It is explained that to design a multi-band RCA, it is required to create different resonances that satisfy the resonance condition, simultaneously.
In summary, it is the stable functionality and stunning performance of the RCAs over different frequency bands that make them attractive for the next generation wireless communication systems, i.e., fifth generation (5G).
Conflicts of Interest:
The authors declare no conflict of interest.
Abbreviations
The following abbreviations are used in this manuscript: | 6,862.8 | 2020-07-01T00:00:00.000 | [
"Computer Science"
] |
Two-Element Dielectric Antenna Serially Excited by Optical Wavelength Multiplexing
A single pulsed laser beam containing multiple wavelengths (wavelength multiplexing) is employed to activate two semiconductor antennas in series. The dielectric nature of the semiconductors permits serial cascading of the antenna elements. Recently observed nonlinear characteristics of the radiated field as a function of the free carrier accelerating (bias) voltage are used to minimize the small interactions between elements. We demonstrate that the temporal electromagnetic radiation distribution of two serial antennas is sensitive to the three-dimensional pattern of the optical excitation source. One can, in turn, vary this distribution continuously by optical means to reconfigure the array.
Received October 1, 1998 A single pulsed laser beam containing multiple wavelengths (wavelength multiplexing) is employed to activate two semiconductor antennas in series. The dielectric nature of the semiconductors permits serial cascading of the antenna elements. Recently observed nonlinear characteristics of the radiated f ield as a function of the free carrier accelerating (bias) voltage are used to minimize the small interactions between elements. We demonstrate that the temporal electromagnetic radiation distribution of two serial antennas is sensitive to the three-dimensional pattern of the optical excitation source. One can, in turn, vary this distribution continuously by optical means to reconf igure the array. In this Letter we report the performance of two photoconductive antenna elements excited in series by picosecond laser pulses. Rather than provide a separate optical pathway to each element, we entrain two wavelengths in a single optical beam. This approach results in a more compact antenna array. Each semiconductor element is tailored to respond to only one of the wavelengths. Such an array can be reconf igured by the wavelength content of the optical beam. When this array is combined with coplanar excitations (multiple excitations of the same element), threedimensional electromagnetic (EM) source antennas are readily achieved. Because of their large bandwidth, pulsed microwave sources operating in the gigahertz range have been suggested for a number of applications, such as groundpenetrating radar (antipersonnel mine identification, utilities location), remote triggering encoders in mining, automobile anticollision systems, aircraft type identif ication, and portable medical and security scanning. 1 The use of semiconductor radiating elements combined with diode lasers and laptop-sized computers should result in compact, portable units compared with systems in use today. 2 Such conf igurations do not require the microwave plumbing and the mechanical steering associated with conventional centimeterwavelength sources and antennae.
Lasers (operating in either the cw or the pulse mode) activate the semicoductor elements by generating free carriers. 3 -6 Those photocarriers are accelerated in an externally applied electric field to produce EM radiation. When they are not illuminated, the semiconductor elements are essentially transparent to high-frequency EM fields and therefore are not interactive with other components. Because of their dielectric nature, inactive semiconductor elements also have a lower microwave cross section then their metallic counterparts.
The concept and the general experimental setup of the laser-induced pulsed picosecond EM source have been described in detail elsewhere. 5 Brief ly, a modelocked Q-switched YLF laser system provides 50-mJ pulse energy with 80-ps pulse duration at a wavelength of 1053 nm. The laser pulses are selectively chosen by use of a Pockels cell at a repetition rate of 378 Hz, and a KDP frequency doubler converts the pulses to a wavelength of 527 nm. The doubling is accomplished in such a manner as a to allow a significant amount of energy to radiate at 1053 nm as well. A photodetector is used to trigger a TEK11802 sampling scope. Pulses of 100-ps duration are readily resolved, and 10-ps resolution is possible with this setup. A 20-kV 5-ms bias pulse is synchronized with the laser pulse. Using pulsed bias instead of dc bias reduces heating and surface f lashover on the elements. Figure 1 illustrates a serial conf iguration of the photoconductive array of two semiconductor elements photoactivated by such a dual-wavelength laser source.
The two-element system consists of a GaAs wafer in the rear and an InP wafer in front. Undoped GaAs (bandgap E g 1.43 eV at room temperature) strongly absorbs the 526-nm wavelength but is transparent at the 1053-nm wavelength, whereas the InP with Fe impurities (E g 1.32 eV at room temperature) The nonlinear characteristics of the radiated field versus the bias field generated by a gigahertz photoconducting antenna have been reported elsewhere. 7 The amplitude of such fields, E͑t͒, is proportional to their final velocity ͑v͒: where e is the unit charge quantity, v is the carrier velocity, R is the optical ref lectivity of the semiconductor at frequency v, en is the photon energy, I op is the optical intensity, and t r is the photocarrier lifetime. Typically the accelerating carriers (usually electrons) in the semiconductor can reach their final velocity within a few picoseconds or less, i.e., a time much shorter than the optical pulse duration of ϳ80 ps. Consequently the waveform of the generated microwave pulse emulates the prof ile of the optical pulse in the time domain. The carrier velocity is not linear with the bias electric field, and some threshold value exists above which a saturation plateau is observed. 7 This relationship is shown in Fig. 2.
When the bias field is set at the plateau, the photoinduced EM signal will become insensitive to variations of the bias field. Taking advantage of this property, we established the bias field of the front element above the threshold value for InP (6 kV͞cm for GaAs and 12 kV͞cm for InP). Therefore the EM field arriving from the rear (GaAs) element does not affect the generation of the EM field in the front (InP) element (see Fig. 3). Furthermore, the EM pulse and the optical pulse will arrive essentially simultaneously at the front element, thus ensuring that the combined (front and rear elements) EM field will be in phase. In this conf iguration the couplings between the elements are expected to be minimal, and the total radiation signal in the far field is given as the superposition of the individual EM fields. Higher gain and beam-width narrowing of the far-f ield EM radiation are thus expected. 8,9 These expectations were confirmed experimentally, as shown in Fig. 4. Note that in Fig. 4 the pattern from the combined elements approaches the cosine squared shape, whereas the singleelement pattern is significantly broader. This finding is consistent with dipole versus monopole behavior. The final velocity of free carriers in III-V semiconductors decreases with electric field above some threshold. 10 Such negative differential conductivity behavior is applicable to photocarriers as well 11 and can be used to further reduce interelement coupling. By taking advantage of the fact that the microwave signal has the opposite polarity from the bias field, 12 we can ensure that the electromagnetic pulse from the rear element, when it arrives at the front element, will decrease the bias field there. Referring to Fig. 3, if the bias on the front element is to the right of the plateau, decreasing the bias will actually increase the carrier velocity. Note, finally, that the direction of the far-f ield maximum amplitude aligns with the optical beam direction, permitting the EM pulse to be steered optically. Two curves are scaled to have a same radiation field magnitude at the bias field of 12 kV͞cm. 7 Fig. 3. Typical nonlinear behavior of the radiation power versus the bias field for the III -V photoconductive antenna. The shift of the actual bias from E b to E b 0 owing to other arriving EM pulses will cause only insignif icant disturbance since E r 0 (radiation field) is nearly equal to E r . Thus the coupling among the photoconductive antenna elements is minimal. Fig. 4. Polar plot of the EM radiation pattern produced by a single-beam dual-wavelength laser source. Two patterns are scaled to have the same boresight gain. EM radiation beam-width narrowing was observed. The cos 2 u plot is shown for comparison.
It is important to understand that the power that might be realized from laser-induced pulsed picosecond EM source devices is relatively independent of the laser power, provided that there is suff icient f luence to ensure that all the illuminated semiconductor volume is saturated. We have estimated 5 individual peak powers of 1 kW (a conversion of the static electric energy,´E 2 ͞2, stored in a 10-cm 2 photoconducting antenna area with a typical 1-mm optical absorption depth for a 50-ps pulse). From our measurements of radiated field strength, at a distance of 2 m it was clear that the field was much less than the 1-kV estimate. This finding can be attributed to the fact that the dc-electrode conf iguration was not optimized and that the laser power was insuff icient in these initial trails to achieve saturation f luence. Also, the total radiated power should increase with the square of the number elements.
In summary, we have demonstrated for what is believed to be the first time that two photoconductive antenna elements can be optically excited in an inline serial conf iguration with a single laser source. The array elements are controlled and reconfigured by optical wavelength multiplexing. We reduced mutual couplings or cross talk among the elements by setting the bias fields at the plateau voltages for the individual semiconductor sources. The radiation beam directionality as well as its intensity can be manipulated by use of a fiber-optic network and choice of input optical powers. | 2,199.4 | 1999-02-15T00:00:00.000 | [
"Physics"
] |
QCD Equation of State of Dense Nuclear Matter from a Bayesian Analysis of Heavy-Ion Collision Data
Bayesian methods are used to constrain the density dependence of the QCD Equation of State (EoS) for dense nuclear matter using the data of mean transverse kinetic energy and elliptic flow of protons from heavy ion collisions (HIC), in the beam energy range $\sqrt{s_{\mathrm{NN}}}=2-10 GeV$. The analysis yields tight constraints on the density dependent EoS up to 4 times the nuclear saturation density. The extracted EoS yields good agreement with other observables measured in HIC experiments and constraints from astrophysical observations both of which were not used in the inference. The sensitivity of inference to the choice of observables is also discussed.
The properties of dense and hot nuclear matter, governed by the strong interaction under quantum chromo dynamics (QCD), is an unresolved, widely studied topic in high energy nuclear physics.First principle lattice QCD studies, at vanishing and small baryon chemical potential, predict a smooth crossover transition from a hot gas of hadronic resonances to a chirally restored phase of strongly interacting quarks and gluons [1,2].However, at high net baryon density i.e., large chemical potential, direct lattice QCD simulations are at present not available due to the fermionic sign problem [3].Therefore, QCD motivated effective models as well as direct experimental evidence are employed to search for structures in the QCD phase diagram such as a conjectured first or second order phase transition and a corresponding critical endpoint [4][5][6].Diverse signals had been suggested over the last decades [7][8][9][10][11], but a conclusive picture has not emerged yet due to lack of systematic studies to relate all possible signals to an underlying dynamical description of the system, both consistently and quantitatively.
Recently, both machine learning and Bayesian inference methods have been employed to resolve this lack of unbiased quantitative studies.A Bayesian analysis has shown that the hadronic flow data in ultra relativistic heavy-ion collisions at the LHC and RHIC favors an EoS similar to that calculated from lattice QCD at vanishing baryon density [12].In the high density range where lattice QCD calculations are not available, deep learning models are able to distinguish scenarios with and without a phase transition using the final state hadron spectra [13][14][15][16][17].
This work presents a Bayesian method to constrain quantitatively the high net baryon density EoS from data of intermediate beam energy heavy-ion collisions.A recent study has attempted such an analysis by a rough, piecewise constant speed of sound parameterization of the high density EoS [18].In this study, a more flexible parameterization of the density dependence of the EoS is used in a model which can incorporate this density dependent EoS in a consistent way and then make direct predictions for different observables.
In this work, the dynamic evolution of heavy-ion collisions is entirely described by the microscopic Ultrarelativistic Quantum Molecular Dynamics (UrQMD) model [19,20] which is augmented by a density dependent EoS.This approach describes the whole system evolution consistently within one model.No parameters besides the EoS itself are varied here.
UrQMD is based on the propagation, binary scattering and decay of hadrons and their resonances.The density dependent EoS used in this model is realized through an effective density dependent potential entering in the nonrelativistic Quantum Molecular Dynamics (QMD) [7,21,22] equations of motions, ṙi = ∂H ∂p i , ṗi = − ∂H ∂r i .
Here H = i H i is the total Hamiltonian of the system including the kinetic energy and the total potential energy V = i V i ≡ i V n B (r i ) .The equations of motion are solved given the potential energy V , which is related to the pressure in a straightforward manner [23].
Here, P id (n B ) the pressure of an ideal Fermi gas of baryons and is the single particle potential.Evidently, the potential energy is directly related to the EoS and therefore the terms potential energy and EoS are interchangeably used in this letter.
This model assumes that only baryons are directly affected by the potential interaction [24].A much more detailed description of the implementation of the density dependent potential can be found in [23,25].Note that this method does yield for bulk matter properties, strikingly similar results as the relativistic hydrodynamics simulations when the same EoS is used [25].
To constrain the EoS from data, a robust and flexible parameterization for the density dependence of the potential energy that is capable of constructing physical equations of state (EOSs) is necessary.For densities below twice the nuclear saturation density (n 0 ), the EoS is reasonably constrained by the QCD chiral effective field theory (EFT) calculations [26,27], data on nuclear incompressibility [28], flow measurements at moderate beam energies [7,[29][30][31] and Bayesian analysis of both neutron star obervations and low energy heavy-ion collisions [32].This work focuses on the high density EoS, particularly on the range 2n 0 -6n 0 , which is not well understood yet.Therefore, the potential energy V (n B ) is fixed for densities up to 2n 0 by using the Chiral Mean Field (CMF) model-fit to nuclear matter properties and flow data in the low beam energy region [23].For densities above 2n 0 , the potential energy per baryon V is parameterized by a seventh degree polynomial: where h=-22.07MeV is set to ensure that the potential energy is a continuous function at 2n 0 .This work constrains the parameters θ i and thus the EoS, via Bayesian inference using the elliptic flow v 2 and the mean transverse kinetic energy ⟨m T ⟩ − m 0 of mid rapidity protons in Au-Au collisions at beam energy √ s NN ≈ 2 − 10 GeV.[40][41][42].Important, sensitive observables such as the directed flow [9,43] are then used to cross check the so extracted EoS.The choice of proton observables (as proxy to baryons) is due to the fact that interesting features in the EoS at high baryon density and moderate temperatures are dominated by the interactions between baryons and protons form the most abundant hadron species, actually measured in experiments, for beam energies considered in present work.Further details on the choice of data and calculation of flow observables are given in appendix A, which includes Ref. [44].
The experimental data D = {v exp 2 , ⟨m T ⟩ exp − m 0 } are used to constrain the parameters of the model θ = {θ 1 , θ 2 , ..., θ 7 } by using the Bayes theorem, given by P (θ|D) ∝ P (D|θ)P (θ). ( Here P (θ) is the prior distribution, encoding our prior knowledge on the parameters while P (D|θ) is the likelihood for a given set of parameters which dictates how well the parameters describe the observed data.Finally, P (θ|D) is the desired posterior which codifies the updated knowledge on the parameters θ after encountering the experimental evidence D.
The objective is to construct the joint posterior distribution for the 7 polynomial coefficients (θ) based on experimental observations, for which Markov Chain Monte Carlo (MCMC) sampling methods are used.For an arbitrary parameter set, the relative posterior probability up to an unknown normalisation factor is simply given by the prior probability as weighted by its likelihood.To evaluate the likelihood for a parameter set, the v 2 and the ⟨m T ⟩ − m 0 observables need to be calculated by UrQMD.The MCMC method then constructs the posterior distribution by exploring the high dimensional parameter space based on numerous such likelihood evaluations.This requires numerous computationally intensive UrQMD simulations which would need unfeasible computational resources.Hence, Gaussian Process (GP) models are trained as fast surrogate emulators for the UrQMD model, to interpolate simulation results in the parameter space [12,[45][46][47].Cuts in rapidity and centrality that align with that of the experiments are applied on UrQMD data to create training data for the GP models.The constraints applied to generate the physical EoSs to train the models, the performance of the GP models and other technical details can be found in appendix B.
The prior on the parameter sets is chosen as Gaussian distributions with means and variances evaluated under physical constraints.More details on the choice of the priors are given in appendix C. The log-likelihood is evaluated using uncertainties from both the experiment and from the GP model.The prior, together with the trained GP-emulator, experimental observations and the likelihood function are used for the MCMC sampling by employing the DeMetropolisZ [48, 49] algorithm from PyMC v4.0 [50].
Closure tests.In order to verify the performance of the Bayesian inference method described above, two closure tests are performed.The first test involves constructing the posterior using v 2 and ⟨m T ⟩ − m 0 , simulated with the experimental uncertainties from UrQMD for a specific but randomly chosen EoS.The inference results are then compared to the known 'ground-truth'.Figure 1 shows the posterior constructed in one such test for a random input potential.The black curve in the plot is the 'ground-truth' input potential while the color contours represent the reconstructed probability density for a given value of the potential V (n b ).Two specific estimates of the 'ground-truth' potential are highlighted in the figure besides the posterior distribution of the potential.These are the Maximum A Posteriori (MAP) estimate, which represents the mode of the posterior distribution as evaluated via MCMC and the 'MEAN' estimate as calculated by averaging the values of the sampled potentials at different densities.The comparison of the MAP and the MEAN curves with the 'ground-truth' shows that the reconstruction results from the Bayesian Inference are centered around the 'ground-truth' EoS and the sampling converges indeed to the true posterior.
From the spread of the posterior it can be seen that the EoS in the closure test is well constrained up to densities 4n 0 for the observables used in the present study.For densities from 4n 0 up to 6n 0 the generated EoSs have larger uncertainties.However, the mean potentials follow closely the true potential.
The second closure test is done in order to determine the sensitivity of the inference to the choice of the observational data.Hence, the procedure is similar to the previous test, except that the ⟨m T ⟩ − m 0 values for √ s NN = 3.83 and 4.29 GeV are not used in this test to estimate the posterior.When these two data points are excluded, the agreement of the 'ground-truth' EoS with the MAP and MEAN estimates decreases considerably for densities greater than 4n 0 .This indicates that these data points are crucial indeed for constraining the EoS at higher densities.Further details about these closure tests, and the sensitivity on excluding different data points, can be found in appendices D, E and F. There, also a comparison of the prior and posterior probability distributions is shown to highlight the actual information gain obtained through the Bayesian inference.
Results based on experimental data: The results of sampling the posteriors by using experimental data, for the two cases, with and without the ⟨m T ⟩ − m 0 values at √ s NN = 3.83 and 4.29 GeV, are shown in figure 2. The upper panel corresponds to using 15 experimental data points while the lower panel shows the results without the two ⟨m T ⟩ − m 0 values.The data as used in this paper do well constrain the EoS, for densities from 2n 0 to 4n 0 .However, beyond 4n 0 , the sampled potentials have a large uncertainty and the variance is significantly larger for the posterior extracted from 13 data points.Beyond densities of about 3n 0 , the posterior extracted using 13 data points differs significantly from the posterior extracted using all 15 points.This is quite different from our closure tests, where the extracted MAP and MEAN curves did not depend strongly on the choice of the data points used.This indicates a possible tension within the data in the context of the model used.
To understand this significant deviation which appears when only two data points are removed, the MAP and MEAN EoS resulting from the two scenarios are implemented into the UrQMD model to calculate the v 2 and the ⟨m T ⟩ − m 0 values which are then compared with the experimental data which were used to constrain them.Figure 3 shows the MAP and MEAN curves together with 1-sigma confidence intervals from the posterior.Both results, with different inputs, fit the v 2 data very well ex- cept for the small deviation at the high energies.The fit is slightly better when the ⟨m T ⟩ − m 0 values at the lowest energies are removed.At the same time, using all data points results in larger ⟨m T ⟩ − m 0 values for both the MAP and MEAN curves.The bands for ⟨m T ⟩ − m 0 are much broader than the bands for v 2 .Yet, the uncertainty bands clearly support the differences in the fit portrayed by the MEAN and MAP curves.The model encounters a tension between the ⟨m T ⟩ − m 0 and the v 2 data.This tension may either be due to a true tension within the experimental data, or due to a shortcoming of the theoretical model used to simulate both the ⟨m T ⟩ − m 0 and the v 2 data at high beam energies for a given equation of state.It should also be noted that at higher beam energies the contributions from the mesonic degrees of freedom to the equation of state becomes more dominant which may make an explicitly temperature dependent equation of state necessary.
Finally, the extracted EoS can be tested using various observables like differential flow measurements (see appendix G, which include Refs.[51][52][53][54][55]) or different flow coefficients.The slope of the directed flow dv 1 /dy at mid rapidity are calculated using the reconstructed MEAN and MAP EoSs.The results together with available experimental data are shown in figure 4. The dv 1 /dy prediction closely match the experimental data, especially at the higher energies, for the MEAN EoS extracted from all 15 data points.The 1-sigma confidence intervals are indicated as colored bars.It is shown only for one beam energy due to the high computational cost.It can be seen that at high energies, in the 13-points case, the prediction clearly undershoots the data while in the 15-points case, the experimental data lies at the border of the 1-sigma band.The reconstructed EoSs for all other energies are consistent with the dv 1 /dy data though it was not used to constrain the EoSs.
To relate the extracted high density EoS to constraints from astrophysical observations, the squared speed of sound (c 2 s ) at T = 0 is presented for the MEAN EoSs as a function of the energy density in Figure 5, together with a contour which represents the constraints from recent Binary Neutron Star Merger (BNSM) observations [60] [61].The speed of sound, as the derivative of the pressure is very sensitive to even small variations of the potential energy.The c 2 s values estimated from all data points show overall agreement with the c 2 s constraints from astrophysical observations and predicts a rather stiff equation of state at least up to 4n 0 .In particular, both the astrophysical constraints (see also [62]) and the EoS inference in the present work gives a broad peak structure for c 2 s .This is compatible with recent functional renormalization group (FRG) [63] and conformality [64] analyses.However, if only the 13 data points are used, the extracted speed of sound shows a drastic drop, consistent with a strong first order phase transition at high densities [8,9].This is consistent with the softening phenomenon observed for ⟨m T ⟩ − m 0 data shown in Figure .3. In order to give an estimate of the uncertainty on the speed of sound, we have calculated the speeds of sound for 100000 potentials which lie within the 68% credibility interval of the coefficients, however excluding those which lead to acausal equations of state for densities below 4.5 n0.
Conclusion.
Bayesian inference can constrain the high density QCD EoS using experimental data on v 2 and ⟨m T ⟩ − m 0 of protons.Such an analysis, based on HIC data, can verify the dense QCD matter properties extracted from neutron star observations and complements astrophysical studies to extract the finite temperature EoS from BNSM merger signals as well as constrain its dependence on the symmetry energy.
A parametrized density dependent potential is introduced in the UrQMD model used to train Gaussian Process models as fast emulators to perform the MCMC sampling.In this framework, the input potential can be well reconstructed from experimental HIC observables available already now from experimental measurements.The experimental data constrain the posterior constructed in our method for the EoS, for densities up to 4n 0 .However, beyond 3n 0 , the shape of the posterior depends on the choice of observables used.As a result, the speed of sound extracted for these posteriors exhibit obvious differences.The EoS extracted using all available data points is in good agreement with the constraints from BNSMs with a stiff EoS for densities up to 4n 0 and with-out a phase transition.A cross check is performed with the extracted potentials by calculating the slope of the directed flow.Here, a MEAN potential extracted from all 15 data-points gives the best, consistent description of all available data.The inferences encounter a tension in the measurements of ⟨m T ⟩ − m 0 and v 2 at a collision energy of ≈4 GeV.This could indicate large uncertainties in the measurements, or alternatively the inability of the underlying model to describe the observables with a given input EoS.Note, that the data are from different experiments that have been conducted during different time periods.The differences in the acceptances, resolutions, statistics and even analysis methods of experimental data makes it difficult for us to pin down the exact sources of these effects.
Tighter constraints and fully conclusive statements on the EoS beyond density 3n 0 require accurate, high statistics data in the whole beam energy range of 2-10 GeV which will hopefully be provided by the beam energy scan program of STAR-FXT at RHIC, the upcoming CBM experiment at FAIR and future experiments at HIAF and NICA.It is noted that, when approaching higher beam energies, which would be important in extending the constraints to higher temperatures and/or densities, the currently used transport model needs to incorporate further finite-temperature and possible partonic matter effects together with relativistic corrections, which we leave for future studies.Further effort should be put into the development and improvement of the theoretical models to consistently incorporate different density dependent EoSs for the study of systematic uncertainties [65].In future, the presented method can also be extended to include more parameters of the model as free parameters for the Bayesian inference, which would also require more and precise input data.In addition, other observables such as the higher order flow coefficients and v 1 can be incorporated into the Bayesian analysis, if permitted by computational constraints, for a more comprehensional constraint of the EoS in the future.The GP emulators are trained on a set of 200 different parameter sets, each with a different high density EoS and the performance of these models is then validated on another 50 input parameter sets.15 different GP models are trained, each one predicting one of the observables (v 2 for 10 collision energies + ⟨m T ⟩ − m 0 for 5 collision energies).The trained GP models can be evaluated by comparing the GP predictions with the "true" results of UrQMD simulations.The performance of the GP models in predicting the v 2 and ⟨m T ⟩ − m 0 observables for 50 different EoSs in the validation dataset are shown in figures 8 and 9 respectively.As evident in these plots, the GP models can accurately predict the simulated observ- ables, given the polynomial coefficients.Hence, the GP models can be used as fast emulators of UrQMD during the MCMC sampling.All the posterior distributions presented in this work are constructed by 4 different MCMC chains.Each chain generates 25000 samples after 10000 tuning steps.
Appendix C: The prior In the following we will explain the choice of the prior distributions which is used as starting point of the Bayesian inference.Technically speaking, the prior distribution of parameters θ i are chosen as Gaussian distributions whose means and variances are estimated from the randomly sampled EoSs, under physical constraints, used in the training of the Gaussian Process Emulators.These constraints were introduced to ensure numerically stable results in training the GP models.To create such a robust training dataset, different physics constraints were applied as discussed in appendix B. These constraints eliminate some of the wildly fluctuating and superluminal EoSs from the training data.
To ensure that the prior in the analysis is broad enough to reflect an a priori high degree of uncertainty (i.e., without introducing a bias) the mean and width of the distributions in the constraint GP training where used also in the prior.However, the polynomial coefficients θ i resulting from these constraints, used to construct the prior distributions for the Bayesian inference, are then sampled independently and are thus not correlated as they would be in the GP model training.Thus, the priors for the Bayesian inference are much broader than the distributions used for the GP model training.The means and standard deviations of the Gaussian priors for the polynomial coefficients are shown in the table I.
Regarding the prior for the Bayesian inference, it is important to note that a prior based only on the GP training constraints could also be a good starting point for the parameter estimation but not a necessary one.The physics constraints can disfavor the acausal range for the parameters.However, we employ this range only as a soft constraint in the prior as we use the mean and width of each coefficient independently, thereby the prior is not limited by the correlations between the coefficients from the GP-training set.This results in inferred potentials which can also be outside the training range for the the various equations of state required to train the Gaussian Process emulator.Once the EoS is constrained, of course, many observables for many beam energies and system sizes can be predicted and compared.We are also planning to make the model available in the future so that all these possibilities can be explored.
In addition to the directed flow, which was shown in the letter, a comparison with recently published HADES data on the differential elliptic flow in Au-Au collisions at E lab = 1.23AGeV [55] is presented here.This comparison of the two different MEAN EoS to HADES data is shown in figure 15.As one can see, the extracted EoSs reproduce the p T dependence nicely up to a proton momentum of 1 GeV.Above this range, the model slightly overestimates the elliptic flow compared to HADES data.The reason for this is likely a small momentum dependence of the potential interaction which is not considered in the present approach.It is however important to note that the integrated elliptic flow is only sensitive to the flow around the maximum of the proton p T distribution which corresponds roughly to p T between 300 and 400 MeV.
Figure 1 .
Figure 1.(Color online) Visualisation of the sampled posterior in the closure test.The color represents the probability for the potential at a given density.The 'ground-truth' EoS used for generating the observations is plotted as black solid line.The red dashed and orange dot-dashed curves are the MAP and MEAN EoS for the posterior.
Figure 2 .
Figure 2. (Color online) Posterior distribution for the EoS inferred using experimental observations of v2 and ⟨mT ⟩−m0.The top figure is the posterior when all 15 data points were used while the bottom figure is obtained without using the ⟨mT ⟩ − m0 values for √ sNN= 3.83 and 4.29 GeV.The MAP and MEAN EoSs in both cases are plotted in red dashed and orange dot-dashed curves respectively.The vertical, grey line depicts the highest average central compression reached in collisions at √ sNN=9 GeV.The CMF EoS is plotted in violet for density below 2n0.
Figure 3 .
Figure 3. (Color online) v2 and ⟨mT ⟩ − m0 values from UrQMD using the MEAN and MAP EoS as extracted from measured data.The observables for both MAP and MEAN EoSs, extracted by using all 15 data points are shown as solid and dashed red lines respectively, while those generated using only the 13 data points are shown as solid and dashed black lines respectively.The experimental data are shown as blue squares.The uncertainty bands correspond to a 68 % credibility constraint constructed from the posterior samples
Figure 4 .Figure 5 .
Figure 4. (Color online) Slope of the directed flow, dv1/dy, of protons at mid rapidity.The experimental data [37-39, 55-59]are shown as blue squares.The colored bars correspond to a 68% credibility constraint constructed from the posterior samples.
Figure 7 .
Figure 7. (Color online)Visualisation of the v2 and ⟨mT ⟩−m0 for 50 random EoSs from the training data.The upper plot is the v2 and the lower plot is the ⟨mT ⟩ − m0 as a function of √ sNN.The experimental measurements are plotted in blue squares while the gray lines are from the training EoSs.
Figure 9 .Figure 15 .
Figure 9. (Color online) Performance of the Gaussian Process models in predicting the ⟨mT ⟩ − m0 for 5 different collision energies.The predictions are shown in blue while the black, dashed line depicts the true= predicted curve.
Table I .
Means (µ) and standard deviations (σ) of the Gaussian priors for the seven polynomial coefficients (θi). | 5,943 | 2022-11-21T00:00:00.000 | [
"Physics"
] |
Hall Effect at the Focus of an Optical Vortex with Linear Polarization
The tight focusing of an optical vortex with an integer topological charge (TC) and linear polarization was considered. We showed that the longitudinal components of the spin angular momentum (SAM) (it was equal to zero) and orbital angular momentum (OAM) (it was equal to the product of the beam power and the TC) vectors averaged over the beam cross-section were separately preserved during the beam propagation. This conservation led to the spin and orbital Hall effects. The spin Hall effect was expressed in the fact that the areas with different signs of the SAM longitudinal component were separated from each other. The orbital Hall effect was marked by the separation of the regions with different rotation directions of the transverse energy flow (clockwise and counterclockwise). There were only four such local regions near the optical axis for any TC. We showed that the total energy flux crossing the focus plane was less than the total beam power since part of the power propagated along the focus surface, while the other part crossed the focus plane in the opposite direction. We also showed that the longitudinal component of the angular momentum (AM) vector was not equal to the sum of the SAM and the OAM. Moreover, there was no summand SAM in the expression for the density of the AM. These quantities were independent of each other. The distributions of the AM and the SAM longitudinal components characterized the orbital and spin Hall effects at the focus, respectively.
Introduction
In 1909, Poynting [1] predicted that left-handed circularly polarized light has a spin angular momentum (SAM), or in short, a spin of −1, and right-handed circularly polarized light has a spin of +1. More precisely, he predicted that each photon could have a spin equal to Planck's constant: either − or . In 1936, Beth [2] verified this experimentally by showing that when linearly polarized light passes through a quarter-wave plate, the plate acquires a torque. In 1992, Allen showed [3] that light, including each photon with a vortex phase described by the angular harmonic exp(in ϕ), has an orbital angular momentum (OAM) n, where n is the topological charge (TC). In the paraxial case, the SAM and OAM are independent and are preserved separately during light propagation in free space. However, spin-orbit conversion (SOC) can occur when light is sharply focused near the focus [4]. So far, a lot of studies have been devoted to the investigation of SAM, OAM, and SOC [5]. Paper [5] is a small review of SOC in a tight focus of structured light. In [6], the tight focusing of radially polarized light was studied. In this work, it was shown that the intensity of the longitudinal light component at the focus increases with increasing numerical aperture, and becomes equal to the intensity of the transverse component at a unit numerical aperture. The Hall effect was investigated at the focus of an optical vortex with radial polarization [7]. It was shown in [7] that for the tight focusing of an optical vortex with radial polarization, the SAM is positive at the focus near the optical axis if the TC of the vortex is +1, and the SAM is negative if the TC of the vortex is −1. This is the so-called catalyst-like effect. In [8], the authors showed that when focusing an optical vortex with radial polarization, the longitudinal projection of the SAM vector has different signs at different distances from the optical axis in the focal plane. This is the radial spin Hall effect. In [9], the 3D SAM was studied in the tight focus of an optical vortex with linear polarization. In this study, the force vector was calculated that will act on an ellipsoidal particle that is in focus. The tight focusing of an optical vortex with azimuthal polarization was observed in [10]. It was shown in [10] that when the optical vortex TC sign changes near the optical axis in the focus, the sign of the SAM longitudinal projection also changes, and therefore, the particle placed in the focus changes the rotation direction around its own axis and around the optical axis. In [11], the angular momentum (AM) in a sharp focus of hybrid cylindrical vector beams was studied. It was shown in [11] that for such light fields, the longitudinal component of the SAM is equal to zero at the focus. The orbital motion of microparticles in a tight focus of optical vortices with circular and radial polarization was investigated in [12]. In [13], SOC was considered in nonparaxial beams with hybrid polarization. It was shown in [13] that when light with a high-order hybrid polarization is tightly focused, regions are formed in the focus in which the longitudinal components of the OAM and the SAM change signs. That is, the spin and orbital Hall effects take place. Beams with hybrid polarization in a tight focus were observed in [14]. It was shown in [14] that when focusing light whose polarization changes only along the radius, the polarization in the focal plane will also change along the radius. Linear polarization and elliptical polarization will alternate but have the same sign. In [15], the paraxial focusing of Bessel beams with circular polarization was studied. It was shown in [15] that the sign of the angular momentum vector will be different on different sides of the light intensity ring in the beam. The tight focusing of high-order Poincare beams was considered in [16]. In our recent works, we investigated the Hall effect in the tight focus of high-order cylindrical vector beams [17], beams with hybrid inhomogeneous polarization [18], Poincaré beams [19], and optical vortices with circular polarization [20]. Another version of the Hall effect in a sharp focus appears when the center of a vortex laser beam gravity is shifted in the case of its limitation by a diaphragm [21]. The Hall effect in the tight focus of an optical vortex with linear polarization has not been considered before.
We note that the spin Hall effect arises not only in a tight focus but also when light is scattered by inhomogeneous structures. Thus, it was shown theoretically in [22] and experimentally in [23] that when a laser beam with linear polarization is reflected from a microresonator with Bragg mirrors, four regions with circular polarization of different signs are formed in the beam. Furthermore, in [24], it was experimentally shown that the scattering of a Hermite-Gaussian (HG 0,1 ) beam with linear polarization on a silver nanowire (AgNW) also gives the spin Hall effect. It was shown in [25] that due to SOC, gold particles placed in the tight focus of the Laguerre-Gaussian vortex beam (LG 0,1 ) rotate at different speeds for light with left and right circular polarization.
In this study, we considered the tight focusing of an optical vortex with an integer TC and linear polarization. Using Richards-Wolf theory [26], which accurately describes light in the vicinity of a tight focus of coherent light, exact analytical expressions were obtained for the longitudinal components of the SAM, OAM, and AM vectors in the focal plane for an optical vortex with linear polarization. It was shown that the longitudinal SAM and OAM components averaged over the beam cross-section were preserved in the initial and the focal planes. It was also demonstrated that there was a separation of regions with different signs of the SAM longitudinal component and regions with different signs of the AM longitudinal component at the focus. It was found that the AM and the SAM values were independent and sufficient to describe the light at the focus, while the meaning of the OAM value at the focus was not clear since the AM was not the sum of the SAM and OAM. However, it was easy to prove the conservation of the OAM value, while it was not possible to prove the conservation of the AM.
Components of the Electric and the Magnetic Fields and the Energy Flux at the Focus
Consider the initial Jones vector for an optical vortex with linear polarization: where (r, ϕ) are polar coordinates in the beam cross-section, n is the TC and is an integer, and the linear polarization vector is directed along the horizontal x-axis. In [27], the electric and magnetic field components in the plane of tight focus for the initial field (1) were obtained: E x = i n−1 2 e inϕ 2I 0,n + e 2iϕ I 2,n+2 + e −2iϕ I 2,n−2 , E y = i n 2 e inϕ e −2iϕ I 2,n−2 − e 2iϕ I 2,n+2 , E z = i n e inϕ e −iϕ I 1,n−1 − e iϕ I 1,n+1 , H x = i n 2 e inϕ e −2iϕ I 2,n−2 − e 2iϕ I 2,n+2 , H y = i n−1 2 e inϕ 2I 0,n − e 2iϕ I 2,n+2 − e −2iϕ I 2,n−2 , H z = i n+1 e inϕ e −iϕ I 1,n−1 + e iϕ I 1,n+1 .
(2) Formula (2) includes functions I ν,µ that depend only on the radial variable r: where k = 2π/λ is the wavenumber of monochromatic light with a wavelength of λ; f is the focal length of the lens; α is the maximum angle of the rays' inclination to the optical axis, which determines the numerical aperture (NA) of the aplanatic lens NA = sin α; and J µ (krsinθ) is the µth order Bessel function of the first kind. In Equation (2) and everywhere below, the indices ν and µ can take the following values: ν = 0, 1, 2; µ = n − 2, n − 1, n, n + 1, n + 2. A(θ) is a real function that determines the radially symmetric initial field amplitude, which depends on the inclination angle θ of the beam emanating from a point on the initial spherical front and converging to the center of the focus plane. The description of the light field at the focus using (3) was obtained for the first time in the classic study by Richards and Wolf [26]. Next, we found the components of the Poynting vector: where E and H are vectors of the electric and the magnetic fields, the signs "*" and "×" mean complex conjugation and vector product, Re is the real part of a complex number, and c is the speed of light in a vacuum. In the following, we omitted the constant c/(2π). Substituting (2) into (4), we obtained the following at the focus of the field in polar coordinates (1): It follows from (5) that the transverse energy flux at the focus of the field (1) rotates counterclockwise if Q(r) > 0 and clockwise if Q(r) < 0. The longitudinal component of the energy flow at different radii r can be positive or negative. It can be shown that the total energy of each term in P z at the focus is equal to the expression: Equation (6) was obtained using Equation (3) and the orthogonality of the Bessel functions: It can be seen from (6) that the energy (or power) does not depend on the order of the Bessel function µ. Applying Formula (6) to the axial energy flow crossing the focus (5), we obtainP In (6), W is the total power of the laser beam. It can be shown that the power W 0 is approximately seven times greater than the power W 2 (it is exactly seven times greater for α = π/2 and for |A(θ)| ≡ 1). Therefore, the total flow (7) is always positive, although the energy flux density (5) at different radii r can be both positive and negative (reverse energy flux [28]). Equation (7) shows that not all of the power W crosses the focus plane from left to right (in the positive direction of the z-axis). The part of the power 2W 1 propagates in the direction perpendicular to the optical axis and does not cross the focus plane. Part of the power W 2 crosses the focus plane in the opposite direction and another part of the power W 0 flows along the positive direction of the z-axis. It is interesting that the power ratio (7) does not depend on the TC of the beam (1).
The Longitudinal Component of the SAM Vector at the Focus
Next, we found the axial SAM component, which shows the presence of light with elliptical and circular polarization in the focus. The longitudinal SAM component is defined as follows [29]: where Im is the imaginary part of a complex number. Substituting (2) into (8), we obtained the axial SAM component at the focus for the field (1): It can be seen from (9) that if the first factor is not equal to zero, then there are four regions in the focus plane where the SAM sign is different. The centers of these areas lie on the Cartesian axes: two regions centered on the vertical axis and two regions centered on the horizontal axis. If I 2,n+2 − I 2,n−2 > 0, then at ϕ = 0 and ϕ = π, the second factor is positive and S z > 0, while at ϕ = π/2 and ϕ = 3π/4, the second factor in (9) is negative and S z < 0. If, conversely, I 2,n+2 − I 2,n−2 < 0, then the SAM is positive on the vertical axis and negative on the horizontal axis. The first factor is equal to zero only in the absence of an optical vortex (n = 0). Thus, it follows from (9) that the spin Hall effect takes place for the field (1) at the focus at n = 0. It leads to the separation of the vectors with left and right elliptical polarizations (with different spins) from each other and their localization in four regions in pairs on the vertical and horizontal axes. Since the axial SAM component in the initial plane (1) is equal to zero (due to linear polarization), then the total spin at the focus must be equal to zero. Indeed, when we integrated the SAM in (9), we obtained Integration over the entire focus plane of the first and second terms gives the difference between two identical energies (6). The third term depending on cos(2ϕ) is zero when integrated over an integer number of the angle ϕ periods. Since the total spin at the focus is zero, regions with different spins must appear in pairs to cancel each other out.
It can be seen from (13) that at ϕ = 0 and ϕ = π, there are regions with L z > 0 in focus, and at ϕ = π/2 and ϕ = 3π/2, there are regions with L z < 0. That is, there is a spatial separation of the OAM with different signs at the focus of the field (1). Moreover, the location in the focal plane of these four regions with centers on the horizontal and vertical axes correlates with areas of elliptical polarization with different signs (9). It should be noted that in the initial plane (1), the OAM axial component (11) is equal to L z = nW, where W is the total beam power. When we integrated (12) over the entire focal plane, we found that the terms containing cos(2ϕ) disappeared since the integration over the angle was performed over an integer number of periods. Integration of other terms led to the following expression: The last equality in (14) follows from the power balance of the entire beam and its components at the focus. The balance can be obtained by integrating the intensity distribution over the entire beam cross-section. The intensity distribution at the focus follows from (4) and is equal to I = 1 2 2I 2 0,n + I 2 2,n+2 + I 2 2,n−2 + 2I 2 1,n+1 + 2I 2 1,n−1 + 2 cos(2ϕ)(I 0,n I 2,n+2 + I 0,n I 2,n−2 − 2I 1,n+1 I 1,n−1 )].
We integrated expression (15) for the intensity over the entire beam cross-section at the focus and obtained W = ∞ 0 2π 0 I(r, ϕ)rdrdϕ = 1 2 ∞ 0 2π 0 rdrdϕ 2I 2 0,n + I 2 2,n+2 + I 2 2,n−2 + 2I 2 1,n+1 + 2I 2 1,n−1 + 2 cos(2ϕ)(I 0,n I 2,n+2 + I 0,n I 2,n−2 − 2I 1,n+1 I 1,n−1 )] To obtain (16), Equation (6) and the fact that the integration of the term with cos(2ϕ) over the period gives zero were used. It can be seen from (16) that the total beam power is equal to Equation (17) was used in the last step of (14). Thus, we showed that the longitudinal OAM component averaged over the beam cross-section was preserved for the field in (1). Preservation of the full OAM during propagation of the beam (1) was the reason for the formation of an even number of regions in the focus, in which the OAM component had a different sign (the orbital Hall effect).
The Longitudinal Component of the AM Vector at the Focus
Next, we compared the longitudinal components of the AM and of the sum of SAM and OAM. The AM is given by the equation [30]: The longitudinal AM component is determined only by the angular component of the energy flux at the focus (5) and is equal to J z = rQ(r) = r[I 1,n+1 (I 0,n + I 2,n+2 ) + I 1,n−1 (I 0,n + I 2,n−2 )]. (19) From (19), it can be seen that the AM longitudinal component on the optical axis is always equal to zero since the "leverage" is equal to zero. We compared expression (19) with the sum of SAM (9) and OAM (12) for the field (1) in the focus: S z + L z = 1 2 {2nI 2 0,n + 2(n + 1)I 2 1,n+1 + 2(n − 1)I 2 1,n−1 +(n + 1)I 2 2,n+2 + (n − 1)I 2 2,n−2 + 2 cos(2ϕ) ×[(4n + 3)I 0,n I 2,n+2 + (4n − 3)I 0,n I 2,n−2 − 2nI 1,n+1 I 1,n−1 ]}. (20) A comparison of (19) and (20) shows that the AM is not equal to the sum of the SAM and OAM. For example, the angular momentum (19) is radially symmetric and does not depend on the angle ϕ, while the sum of SAM and OAM (20) depends on the azimuth angle as cos(2ϕ). Therefore, there must be a third term X z , which must be added to the sum (20) in order for the equality to hold: Several questions arise from this information. What is transferred to the particle and causes it to rotate along a circular path: AM (19) or OAM (12)? Furthermore, what should be called the orbital Hall effect: separation of regions with different OAM signs (12) or AM signs (19)? Most likely, the orbital Hall effect is determined by the different directions of the transverse energy flow (5) since the transverse flow "catches" the microparticle and forces it to rotate along the "orbit" [31]. Therefore, the AM, which is proportional to the transverse energy flux Q(r), is responsible for the rotation of the particle along a circular trajectory.
Physical Meaning of the Third Term in the Equation for the AM
In this section, we show that the terms SAM and OAM in (21) were formed artificially and that only two characteristics are sufficient for the light field. These characteristics are SAM and AM, which are not related to each other. We start with the definition of the AM (18) and explicitly write out the quantities included in it: In (22), all dimensional constants are omitted. Further, for definiteness, we consider obtaining the longitudinal component of the SAM vector in Cartesian coordinates. From (22) we obtained Let us write in a general form the expression for the OAM longitudinal component (11), but using Cartesian coordinates: Comparing (23) and (24), there are four terms at x and y in (23) and three terms at x and y in (24). Therefore, in order to form a separate term in (23) as in (24), we added and subtracted two terms in (23). Then, instead of (23), we obtained The added terms in (25) are marked with angular parentheses. They do not change the value of the expression (23). Now, in (25), we grouped the terms in order to explicitly separate the term equal to L z (24): Next, we added and subtracted the SAM longitudinal component (8) in (26) and obtained Thus, from (23), we obtained (21). In (27), the difference between the two terms in small triangular brackets is equal to the SAM with the opposite sign. That is, OAM and SAM in the expression for AM were artificially formed by adding and subtracting additional terms. As a result, the third term X z appeared, which has no meaning in the general case. However, in some cases, a certain meaning can be attributed to it. For instance, if L z + S z = 0, then the term X z is equal to the angular momentum of the light field (X z = J z ). The conclusion from this subsection is presented below. The orbital Hall effect occurs at the focus when the regions with the AM longitudinal component of different signs are separated, that is, the regions appear with a different direction of the transverse energy flow rotation. The spin Hall effect occurs at the focus when the regions with the SAM longitudinal component of different signs are separated from each other, that is, the regions in which the polarization vector rotates in different directions are separated. Figure 1 shows the distributions of the intensity, as well as the densities of the SAM, the OAM, and the AM of the beam (1) in a tight focus at n = 1 (Figure 1a-d), n = 3 (Figure 1e-h), and n = 5 (Figure 1i-l). Figure 1 confirmed Formula (9), according to which the maximum and minimum values of the SAM density were achieved on the Cartesian axes. Figure 1 also confirmed Formulas (9) and (13), according to which the OAM density was symmetric with respect to the Cartesian axes, and the AM density had radial symmetry. It follows from Figure 1 that the spin Hall effect occurred at the focus (Figure 1b,f,j), when four local regions with positive and negative (approximately equal in absolute value) SAM were formed at different radii in the focal plane. The orbital Hall effect also took place at the focus (Figure 1d,h,l). However, first, it was radial, and, second, it was weakly expressed since the positive AM distributed over a ring of one radius was much larger by a modulus of a negative AM distributed over a ring of another radius. In Figure 1d,h,l, the blue ring with the negative AM was not visible, but the value of the negative AM is shown on the horizontal color scale. Figure 2 illustrates the dependences of the total intensity (power) and the total longitudinal power flux on the distance to the focus. Figure 1. Distributions of the intensity (a,e,i), the SAM density (b,f,j), the OAM density (c,g,k), and the AM density (d,h,l) of the beam (1) in a tight focus at n = 1 (a-d), n = 3 (e-h), and n = 5 (i-l) with the following calculation parameters: wavelength λ = 532 nm, focal length f = 10 μm, numerical aperture NA = 0.95, and size of the computational domain = 4 × 4 μm 2 . The scale mark in all figures means 1 μm. The numbers on the color scales below each figure indicate the minimum and maximum values. Figure 2 illustrates the dependences of the total intensity (power) and the total longitudinal power flux on the distance to the focus. The upper curve is the total intensity(power) and the lower curve is the total longitudinal power flow. The calculation parameters were the same as in Figure 1. The graphs show the curves for n = 1 and n = 3, but they almost coincide with each other. It was assumed that the distribution of the focused light was uniform (|A(θ)| ≡ 1). In this case, the total energy was 2πf 2 ≈ 628 µm 2 . Numerically obtained values were approximately equal to Idxdy ≈ 600 µm 2 . The theoretical value of the total longitudinal power flow was πf 2 ≈ 314 µm 2 . Numerically obtained values were approximately equal to P z dxdy ≈ 310 µm 2 . Figure 2 confirmed Formula (17), according to which the total light energy (power) should be equal to 2πf 2 (at α = π/2 and |A(θ)| ≡ 1), and Formula (7), according to which the total longitudinal power flow should be equal to πf 2 . That is, the representation of the longitudinal power flow through the forward flow, the perpendicular flow, and the reverse flow was also confirmed.
Discussion of Results
In this study, we showed that the longitudinal SAM and OAM components averaged over the focus plane for an initial optical vortex (with arbitrary radially symmetric real amplitude) with linear polarization are preserved separately. However, this is not true in all cases. For example, if we considered the tight focusing of an optical vortex with circular polarization [20], then the averaged axial SAM and OAM components were not conserved. Instead, only their sum was conserved. Indeed, the densities of the longitudinal components of the SAM and OAM vectors at the focus of an optical vortex with right-hand circular polarization have the form, respectively: L z = nI 2 0,n + (n + 2)I 2 2,n+2 + 2(n + 1)I 2 1,n+1 .
We integrated both of these quantities (29) and (30) dϕ nI 2 0,n + (n + 2)I 2 2,n+2 + 2(n + 1)I 2 1,n+1 Moreover, the sum of (31) and (32) is equal tô We obtained the following expressions for the SAM and the OAM in the initial plane: It can be seen from (33) that in the initial field for an individual photon, the sum of the spin and the OAM for the beam (28) was equal to S z + L z = (n + 1) , while for the entire beam, it wasŜ z +L z = (n + 1)W. During focusing, the total spin of the beam (28) decreased in the focal plane, while the total OAM increased in it: This effect is called SOC [4]. Therefore, if the initial field (1) has no spin (no SAM), then there is no SOC in the focus and the total spin is zero (10):Ŝ z = 0. However, the spin Hall effect (9) can be formed at the focus. The OAM for the field (1) is also preserved (14) and is equal toL z = nW, and there is an orbital Hall effect ( (12) and (13)) at the focus. If there is an SAM (29) in the initial field (28), then due to the SOC, it is not kept in focus, but decreases, according to (35), partially converting into OAM. The beam (28) also has spin and orbital Hall effects [20]. However, both of these effects are radial, i.e., the sign of the SAM and the OAM is different at different radii from the optical axis.
Conclusions
In this study, the following results were obtained. It was shown that during tight focusing of an optical vortex with an arbitrary radially symmetric amplitude function and with linear polarization, the distribution of the SAM axial component (9) in the focal plane depends on the azimuthal angle ϕ according to cos(2ϕ), and therefore, for a TC n = 0, the spin Hall effect takes place at the focus. This effect leads to the formation of two regions on the vertical and horizontal axes in which the polarization vector rotates in different directions (clockwise and counterclockwise) and SAM has different signs. Similarly, it was derived that the OAM axial component (12) depends on the azimuth angle ϕ according to cos(2ϕ) at the focus. However, we cannot call these four regions with different signs of the OAM longitudinal component a manifestation of the orbital Hall effect since we do not know how the transverse energy flow behaves in these regions (changes the direction of rotation or not). It was also demonstrated that the transverse energy flux rotates in the plane of focus in opposite directions at different radii from the optical axis (5). Such a distribution of the transverse energy flux at the focus can be called the radial-orbital Hall effect since the energy flux will rotate dielectric microparticles trapped at different radii at the focus clockwise or counterclockwise (the angular tractor [31]). Funding: This research was funded by Russian Science Foundation, grant number 22-22-00265 (in the part of theory). This work was performed within the State Assignment of FSRC "Crystallog-raphy and Photonics" RAS (in part of modelling).
Conflicts of Interest:
The authors declare no conflict of interest. | 6,791.4 | 2023-03-31T00:00:00.000 | [
"Physics"
] |
CONTAGION ACROSS REAL ESTATE AND EQUITY MARKETS DURING EUROPEAN SOVEREIGN DEBT CRISIS
2012 ABSTRACT. Standard methods of testing contagion may not work well if the data set is not normally distributed. To cope with this problem, Hatemi-J and Hacker (2005) proposed a new case-resampling bootstrap method to test contagion. In this paper, we extend this method to test the parameters in the Forbes-Rigobon multivariate (FRM) test. The new method has the advantage that the bivariate model is extended to a multivariate framework which jointly models and tests all combinations of contagious linkages. We apply our method to investigate contagion across equity and real estate markets of four countries: Greece, U.K., U.S. and Hong Kong, during the European sovereign debt crisis, and compare the result with that by performing the FRM test directly. Two important results are found. Firstly, both tests we use give similar p-values of the coefficients which indicate the significance of contagion. Secondly, for both tests, the contagion pattern in the equity and real estate markets are different. Our study has an implication to investors that they should regularly review their portfolio and be aware of contagion triggered by a crisis. This would help them reduce their loss and is useful in strategic property
INTRODUCTION
Most investors are risk-averse, i.e. they seek for opportunities to reduce the risk of their investment. A well-known method of reducing risk is diversification. Therefore, investors are always advised to invest in different types of asset in different countries in order to diversify their risk. However, during a financial crisis, correlation of a type of asset market between two countries usually increases. They often move down together due to a worsening environment. Even correlation between different types of asset markets may increase, too. As a result, the opportunity of diversification is reduced. This phenomenon is called contagion. The World Bank Group (2011) gives three definitions of contagion: Broad definition: contagion is the crosscountry transmission of shocks or the general cross-country spillover effects.
Restrictive definition: contagion is the transmission of shocks to other countries or the cross-country correlation, beyond any fundamental link among the countries and beyond common shocks.
Very restrictive definition: contagion occurs when cross-country correlations increase during "crisis times" relative to correlations during "tranquil times".
Most of the literatures adopted the very restrictive definition of contagion given by the World Bank. Our paper also adopts the World Bank's very restrictive definition of contagion. There are also other definitions of contagion. For example, Pericoli and Sbracia (2003) stated five definitions of contagion which are adopted in some literatures.
As mentioned, contagion usually occurs during "crisis times", when shocks transmit from a country to others, causing co-movements (usually downward) of asset prices. There are a number of crises triggering shocks around the world. The most typical one is the Great Depression in the 30's, causing the deepest global recession ever. Other crises include the oil crisis in the 70's, the 1987 U.S. stock market crash, the 1994 Mexico Peso crisis, the 1997-1998 East Asian crisis, the 2008 global financial tsunami and the current European sovereign debt crisis. In particular, the financial tsunami in 2008, when nearly all risky assets fell together at the same time, was the worst global financial crisis since the Great Depression. Many investors lost a lot on their investment. If they knew that there was a sign of contagion (e.g. when the correlation between two assets suddenly increases sharply, which is the simplest indication of contagion), they might be able to reallocate their investment to minimize their loss. Furthermore, the contagion patterns in the past crises can act as reference for examining contagion patterns in future crises. Therefore, it is important to study contagion. Due to this importance, the topic of contagion has attracted a number of people to study on in the past.
Our work adds value on existing literature on three ways. Firstly, previous literature on contagion mostly studied equity markets (e.g. Forbes and Rigobon, 2002). However, there was an increasing concern on contagion across real estate markets in recent years. According to Hudson- Wilson et al. (2003), real estate can reduce overall portfolio risk, achieve high absolute returns and hedge unexpected inflation or deflation. Hence there are additional motivations for investors to include real estate in their portfolio. Therefore, it is increasingly important to study contagion across real estate markets. Moreover, as mentioned by Hatemi-J and Roca (2010), the recent globalization and internationalization of real estate markets lead to increasing integration, which is expected to cause more co-movements of prices among global real estate markets. However, according to Hui and Zheng (2012b), real estate is a special commodity which can act not only as consumption goods, but also as in investment tool. Due to this special feature of real estate, the contagion pattern of real estate markets may not be the same as those of other asset markets, so it is worth studying the contagion among global real estate markets. Furthermore, the limited number of previous works on this topic has led to mixed results (see Section 2). This paper contributes to the limited research on whether there is contagion between real estate markets.
Secondly, most of the previous work studied the Asian financial crisis in 1997 (e.g. Bond et al., 2006;Wilson and Zurbruegg, 2004). There are also a few articles about contagion across asset markets during the global financial tsunami in 2008, like Fry et al. (2010) and Dungey (2009). However, the European sovereign debt crisis happened just recently, so there are still no publications on contagion across asset markets during the crisis. This crisis involves emerging markets in Europe which previous studies seldom worked on, so we do not know much about the contagion patterns among these countries. This is the main motivation of our research. This paper fills in the missing gap that contagion in real estate markets during European sovereign debt crisis was insufficiently addressed in previous works.
Furthermore, the majority of methodologies used in previous literature are based on correlation. For example, Forbes and Rigobon (2002) used the ordinary correlation coefficient to derive the adjusted correlation coefficient, and hence constructed the Forbes-Rigobon test for contagion. Many other tests, such as the Chow test (Dungey, 2005a), the coskewness test (Fry et al., 2010) and the cokurtosis test (Hui and Chan, 2012), are extensions of the Forbes-Rigobon test. However, as Hatemi-J and Roca (2010) pointed out, the above standard methods may not work well on data which do not satisfy the conditions of normality and constant variance. To cope with this problem, Hatemi-J and Hacker (2005) proposed a caseresampling bootstrap method to test contagion, and applied this method to test for contagion from Thai to Indonesian equity markets during the Asian financial crisis. Their approach is applied by Hatemi-J and Roca (2010) to test contagion across real estate markets of different countries during the U.S. subprime crisis. As previous studies applied this method on a bivariate model only, we extend their bivariate approach to a multivariate framework. This multivariate framework jointly models and tests all combinations of contagious linkages. Thus it has an advantage that all directions of contagion can be tested, so we get a more complete picture of the contagion pattern.
In this paper, we investigate contagion across equity and real estate markets of four countries: Greece, U.S., U.K., and Hong Kong, during the European sovereign debt crisis. We use the FRM test and the case-resampling bootstrap method, and compare the results of the two different tests. We extend the caseresampling bootstrap method to our multivariate framework so that all contagious linkages can be tested. We highlight the importance of the case-resampling bootstrap method that it performs well under non-normality and heteroscedasticity, which standard methods do not work well in these cases.
The paper proceeds as follows. Section 2 reviews previous works on contagion across real estate markets. In Section 3, we describe the tests of contagion we use. In Section 4, we explained how the crisis period and the indices are selected. Section 5 gives the results of the tests and an analysis of the results. We draw up conclusion and provide a further discussion of the topic in Section 6.
LITERATURE REVIEW
This section gives a review of previous studies on contagion across real estate markets (or contagion between real estate and equity markets). The following is a list of part of such studies. Bond et al. (2006) investigated the contagion across real estate markets during the 1997-98 East Asian crisis using the latent factor model, and found that contagion among the markets existed. On the contrary, Fry et al. (2010), who tested the existence of contagion across global real estate markets during the East Asian crisis and the U.S. subprime crisis using higher order moments, found no significant evidence of contagion. Using the Forbes-Rigobon test, Wilson and Zurbruegg (2004) examined contagion from the Thai real estate market to other East Asian real estate markets during the 1997 Asian crisis. They found only little evidence of contagion. Wilson et al. (2007) applied the method of structural time series to measure spillover effects across Asian property markets during the Asian financial crisis in 1997. They found a broad level of interdependence that transcended the Asian financial crisis. Yunus and Swanson (2007) applied a number of tests to examine long-run relationships and short-run causal linkages among the public property markets of the Asia-Pacific region and the U.S. over from January 2000 to March 2006. In the short run, there were no significant lead-lag relationships between the property markets of the U.S. and the Asia-Pacific region. While in the long run, from the perspective of U.S. investors, Hong Kong and Japan's markets provided greater diversification benefits. Hence U.S. real estate investors could benefit from diversification both in the short and long run. Liow (2008) investigated the changes in long-run relationship and short-term linkage among the US, UK and eight Asian securitized real estate markets before, during, and after the 1997-1998 Asian financial crisis as well as in the most recent period. He found a stronger interdependence in Asian securitized real estate markets since the Asian financial crisis, both in the long run and in the short run. Furthermore, this interdependence seemed to be on a rising trend recently. Yunus (2009) examined the degree of interdependence among the securitized property markets of six major countries and the U.S. He found that over a period from January 1990 to August 2007, the property markets of Australia, Hong Kong, Japan, the United Kingdom and the U.S. were tied together. Ryan (2011) implemented a specific class of Vector Autoregression (VAR) models to examine the level of integration between international listed property markets during the Asian financial crisis and the current global credit crisis. The result showed that diversification benefits evaporated during the crisis in both hedged and un-hedged cases as a result of cointegration between the markets. Hui and Chan (2012), who used the coskewness and cokurtosis tests to examine contagion between U.S., U.K., China and Hong Kong during the financial tsunami in 2008, found significance evidence between those countries. In particular, the greatest significance of contagion was found between China and Hong Kong, and between U.S. and U.K. Hui and Zheng (2012a) investigated the dynamic conditional correlations (DCCs) between housing returns and retail property returns, and the existence of volatility spillover between the two property markets of Hong Kong. From the findings, they suggested that Hong Kong's retail property market was generally more volatile than the residential market. They also found a unilateral volatility spillover from residential property to retail property in the Hong Kong market. Serrano and Hoesli (2012) used fractional cointegration analysis to examine the existence of long-run relations between securitized real estate returns and three sets of variables frequently used in the literature as the factors driving securitized real estate returns. They found strong evidence of fractional cointegration between securitized real estate and the three sets of variables.
Some people did research on integration/interdependence between real estate and equity markets. For example, Okunev and Wilson (1997) tested whether or not there existed a relationship of co-integration between the REIT and the S&P 500 indices. The results indicated that the real estate and stock markets were fractionally intergrated. Okunev et al. (2000) conducted both linear and nonlinear causality tests on the US real estate and the S&P 500 Index and concluded that there exists unidirectional relationship from real estate to stock market when using the linear test, but there is a strong unidirectional relationship from the stock market to the real estate market when using the nonlinear test. Knight et al. (2005) constructed models of asymmetric dependence using the copula function to examine the relationship between securitized real estate and equity markets. They found that for both U.K. and global markets, the securitized real estate and equity markets exhibited strong tail dependence -particularly in the negative tail, suggesting that real estate securities offer, at best, limited diversification protection when other asset markets were falling. Zhou (2010) applied the wavelet analysis to examine the comovement among international securitized real estate markets and the cross-market comovement between the stock and securitized real estate markets. Using data from 17 different countries over 14 years, Quan and Titman (1999) found a significant positive relation between stock returns and changes in commercial real estate values. Some studies found a long-term positive correlation between real estate and stock prices. Tse (2001) studied the impact of property prices on stock prices in Hong Kong from 1974 to 1998, and found that the property and stock prices are cointegrated. Liow (2006) also found long-term positive correlations between real estate and stock prices in general. Similar results were found by Hui et al. (2011), who examined the relationship between real estate and stock markets in U.K. and Hong Kong by the method of data mining. They found not only a positive correlation, but also a co-movement, between the two markets. Case et al. (2012) used the Dynamic Conditional Correlation model with Generalized Autoregressive Conditional Heteroskedasticity (DCC-GARCH) to examine dynamics in the correlation of returns between publicly traded REITs and non-REIT stocks. They found that REIT-stock correlations formed three distinct periods. Liow (2012) investigated comovements and correlations across eight Asian securitized real estate markets over 1995-2009, and found that real estate-global stock correlations co-moved significantly and positively with real estate-regional stock correlations and real estate-local stock correlations. The above research studies are evidences that the cointegration between equity and real estate markets is stronger than before.
The above summarizes previous literatures on contagion across real estate markets. In the next section, we describe two of the contagion tests: the Forbes-Rigobon multivariate (FRM) test (also called the multivariate version of Chow test) and the case-resampling bootstrap method. Dungey et al. (2005a) proposed the Chow test of contagion. One of its advantages is that it provides a natural extension of the bivariate approach to a multivariate framework that jointly models and tests all combinations of contagious linkages. The main idea is to use linear regression. For example, to test contagion from country i to country j, the equation of regression is
The Chow test and its multivariate version
where: represents the ( ) 1 x y T T + × scaled pooled data set by stacking the pre-crisis and crisis scaled data; , x i σ denotes the standard deviation of the asset price of country i in the pre-crisis period; T x and T y denote the sample sizes of the pre-crisis and crisis periods respectively; and d t is a dummy variable defined by: and t ε is an error term. The null hypothesis is against the alternative hypothesis of Hence, to test the null hypothesis of no contagion, we can perform a one-sided t-test to 3 3 γ σ , where 3 γ is the ordinary least square (OLS) estimator of 3 γ , and 3 σ is the standard error of 3 γ . To study contagion, usually three or more countries are involved. For the sake of convenience, there is a multivariate version of the Chow test, which is also called the Forbes-Rigobon multivariate (FRM) test. For example, for three countries, the set of equations is (Dungey et al., 2005b , , , where: z i are defined by (2) (6), it is not strictly necessary to standardize the data z i,t . 2. We follow Forbes and Rigobon (2002)'s assumption of no endogeneity between markets.
The case-resampling bootstrap method
The Chow test described in the previous section has a disadvantage that if the data set is not normally distributed, or its variance is not constant, then the result may not be accurate. To cope with this problem, Hatemi-J and Hacker (2005) developed an alternative test of contagion using the case-resampling bootstrap method. One advantage of this method is that it performs accurately when the assumption of normality and constant variance is not fulfilled (Hatemi-J and Roca, 2010). Hatemi-J and Hacker (2005) also presented simulations on how well the case-resampling bootstrap method worked, and suggested that it worked better than the OLS method. Hatemi-J and Hacker (2005), Hatemi-J and Roca (2010) applied this method to estimate the coefficient γ 3 in the single equation (1) (their equation is different from ours that z i,t is not divided by σ x,i ) and use a two-sided test to test its significance. For details of proof of this method, please refer to Hatemi-J and Hacker (2005).
The case-resampling bootstrap method can be applied to estimate the coefficients in the set of equations (6) of the FRM model. In this paper, we extend Hatemi-J and Hacker (2005)'s approach to a multivariate framework that jointly models and tests all combinations of contagious linkages. We use a one-sided test to test the significance of contagion. For each of the equations in (6), we undergo the following procedure (the steps are similar to those of Hatemi-J and Roca (2010) γ which are negative in the numerator since we are conducting a one-sided test. In this paper, we set N = 500.
DATA SOURCE
In the previous section, we described two tests of contagion: the Chow test and the case-resampling bootstrap method. We apply these two tests to test contagion across equity and real estate markets of four countries: Greece, U.K., U.S. and Hong Kong, during the European Before conducting the tests, we have to select the data for the tests first. We select the following equity and real estate indices for the four countries (see Table 1).
All indices are daily equity price indices obtained from Datastream. The equity indices are the major stock indices of each of the four countries which cover most of the largest listed companies of the countries. The real estate indices are, in fact, real estate stock indices (or securitized real estate indices) compiled by Datastream. The returns are computed as the difference of the natural logarithms of daily price indices.
The whole period of observation is set to be from July 1, 2009 to June 30, 2010, a total of 261 observations. We divide the whole timeline into pre-crisis and crisis periods as follows: -Pre-crisis period: July 1, 2009 to April 6, 2010 (200 observations). -Crisis period: April 7, 2010 to June 30, 2010 (61 observations). We choose April 7 as the start of the crisis period as the indices began to fall more sharply than before since that day, indicating worsening of the crisis. Now we can apply the two tests described in Section 3 to the data selected over the period of observation. The test results are shown in the next section.
Preliminary statistics
We obtain the daily equity and real indices of the four countries over the period of observation, and calculate the continuous compounded daily return of the indices. We first calculate the mean and standard deviation of the returns of the indices throughout the whole period, and in the two separate periods specified in Section 4, as shown in the Table 2. From the Table 2, we can see that for both equity and real estate markets, the average returns of the indices of all four countries are lower in the crisis period than in the pre-crisis period. The standard deviations of the index returns of most countries increase, except for Hong Kong that both the equity and the real estate markets become slightly less volatile in the crisis period.
Next, we use the Anderson-Darling test to test for normality of the data. The result of the normality test is shown in the Table 3.
From the Table 3, we can see that for both equity and real estate markets, the p-values of the normality test of most countries are very small (except for Hong Kong), showing that the null hypothesis of normality is strongly rejected. Hence the standard approaches to test for contagion, like the Chow test, may not work well in our case.
Results of the contagion tests and analysis
Here we apply the FRM test and the case-resampling bootstrap method to the data obtained. Tables 4, 5, 6 and 7 show the results of the tests. From the tables, both the FRM test and the case-resampling bootstrap method show that the patterns of contagion in the equity and real estate markets are different. From Table 4, the FRM test shows that for the equity market, at 5% significance level, there is significant evidence of contagion between Greece and U.S., and between U.K. and Hong Kong, in both directions. However, for the real estate market, at 5% significance level, significant evidence of contagion is found only between U.K. and Hong Kong, in both directions. All other contagious linkages are insignificant at 5% significance level, as seen from Table 5. From Tables 6 and 7, we can find similar results shown by the case-resampling bootstrap method. The only difference is that the case resampling bootstrap method shows significance evidence of contagion in the real estate market from U.K. to Greece, but the FRM test shows that contagion in this direction is insignificant. To compare the result of the two tests, we compare Tables 4 and 6 for the equity market, and compare Tables 5 and 7 for the real estate market. From the above tables, we can see that for both equity and real estate markets, the estimations , In comparison, Hatemi-J and Hacker (2005)'s result showed that the OLS and the case-resampling bootstrap method gave similar estimated values of the coefficients, but the p-values based on the case-resampling bootstrap method was much smaller, i.e. there was a much more significant evidence of contagion. On the contrary, the result given by Hatemi-J and Roca (2010) showed no significant evidence of contagion. The discrepancies between the results are due to the fact that the observations are selected randomly from the given sample in the case-resampling bootstrap method. Comparing with Hatemi-J and Hacker (2005), Hatemi-J and Roca (2010)'s method, our method has an advantage that we test all combinations of contagious linkages, but Hatemi-J and Hacker (2005), Hatemi-J and Roca (2010) assumed a fixed source of contagion.
CONCLUSION
In this paper, we use the FRM test and the case-resampling bootstrap method to investigate contagion across equity and real estate markets of four countries during the European sovereign debt crisis. The results are given in Section 5. From the results, we can see that the overall effect of contagion is not so great. For equity markets, the major pattern of contagion is between Greece and U.S. (in both directions), and between U.K. and Hong Kong (in both directions). For real estate markets, the major pattern of contagion is between U.K. and Hong Kong (in both directions), and from U.K. to Greece (for the case-resampling bootstrap method only). Our results have the following practical implication to investors. If there is significant evidence of contagion from a type of asset price of one country to that type of asset price of another country (e.g. Hong Kong ↔ U.K. for both equity and real estate markets in our case), holding that type of asset of both countries together would suffer because they would tend to move together in the same direction. Therefore, investors and portfolio managers should constantly review their portfolio to avoid loss caused by contagion. This also applies to real estate investment and will lead to better strategic property management.
One observation which can be seen from the result is that the patterns of contagion in the equity and real estate markets are different. Most of the previous work of contagion focused on equity markets. There aren't so many previous studies on contagion across real estate markets since it is difficult to find a reliable, accurate and high-frequency real estate index. Therefore, mixed results occurred (refer to Section 2), and the contagion pattern of the real estate markets is not fully explored. Our result shows that the real estate markets have a different contagion pattern from the equity markets. There are two main reasons for this.
Firstly, the real estate market is a special type of asset market which behaves differently from other types of asset markets such as the equity market. As mentioned in introduction, real estate can serve as both consumption goods and an investment tool. This special feature of real estate makes it behave differently from other types of assets such as equity. Secondly, the government can take a more active role in the real estate market than in the equity market. The government can control the supply of land and housing by monitoring the number of land auctions and building public housing estates. Thus the real estate market is more easily affected by government manipulation and hence may behave in a way different from the equity market.
Another finding is that the FRM test and the case-resampling bootstrap method give similar results. However, this does not mean that the case-resampling bootstrap method has no use at all. As mentioned, traditional methods like the Chow test do not work well when the data is not normally distributed or has non-constant variance. Our result shows that the null hypothesis of normality is strongly rejected. Thus alternative methods have to be used. The case-resampling bootstrap method has the advantage that it performs accurately under non-normality and heteroscedasticity, so it can provide clear and reasonable results.
Both of the tests we use fail to show the expected result that Greece is the main source of contagion. There are a number of reasons for this. Firstly, the European sovereign debt crisis is a new crisis which is still ongoing. Up till now, there are still no publications studying contagion patterns during this crisis. Secondly, countries like Greece are emerging markets which previous work seldom studied on. They may not behave in the same way as the markets of the developed countries do. Furthermore, only a few of the previous studies used the case-resampling bootstrap method to test contagion. In particular, no one has ever used this method to investigate contagion during the European sovereign debt crisis. This paper gives a first insight of contagion across Greece and the major economies in the world during the European sovereign debt crisis. Hatemi-J and Hacker (2005) applied the case-resampling bootstrap method to the univariate linear regression model, and we extend their method to the multivariate model so that all combinations of contagious linkages can be tested. However, there are many kinds of models of contagion besides the linear regression model. The application of the case-resampling bootstrap method to other models would be a scope of future research. | 6,417.6 | 2013-09-23T00:00:00.000 | [
"Economics"
] |
The 10 parsec sample in the Gaia era
The nearest stars provide a fundamental constraint for our understanding of stellar physics and the Galaxy. The nearby sample serves as an anchor where all objects can be seen and understood with precise data. This work is triggered by the most recent data release of the astrometric space mission Gaia and uses its unprecedented high precision parallax measurements to review the census of objects within 10 pc. The first aim of this work was to compile all stars and brown dwarfs within 10 pc observable by Gaia, and compare it with the Gaia Catalogue of Nearby Stars as a quality assurance test. We complement the list to get a full 10 pc census, including bright stars, brown dwarfs, and exoplanets. We started our compilation from a query on all objects with a parallax larger than 100 mas using SIMBAD. We completed the census by adding companions, brown dwarfs with recent parallax measurements not in SIMBAD yet, and vetted exoplanets. The compilation combines astrometry and photometry from the recent Gaia Early Data Release 3 with literature magnitudes, spectral types and line-of-sight velocities. We give a description of the astrophysical content of the 10 pc sample. We find a multiplicity frequency of around 28%. Among the stars and brown dwarfs, we estimate that around 61% are M stars and more than half of the M stars are within the range M3.0 V to M5.0 V. We give an overview of the brown dwarfs and exoplanets that should be detected in the next Gaia data releases along with future developments. We provide a catalogue of 540 stars, brown dwarfs, and exoplanets in 339 systems, within 10 pc from the Sun. This list is as volume-complete as possible from current knowledge and provides benchmark stars that can be used, for instance, to define calibration samples and to test the quality of the forthcoming Gaia releases. It also has a strong outreach potential.
Introduction
Determining the number of stars in the sky must have been in the minds of many people since the dawn of humanity.Ancient astronomers, such as Timocharis of Alexandria and Hipparchus of Nicaea, started to count and catalogue stars visible to the naked eye and built the first magnitude-limited catalogues.Modern astronomers prefer using volume-limited catalogues, with different maximum distance limits (e.g.Jenkins 1937;van Biesbroeck 1961;Reid et al. 2004;Gliese & Jahreiss 2015;Henry et al. 2018), because any magnitude-limited sample is biased against intrinsically faint (and single) objects (Malmquist 1925).A good example concerns the low-mass stars (M 0.5 M ).We now know that they constitute an important part of the objects in our Galaxy, while even the brightest of them (AX Mic) is invisible to the naked eye.Astronomers such as Max Wolf and Frank E. Ross catalogued stars with a large proper motion to try discovering faint, but nearby stars (Wolf 1917;Ross 1926).Willem J. Luyten produced many catalogues (e.g.Luyten 1979) with different cuts in proper motion and corresponding names (i.e.LFT for five-tenths of an arcsec limit, LTT for two-tenths, and LHS for half a second).
Ever since the first stellar parallaxes were measured (Bessel 1838;Henderson 1839; von Struve 1840, see Reid & Menten 2020 for a review), astronomers have tried to map out our nearest neighbours.Individual measurements have been followed by increasingly larger trigonometric parallax catalogues across the 20th century, providing fundamental data for volume-limited catalogues: 72 stars by Newcomb (1904), 1870 stars in the First General Catalogue of trigonometric parallaxes computed by Frank Schlesinger and edited by the Yale University Observatory in 1924, 6399 stars in the Yale Parallax Catalogue (Jenkins 1963), 7879 stars in the Fourth General Catalogue of trigonometric parallaxes (van Altena et al. 1995), etc.The end of the 20th century was marked by the first astrometric space mission, Hipparcos (HIgh Precision PARallax COllecting Satellite, Perryman et al. 1997) providing a catalogue of 117 955 relatively bright stars (V 12.4 mag).The second astrometric space mission, Gaia (Gaia Collaboration et al. 2016), provides another dramatic increase, both qualitatively and quantitatively, with all sky parallax measurements for about 1.5 billion objects.It offers the means to complete volume-limited samples with larger distance limits.The Gaia Catalogue of Nearby Stars (hereafter GCNS), based on the Gaia Early Data Release 3 (hereafter Gaia EDR3, Gaia Collaboration et al. 2021a), pushes the limit to 100 pc (Gaia Collaboration et al. 2021b, hereafter GSS21).
Our first motivation to compile the 10 pc sample was to use it as a quality assurance test of the GCNS and, therefore, to verify the Gaia EDR3 before its publication.Such information could be derived from the work of the REsearch Consortium On Nearby Stars (RECONS1 ), who have focused on the detection and characterisation of nearby star systems for several decades.They have published their results in a large series of papers.Part of them, as well as statistics, are listed in the RECONS webpage.Yet the compilation of a 10 pc catalogue from this resource is not straightforward.
According to RECONS, the 10 pc sample as of 12 April 2018 included 462 objects in 317 systems (Henry et al. 2018).The publication of the second Gaia data release (hereafter Gaia DR2, Gaia Collaboration et al. 2018) a few days later provided new, more precise parallaxes that moved some objects inside or outside of the 10 pc limit.It also provided individual parallaxes for components in systems.It resulted in 418 objects in 305 systems, with eight systems added by Gaia (Henry et al. 2019).
However, Gaia DR2 also contained a large number of spurious objects: A simple cut at a parallax ≥ 100 mas in Gaia DR2 returns 1722 objects.Using a random forest classifier to disentangle between good and bad astrometric solutions, GSS21 found that 15 sources, although classified as good from the classifier, lie closer than Proxima Centauri (see their Fig.12).On the contrary, with one more year of observations, better reduction and calibration procedures of the Gaia EDR3, a parallax ≥ 100 mas selection returns only 315 objects with a very high and improved precision, of which three had an obvious spurious solution and were rejected from the random forest classifier.The GCNS essentially offered a reasonably clean sample, with no new discoveries, but with higher precision astrometry and the first individual parallaxes for five objects in systems.
In the framework of the GCNS, the 10 pc compilation was not exhaustive but restricted to objects that should have been visible to Gaia, given its magnitude limits at the bright (G 2.5 mag) and faint (G 21 mag) ends.In the present work, we give a more complete census of the 10 pc sample using our knowledge of the nearby objects, including stars and their companions, brown dwarfs, and planets.For many of the objects, it also benefits from the exquisite parallaxes obtained from the last data release based on 34 months of operation of Gaia.This list will be used for further Gaia quality assurance.It includes all objects (i.e.planets and unresolved components) as separate entries as many of these will be detected in future Gaia releases.We also believe that it could be of general use to the community as it provides a complete list of benchmark and vetted objects and we are making it publicly available.For the foreseeable future, the 10 pc sphere is the only volume that it will be possible to find and characterise all objects.Finally, the 10 pc sample has significant outreach potential.
Following in the steps of Louise F. Jenkins, who published a list of 127 stars with their known companions and gathered the knowledge at that time on the neighbours whose distance is less than 10 pc from the Sun (Jenkins 1937), we give here the current snapshot of the nearby sample within 10 pc.In Sect.2, we describe the catalogue and how we constructed it.In Sect. 3 we explore the content of the catalogue and give a few statistics.Sect. 4 places the catalogue in the context of ongoing and future observational programmes that will impact the sample.Sect. 4 also illustrates the potential of this catalogue for outreach.Finally, conclusions are given in Sect. 5.
Catalogue compilation
We started our compilation using the Set of Identifications, Measurements, and Bibliography for Astronomical Data (SIM-BAD) database 2 (Wenger et al. 2000).This database provides information on astronomical objects of interest beyond the Solar System that have been studied and reported in scientific publications.We retrieved 378 stars and brown dwarfs with a parallax greater or equal than 100 mas through the SIMBAD table access protocol (TAP) service 3 with the following query, SELECT * FROM basic LEFT JOIN allfluxes on oid=oidref WHERE plx_value > = 100, which returns the information on the object type, its astrometry, its photometry in U, B, V, R, I, J, H, and K s bands, and, when available, its spectral type.To the SIMBAD list we added 21 cool brown dwarfs from recent parallax programmes, but they are not yet included in the database (Sect.3.3).
Since the query is based on the parallax, SIMBAD sometimes returns only one object while the literature papers refer to separate components (e.g.very close astrometric binaries and spectroscopic binaries), so we added these components as explained in Sect.3.1.We removed objects whose binarity has been refuted by Gaia parallaxes, from confusion with activity, or, on the contrary, confirmation by high-contrast imaging studies.We finally completed the list by adding confirmed exoplanets, starting from existing exoplanet databases and reviewing their status to add only confirmed discoveries (Sect.3.4).
This compilation resulted in 540 objects in 339 systems that are listed in Table A.1 4 .It contains 375 stars from F to early-L spectral type, including 20 white dwarfs (plus one candidate in a system).It also lists 85 brown dwarfs and 77 confirmed exoplanets.We also tabulated, numbered from 1001 and higher in the catalogue, two low-mass star systems, namely G 100-28 (GJ 1083) and Ross 440 (GJ 352), 13 ultra-cool T and Y brown dwarfs whose 1σ parallax uncertainties will allow them to be located within 10 pc, and the two components of a brown dwarf binary with a photometric parallax estimate larger than 100 mas.
The sample was constructed by setting a strict parallax limit of 100 mas.However, the parallax measurements carry uncertainties, and objects located within 3-sigma of this limit may not belong to the 10 pc sample when their measured parallax is larger than this limit or when the value is smaller.We used a SIM-BAD query with a 20 mas parallax cut and replaced the SIMBAD parallax by the more accurate Gaia EDR3 value when available.We find 16 objects with parallaxes within 3-sigma from our 100 mas parallax limit.This number does not include the 15 brown dwarfs already identified at a 1-sigma level.However, in general we expect the true distance of an object to be larger than the inversion of the measured parallax so we expect to lose more objects than we gain due to errors at this border zone.Indeed, considering the bayesian distances computed in the GCNS, the number of objects with parallaxes > 100 mas is 312 while the number of objects with median distances < 10 pc is 310.
The description of the catalogue is reported in Table 1, with the first object of the list, Proxima Centauri, shown as an example.The references in the catalogue are given with the bibcode assigned by the SAO/NASA Astrophysics Data System5 .The full references are given in the Appendix.
The 10 pc sample and Gaia
The ability to fully catalogue and characterise the 10 pc sample renders it a fundamental dataset to test the quality of upcoming Gaia releases.Some example quality checks for Gaia using this dataset are as follows: (i) Catalogue completeness: To check the completeness of the overall stellar sample and white dwarf population, one can extrapolate from the local stellar density as was done by GSS21.(ii) Exoplanet detections: While the bulk of the expected large catalogue of exoplanets detected from Gaia astrometry will only appear in the fourth data release, the first sample of exoplanet detections might already be announced in the third data release.It will likely include a subset of those planetary companions within 10 pc with detectable astrometric signatures (see Sect. 3.4).(iii) Magnitude limits: The 10 pc sample has stars that are too bright for Gaia and brown dwarfs that are too faint.It provides an empirical estimate of the magnitude limits.(iv) Binarity detection: there are at least 94 multiple systems in our 10 pc sample.They cover a wide parameter space in mass ratios, magnitude differences, angular separations, inclinations, and orientations.Binary systems for which we do not find solutions in the Gaia pipeline should be understood.
By comparing the Gaia EDR3 to our 10 pc sample, we found that there are eight nearby stars too bright to be observed by Gaia and 54 brown dwarfs that are probably too faint.Of the 402 remaining objects, 90 do not have a full astrometric solution in Gaia EDR3; they are all in close binary systems.Yet 14 of them had a full astrometric solution in Gaia DR2.With twelve more months of observations, the residuals went up and the Gaia EDR3 solution did not meet the restrictive quality cuts (astrometric_sigma5d_max < 1.2 mas or visibility_periods_used ≥ 9; Lindegren et al. 2021).This should no longer be a problem in the next data release with the improved astrometric solution, taking the orbital motion into account.
Astrometry
We replaced the SIMBAD output astrometry by that of Gaia EDR3 when available (312 stars and brown dwarfs), except for three cases in binary systems from Benedict et al. (2016) (GJ 831 A, GJ 791.2 A, and CD-68 47 A), who accounted for orbital motion and determined their astrometry with a higher precision.Whereas the Hipparcos determinations were more precise compared to Gaia DR2 values for some bright stars, this was no longer the case compared to Gaia EDR3 values.Estimated from M G = 25 mag b , lower limit 48 Notes. (a) G magnitude estimated from the absolute magnitude versus spectral type calibration computed as part of the GCNS. (b) G magnitude of late T and Y brown dwarfs too faint for Gaia computed assuming an arbitrarily absolute G magnitude set to 25 mag.
Photometry
In Table A.1, we also provide the Gaia photometry (G, G BP , and G RP ) for 345 objects.The photometry of all of them is from Gaia EDR3, except for three brown dwarfs (2MASS J17502484-0016151 A, 2MASS J08354256-0819237, and Luhman 16 B) that are in Gaia DR2, but not in Gaia EDR3.In unresolved systems, we often tabulate just the primary (or system) magnitudes.We included an estimate of the G magnitude (G_ESTIMATE) for the other 87 objects using different procedures, as indicated by the G_CODE value in the catalogue and summarised by Table 2.The addition of the G_ESTIMATE column provides a quick way to identify objects that should be detectable by Gaia, but they should not be used for scientific purposes.
Spectral type and object category
We reviewed the output spectral types provided by default from the SIMBAD query.We did not calculate an average spectral type from all determinations, but took the most recent reliable spectral type based on spectra.In Table A.1 we indicate the method used for the spectral type determination, from photometry or spectroscopy, in the optical or near-infrared.Only 40 objects, mainly in close binary systems, have no spectral type.We classified all the objects of the 10 pc sample in five categories (OBJ_CAT6 in Table A.1): stars (K and earlier spectral types), low-mass stars (M and early-L types), white dwarfs, brown dwarfs (including the M9-type object BD+16 2708 Bb from its dynamical mass determined by Dupuy & Liu 2017), and exoplanets.For components in close binaries with no information on the spectral type, we assigned them to the low-mass stars category by default.However, this classification should be taken with caution since we know that some of them can actually be brown dwarfs, such as L 768-119 B, GJ 867 D, and Wolf 227 B, which have mass estimates that may place them in the substellar range (Nidever et al. 2002;Davison et al. 2014;Winters et al. 2018).The probable brown-dwarf nature of the three star candidates is indicated in the COMMENT field of Table A.1.
Line-of-sight velocities
The SIMBAD query provides line-of-sight velocities for 287 objects.Among them, 129 come from Gaia DR2 or its catalogue of radial velocity standard stars (Soubiran et al. 2018), and 48 precise measurements come from Lafarga et al. (2020).Radial velocities of multiple systems may be inaccurate, but we tried to use the same source for the two components of a binary system Notes. (a) In the column Type, O, B, A... Y stand for stellar and substellar spectral types, D for white dwarfs, and N/A for objects without a spectral type.The Sun (G2 V star) and its eight planets are not included. (b) The name of the triple, quadruple, and quintuple systems are given in Table 5.
when available, or we only listed a single measurement when this was not the case.
Astrophysical content and statistical exploration
In this section we describe the content of the 10 pc sample in terms of astrophysical objects, as illustrated by Fig. 1.It shows the G-band absolute magnitude as a function of the G − J colour for all objects with Gaia G and 2MASS (Two Micron All Sky Survey; Skrutskie et al. 2006) J magnitudes and spectral type determination.As a comparison, the GCNS objects with G and J magnitudes are also shown.In Table 3 we summarise the spectral and multiplicity distribution of our sample.
Multiple systems
Multiple systems are reported in large databases such as the Catalog of Components of Double and Multiple Stars (Dommanget & Nys 2002) or the Washington Double Star catalog 7 (Mason et al. 2001).Multiplicity is also indicated in the SIM-BAD output (OBJ_TYPE=**).For all multiple system candidates we confirmed that the hypothesis of being part of that system was consistent with the most recent parallax determinations.We discarded five companion candidates: BD+42 2320 with β CVn, BD+02 521 with κ 01 Cet, and 2MASS J12141817+0037297 with GJ 1154, based on their Gaia DR2 parallax, while the companions of HD 50281 AB and BD+43 2796 were identified in the Gaia EDR3 with low parallaxes.In addition, three spectroscopic binary candidates (BD+19 5116 A, BD+19 5116 B, and G 13-22) that are known to be active stars were discarded.Details on these discarded components are given in the COMMENT column of As already stated in Section 2.2, the future Gaia data releases will provide solutions for a large number and type of binary (astrometric, spectroscopic, and eclipsing) with periods from 0.2 d to more than 5 yr, in amounts of hundreds of thousands in the third Gaia data release (Gaia DR3) and millions in the fourth Gaia data release (Gaia DR4), as predicted by the Gaia Universe Model Snapshot (Robin et al. 2012).Within the 10 pc sphere, one can expect very good forthcoming astrometric solutions, including orbital parameters.Such new astrometry would complete the characterisation of the systems, even the closest ones; the expected limit is 0.12 arcsec, but Gaia astrometry will provide information on binarity even for objects it cannot resolve.
White dwarfs
Twenty objects are white dwarfs, six of which are part of multiple systems.Their spectral type distribution is nine DA, five DQ, four DZ, and two DC.They all have a precise parallax from Gaia EDR3 except for Procyon B, most likely due to the short current separation and brightness difference of about 10 mag with respect to Procyon A. With a more eccentric orbit than Procyon B and a similar brightness ratio with the primary, Sirius B offered, however, a more favourable situation to be detected by Gaia.
Our 10 pc sample may be supplemented with new faint, dark, white dwarfs in the future, in particular in unresolved multiple systems.For example, we found two candidates in our list.G 203-47 is a spectroscopic binary Reid & Gizis (1997) with one possible white dwarf component: Delfosse et al. (1999) argued that the companion's mass is too large (M > 0.5M ) to be something other than a degenerate star.Likewise, CD-32 5613 was quoted as an unresolved double white dwarf by Toonen et al. (2017).
Brown dwarfs
According to Smart et al. (2017), Gaia can detect L5 dwarfs to 29 pc, T0 dwarfs to 14 pc, T6 dwarfs to 10 pc, and T9 dwarfs to 2 pc, assuming a magnitude limit G of 20.7 mag.These predictions are consistent with the 10 pc sample: The latest object with a Gaia parallax determination is just T6.There are, however, a few examples for which Gaia does not determine an astrometric solution or even a detection.This is the case for the nearest pair of brown dwarfs, Luhman 16 AB (L7.5+T0.5;Luhman 2013).Whereas both components are in Gaia DR2, Gaia EDR3 tabulates only the A component and neither of the two releases provides a solution with a parallax.The other cases where Gaia failed to acquire astrometry are listed in Table 4. Except for three objects close to the faint limit of Gaia, all are in multiple systems for which the current Gaia astrometric solution is applying a single star solution.For nearby objects in multiple systems, the orbital motion induces large residuals in the single star solution that the pipeline marks as errors and a full solution is not provided.Future Gaia data release will employ multiple star solutions so we expect full astrometric solutions for these brown dwarfs.
We completed our compilation of brown dwarfs with those presented by Kirkpatrick et al. (2019), Best et al. (2020), and, especially, Kirkpatrick et al. (2021), from where we added 38 ultra-cool objects with new or more precise parallaxes.We expect the 10 pc census to be further supplemented with cool Tand Y-type dwarfs in the near future.Of the 19 candidates with NB_OBJ ≥ 1001 (Sect.2.1), 15 are T and Y dwarfs, while Kirk-
Exoplanets
The existing exoplanet catalogues, such as the Extrasolar Planets Encyclopaedia 8 , the NASA Exoplanet Archive 9 , the Exoplanet Orbit Database 10 , or the Open Exoplanet Catalogue 11 , are not fully consistent.Discrepancies are partly due to different selection criteria, notations, and diligence in updating their data bases, and they are also due to the heterogeneity of information provided in discovery papers that different catalogues capture in different ways.As a consequence, it is almost impossible to achieve full homogeneity, and any direct comparison between catalogues can be difficult depending on the specific application.This is the case even for the sample of exoplanetary systems nearest to the Sun.
We cross-matched the Extrasolar Planets Encyclopaedia and the NASA Exoplanet Archive in order to select the most reliable set of stars with exoplanets within 10 pc, and we added those that we considered to be confirmed to our catalogue.The most recent discovery added to our catalogue is the transiting rocky planet GJ 486 b (Trifonov et al. 2021).The astrometry given in Table A.1 is the one of the host star, and the discovery reference is given in the SYSTEM_BIBCODE field.
Non-listed candidate, unconfirmed, or controversial exoplanets are enumerated, though, in the corresponding COMMENT field of the host star.For them, we employ the term 'candidate' when the publication reported the companion with that terminology or when the statistical evidence was not strong for the presence of the signal.A notable example is the second long-period planet candidate orbiting Proxima Centauri c (Damasso et al. 2020).We use the term 'unconfirmed' or 'controversial' when the radial-velocity or imaging signal has not been seen by different groups analysing the same datasets, when different groups use different datasets and do not find the same signals, or when the radial-velocity signal can be explained in terms of stellar activity variations.This nomenclature is also used to point at discovery announcements in papers that have only appeared on the 8 http://exoplanet.eu/ 9https://exoplanetarchive.ipac.caltech.edu/ 10http://exoplanets.org/ 11http://openexoplanetcatalogue.com/arXiv open-access repository of electronic preprints, but have not been accepted for publication after a reasonable amount of time.Noteworthy examples include radial-velocity signals of an unclear nature and period in the time series of the nearby K dwarf HD 219134 (Motalebi et al. 2015;Vogt et al. 2015;Gillon et al. 2017), the putative directly imaged planet around Fomalhaut (e.g.Kalas et al. 2008;Janson et al. 2020;Pearce et al. 2021, and references therein), and a number of terrestrial-mass companions tentatively detected inside and outside the temperate zones of nearby M dwarfs, such as GJ 581 (which harbours the most highly debated habitable-zone system, see, Trifonov et al. 2018, and references therein), τ Ceti (Tuomi et al. 2013b;Feng et al. 2017a), GJ 667 C (Delfosse et al. 2013;Anglada-Escudé et al. 2013;Feroz & Hobson 2014), and HD 40307 (Mayor et al. 2009b;Tuomi et al. 2013a;Díaz et al. 2016).
After applying the filters above, we came up with a total of 77 known and confirmed planets within 10 pc.In the case of a circular orbit, their true (for the few that are seen in transit) or minimum (for the radial-velocity-detected companions) astrometric signal in arcsec is α = (M p /M ) × (a p /d), with M p and M in the same units, d in pc, and a p in au.The astrometric signature of the vast majority of short-period (P < 100 d) super-Earths and sub-Neptunes within 10 pc are not expected to be detected by Gaia, as their amplitude will usually fall well below the endof-nominal-mission systematic noise floor for the along-scan astrometric measurements in the bright star regime (∼ 50 µas for a single CCD crossing, see e.g.Lindegren et al. 2021).Indeed, the much expected catalogue of tens of thousands of exoplanets will be mostly populated by gas giants in the 1-4 au separation regime (e.g.Sozzetti & de Bruijne 2018, and references therein).However, there are over a dozen exoplanet candidates for which we expect to see the astrometric signature.Some of these exoplanets may have already been detected with the threeyear time baseline of Gaia DR3, and most of them should be detected in Gaia DR4.However, their actual detectability in future Gaia data releases relies on the effectiveness of a successful calibration of the astrometric data in the very bright star regime (G 9 mag).We list below the radial-velocity exoplanet candidates that should be detected by Gaia: -GJ 15 Ac is expected to induce α > 570 µas, but it has a period in the neighbourhood of 20 yr (Pinamonti et al. 2018).Even a 10-yr Gaia mission will not see more than half of the orbit.The planet's motion should be detected as a curvature effect in the stellar proper motion and described in terms of an acceleration solution.
-Eri b, with an orbital period of ∼ 7.4 yr (Hatzes et al. 2000;Mawet et al. 2019) and an expected α ∼ 1000 µas, should be easily detectable.The host star is, however, very bright (G = 3.46 mag) and the major source of uncertainty is the effective calibration of the astrometric time-series.-Ind A b has a semi-major axis of ∼ 10 au, but the minimum mass of a massive super-Jupiter (Feng et al. 2019).It should be detected as an acceleration solution; however, simlar to Eri, the host star is very bright (G = 4.32 mag).-GJ 649 b, with a p = 1.13 au (Johnson et al. 2010), should induce α > 65 µas.It might be detectable if its true inclination is small.-GJ 3512 b has a p = 0.33 au (Morales et al. 2019), but it orbits a mid-M dwarf; therefore with α > 130 µas, it should be detectable.
-GJ 849 c has a long period P ∼ 15-20 yr (Feng et al. 2015), so it might be detectable as an acceleration solution on top of the signal induced by GJ 849 b. -GJ 433 c with α > 100 µas is, in principle, detectable; however, its period is > 10 yr (Feng et al. 2020a), so it might be described in terms of an acceleration solution.
It is in principle detectable, but the K-dwarf host star is very bright (G = 5.23 mag), so the same calibration issues as in the case of Eri and Ind A will need to be successfully addressed.
-GJ 876 b, with α ∼ 250-350 µas (depending on the actual inclination angle, see Correia et al. 2010), was detected by the Hubble Space Telescope (Benedict et al. 2002) and it is expected to be clearly identified by Gaia.-GJ 876 c has an expected α ∼ 70 µas, but with a period of only 30 d (Marcy et al. 2001) it will likely be very difficult for Gaia due to the possible degeneracy with periodic aliases of the scanning law.-GJ 832 b as a p ∼ 3.5-4.0au (Bailey et al. 2009), and with α > 1000 µas should be clearly detectable by Gaia, either as acceleration or a full orbital solution.
-GJ 9066 c has a p = 0.87 au (Feng et al. 2020b), and with α > 200 µas it is expected to be detectable by Gaia.-The candidate Proxima Cen c with a p = 1.5 au is expected to induce α > 170 µas (Damasso et al. 2020).A confirmation of its existence by Gaia should be possible.
Statistics
In terms of statistical studies, the 10 pc sphere is two-fold.Only in this nearby volume one can expect to detect and characterise all objects, but it also probes a small volume and, thus, offers small statistics.As a result, the 10 pc sphere is complementary to statistically more significant samples with larger volumes, but that suffer from incompleteness.Keeping that in mind, below, we provide a few numbers on the multiplicity rate, spectral type distribution, and luminosity class, which give an overall picture of the immediate vicinity to our Sun.There is no giant star within 10 pc and only four evolved stars, which are all sub-giants.These are β Hyi, µ Her Aa, δ Pav, and δ Eri.There are only about five pre-main-sequence stars within 10 pc: the triple system AT Mic A, AT Mic B, AU Mic being a bona-fide member, and YZ CMi being a candidate member of the ∼ 24 Myr β Pictoris association (Zuckerman et al. 2001;Alonso-Floriano et al. 2015;Mamajek & Bell 2014), and AP Col, which may belong to the ∼ 50 Myr Argus / IC 2391 association (Riedel et al. 2011, but see Bell et al. 2015 about the existence of the Argus association).
Almost half of the stars and brown dwarfs are in multiple systems.As summarised in the bottom part of Table 3, our 10 pc sample contains 246 single, 69 double, 19 triple, three quadruple, and two quintuple systems (NB_SYS in Table A.1).Following the definitions of Reid & Gizis (1997), for example, and adding the Sun as a single star, these numbers translate into a multiplicity frequency (which quantifies the number of multiple systems within the sample) and a companion frequency (which quantifies the total number of companions) of 27.4±2.3%and 36.5±3.2%,respectively.In Table 5, we give the names of the triple, quadruple, and quintuple systems for convenience.
The spectral type distribution is shown in Figure 2. We found 249 M stars among the 423 objects with a measured spectral type, which translates into a ratio of 58.9 ± 5.8%.This relatively small value is in contrast with other higher previous determinations of the order of 70% (e.g. Henry et al. 2006;Bochanski et al. 2010).This small value probably comes from a more complete sample of brown dwarfs compared to older studies.In the substellar regime, L-type objects amount to only half the number of T-type ones.
There are 41 objects without a spectral type measurement, all being secondary components of close binaries.They could slightly bias these proportions, so we used published individual masses, either computed from orbit fitting or estimated from adaptive optics contrast measurements, to estimate their spectral types.We found 36 possible M stars, four possible L dwarfs, and one possible white dwarf (all of them marked in the column COMMENT in Table A.1).The proportion of M stars now becomes 61.3 ± 5.9%, which is not significantly different from the ratio derived from measured spectral types only.
More than half of the M dwarfs (57.0 ± 7.3%) have spectral types M3.0 V to M5.0 V.This proportion remains stable when including the estimated spectral types of the unresolved secondary components (57.4 ± 6.9%).Translating these numbers into an observed mass function requires some care, but it seems to indicate that the number of stars increases up to about 0.3 M (∼ M4.0 V; Cifuentes et al. 2020) and decreases for later M spectral subtypes.This maximum of the mass function, similar to other slope changes observed in very young open clusters (e.g.Peña Ramírez et al. 2012), corresponds to the fully-convective transition in the main sequence.
Science cases and the next upgrades
Apart from multiplicity studies, mass function analyses, and long-term exoplanet surveys, there is a number of science topics that can be covered with the 10 pc catalogue.One of them is kinematics and membership in thin and thick disc populations and stellar moving groups and associations.There are on-going efforts to relate precise Galactocentric space velocities to youth features for a sample of over 2000 nearby M dwarfs (M.Cortés-Contreras, priv. comm.) and to measure, for the first time, radial velocities of a number of late-type ultracool dwarfs (W.Cooper, priv. comm.).These works will be presented in forthcoming publications and will complement future releases of the 10 pc catalogue.
Further improvements of the 10 pc catalogue include adding the new Gaia DR3 astro-photometric data (expected in 2022), updating spectral types for poorly investigated companions and radial velocities of the faintest brown dwarfs, and adding more parameters useful for a variety of topics, such as atmospheric astrophysical parameters (T eff , log g, [Fe/H]), chromospheric (equivalent widths of Hα and Ca ii) and coronal (X-rays) activity indicators, and rotational velocity.Some novel parameters, for instance the exozodi level, will also be useful for future space missions such as the Large Interferometer for exoplanets (LIFE collaboration et al. 2021).
Obsolescence
This catalogue will inevitably need to be updated when Gaia and other surveys issue their next data releases.Apart from extremely cool objects similar to WISEA J085510.74-071442.5, new objects probably hide in the Milky Way plane (see the discoveries by Beamín et al. 2013;Scholz 2014;Scholz & Bell 2018;Faherty et al. 2018).Such new objects, in spite of their expected large proper motions, will likely be detected by state-of-the-art photometric surveys from the ground such as the Panoramic Survey Telescope and Rapid Response System (Kaiser et al. 2002), J-PLUS/J-PAS (Solano et al. 2019), and, specially, the Legacy Survey of Space and Time (LSST Science Collaboration et al. 2009), as well as from the space.In particular, the ESA medium-class Euclid space mission will cover more than 35 % of the celestial sphere in the red optical and near-infrared Y, J, and H bands with an unprecedented depth and spatial resolution (Laureijs et al. 2011).The Euclid Legacy Science on Ultracool Dwarfs will be particularly sensitive to low Galactic-latitude, high proper-motion, very red late-type dwarfs that have not been identified yet (Martín et al. 2020).The NASA SPHEREx space mission, with its all-sky, low-spectralresolution capabilities in the 0.75-5.0µm range (Crill et al. 2020), will also help to discover new ultracool objects.
In addition to the yield from these photometric surveys, we expect that most of the new additions to the 10 pc sample will be very close companions to our targets.First, current and future spectroscopic surveys and adaptive optics observations will probably resolve some of the single stars into multiple components (e.g.Baroch et al. 2018;Fouqué et al. 2018;Winters et al. 2019a).Second, the component of the 10 pc catalogue that will see the largest increase in number corresponds to new exoplanets that will be discovered or confirmed in the coming years as most stars are orbited by at least one exoplanet.For instance, Dressing & Charbonneau (2015) predicted 2.5 ± 0.2 small and close-by planets per M star, so we could expect more than 600 new exoplanets to be discovered, outnumbering the number of stellar and sub-stellar objects within 10 pc.Such an optimistic estimation is in line with the recent discovery of small planets around the closest stars, such as Proxima Centauri (two planets: Anglada-Escudé et al. 2016;Damasso et al. 2020;Kervella et al. 2020), Barnard's star (one planet: Ribas et al. 2018), or Lalande 21185 (one planet: Díaz et al. 2019;Stock et al. 2020) 12 .However, even if these new planets are predicted from Kepler's results, we will probably not detect more than a fraction of them in the coming years for several reasons: (i) Planets with periods close to that of stellar rotation will mostly go undetected; (ii) stellar activity will prevent others from being detected; and (iii) close in planets on highly inclined orbits with small sin i values imply small radial-velocity semi-amplitudes.
While the closest planets have, in general, been discovered with precision radial-velocity spectrographs working in the red optical and/or near-infrared (especially designed for M-dwarf surveys; e.g.CARMENES, ESPRESSO, GIANO-B+HARPS-N, HPF, IRD, MAROON-X, SPIRou, and, in the future, NIRPS and CRIRES+)13 , the NASA TESS space mission (Ricker et al. 2015) is also discovering small transiting planets at less than 10 pc, such as GJ 357 b and GJ 486 b (Luque et al. 2019;Trifonov et al. 2021; see also GJ 436 b, a Neptune-sized planet at 9.8 pc discovered by Butler et al. 2004 andGillon et al. 2007).A few more transiting planets in the immediate vicinity might also be detected in the near future with SPECULOOS from the ground (Sebastian et al. 2021) and PLATO from space (Rauer et al. 2014).Finally, global astrometry with Gaia, particularly in the case of a fully extended 10 year mission, might unveil the presence of ∼ 10 − 20 new cold giant planets up to Jupiter-like orbital separations (Sozzetti & de Bruijne 2018, see Sect. 3.4).
Didactics
The 10 pc sample has tremendous outreach potential.The objects are our nearest neighbours, they cover a large range of stellar and brown dwarfs parameter space, and many of them have a significant historical story that can be shared.If we just consider the first few objects, α Cen A is almost a solar twin, α Cen C (Proxima) harbours the nearest terrestrial habitable-zone planet and has another candidate planet, Barnard's star is an old thick-disc dwarf with the largest proper motion on the sky, Luhman 16 AB is a brown dwarf binary, and WISEA J085510.74-071442.5 is the coolest brown dwarf known to date.Among the first ten objects, only one system, α Cen, is visible to the naked eye; the brightest star, Sirius, is 12th in distance; and, the object with a first measured parallax, 61 Cyg, is 28th in distance.
To aid in this outreach, we produced some divulgative material that we release with this contribution.The three-dimensional nature of the dataset makes creating maps more of a challenge than for more traditional terrestrial cartography.We generated maps in several different formats from the data, including a rotating animation of all the objects in the catalogue, a 3D fly-through JavaScript web application, a top down poster (see Fig. B.1), and two 5 pc and 10 pc maps with 'star columns' showing distance above and below the galactic plane.All the resources are available online14 .
Conclusions
We provide a catalogue of all objects closer than 10 pc from the Sun.It contains 540 objects divided between 375 stars, including 20 confirmed white dwarfs and one candidate white dwarf, 85 confirmed and three candidate brown dwarfs, and 77 confirmed exoplanets in 339 systems made up of 69 binaries, 19 triplets, three quadruplets, and two quintuplets.
During the catalogue compilation, we extensively checked all individual entries from what is available in the published literature.In particular, it contains the most recent astrometry from the last Gaia data release when available.
The catalogue will be used to assess the quality of the forthcoming Gaia releases to place limits on the frequency of planets and other components within multiple systems, as well as providing targets for focused planetary searches.The 10 pc sample is incredibly varied: Our first ten neighbouring systems include two confirmed and two candidate planets, a thick-disc object, a white dwarf, and four brown dwarfs.We recognise the didactic value of this sample and have provided various materials for that exploitation.
The latest addition to the 10 pc sample is the planet GJ 486 b (Trifonov et al. 2021), but the last free floating objects have been discovered using the WISE (Wide-field Infrared Survey Explorer; Wright et al. 2010) survey.The coolest and lowestmass object WISEA J085510.74-071442.5, a >Y4-type ultracool dwarf, was discovered by Luhman (2014) as the result of significant data-mining, and we concur with the result of Kirkpatrick et al. (2021) that the 10 pc volume is probably still not complete for objects later than spectral type Y2.The distribution of these lowest mass objects will indicate the minimum mass cutoff for stellar formation; therefore, finding all objects in this local volume will provide an important constraint for formation mechanisms.In addition, as the latest addition attests, the discovery of planets and other components within known systems is on the increase as our detection ability improves.Hence, while we expect the number of very low mass objects, planets, and low mass components with 10 pc within systems to increase, we do not expect to add any more higher mass, isolated, earlier type objects to the 10 pc census.
Fig. 1 .
Fig. 1.Colour-absolute magnitude diagram of the 10 pc sample, overimposed on the GCNS (grey dots).The colour bar indicates the spectral type.White dwarfs are in dark blue.
Fig. 2 .
Fig. 2. Spectral type distribution of the 10 pc sample.D are white dwarfs.The different symbols indicate single stars, primaries, and companions.
Table 1 .
Example of content of the 10 pc catalogue (the first object, Proxima Centauri).
Table 2 .
G_CODE values for retrieved or estimated G magnitudes.
Table 3 .
Summary of the 10 pc sample a .
Table 4 .
Brown dwarfs expected to have a full astrometric solution in future Gaia data releases.
Table 5 .
Names of triple and higher order systems. | 9,731.4 | 2021-04-30T00:00:00.000 | [
"Physics"
] |
Growth dynamics of noise sustained structures in nonlinear optical resonators
The existence of macroscopic noise sustained structures in nonlinear optics is theoretically predicted and numerically observed in the regime of convective instability The advection like term necessary to turn the instability to convective for the parameter region where advection overwhelms the growth can stem from pump beam tilting or birefringence induced walk o The growth dynamics of both noise sustained and deterministic patterns is exempli ed by means of movies This allows to observe the process of formation of these structures and to con rm the analytical predictions The ampli cation of quantum noise by several orders of magnitude is predicted The qualitative anal ysis of the near and far eld is given It su ces to distinguish noise sustained from deterministic structures quantitative informations can be obtained in terms of the statistical properties of the spectra c Optical Society of America OCIS codes Nonlinear optics transverse e ects in Fluc tuations relaxations and noise References and links L A Lugiato S Barnett A Gatti I Marzoli G L Oppo H Wiedemann Quantum aspects of nonlinear optical patterns Coherence and Quantum Optics VII Plenum Press A Gatti H Wiedemann L A Lugiato I Marzoli G L Oppo S M Barnett A Langevin approach to quantum uctuations and optical patterns formation Phys Rev A A Gatti L A Lugiato G L Oppo R Martin P Di Trapani A Berzanskis From quantum to classical images Opt Express I Marzoli A Gatti L A Lugiato Spatial quantum signatures in parametric downconversion Phys Rev Lett M Hoyuelos P Colet M San Miguel Fluctuations and correlations in polarization patterns of a Kerr medium submitted R J Deissler Noise sustained structure intermittency and the Ginzburg Landau equation J Stat Phys M Santagiustina P Colet M San Miguel D Walgraef Convective noise sustained structures in nonlinear optics Phys Rev Lett L A Lugiato R Lefever Spatial dissipative structures in passive optical systems Phys Rev A W J Firth A G Scroggie G S McDonald L A Lugiato Hexagonal patterns in optical bista bility Phys Rev A R L A Lugiato F Castelli Quantum noise reduction in a spatial dissipative structure Phys Rev Lett K Staliunas Optical vortices during three wave nonlinear coupling Opt Commun G L Oppo M Brambilla L A Lugiato Formation and evolution of roll patterns in optical parametric oscillators Phys Rev A L A Wu H J Kimble J Hall H Wu Generation of squeezed states by parametric down conversion Phys Rev Lett M Santagiustina P Colet M San Miguel D Walgraef Two dimensional noise sustained struc tures in optical parametric oscillators submitted M Haelterman G Vitrant Drift instability and spatiotemporal dissipative structures in a nonlinear Fabry Perot resonator under oblique incidence J Opt Soc Am B G Grynberg Drift instability and light induced spin waves in an alkali vapor with feedback mirror Opt Commun A Petrossian L Dambly G Grynberg Drift insta bility for a laser beam transmitted through a rubidium cell with feedback mirror Europhys Lett N Bloembergen Nonlinear Optics Benjamin Inc Publ Y R Shen The principles of nonlinear optics Wiley P D Drummond K J Mc Neil D F Walls Non equilibrium transitions in sub second harmonic generation semiclassical theory Opt Acta
Introduction
Optical patterns o er the very attractive possibility of studying the interface between classical and quantum patterns.Macroscopic and spatially structured manifestations of quantum correlations are foreseen to occur in these patterns 1].Such correlations are expected since, at a microscopic level, the physical mechanism behind the pattern formation process is often the simultaneous emission of twin photons, four wave mixing processes or other processes involving highly correlated photons.Correlations are easily observed in the far eld and should encode speci c features of quantum statistics.In the search for the manifestations of quantum noise in optical patterns it is natural to look for situations in which noise is enhanced or ampli ed.Critical uctuations close to an instability point are one of these situations recently considered 2, 3,4].The noisy precursor, observed just below threshold anticipates the pattern to be formed beyond the instability.This precursor occurs because uctuations with the wavenumberto be selected above threshold become weakly damped as the threshold is approached.A more dramatic manifestation of noise occurs above a convective instability threshold 5].Here uctuations are ampli ed (instead of being weakly damped) while being advected away from the system.This gives rise to macroscopic structures that are continuously regenerated by noise and hence the name of noise sustained patterns.This phenomenon acts as a microscope, with ampli cation factors of several orders of magnitude, to observe noise and its spatially dependent manifestations 6].In this paper we give t wo examples of noise sustained optical patterns.Our emphasis here is in showing how these patterns grow dynamically from noise, invading part of the system and being there maintained by noise.
The two examples to be considered are paradigmatic in the eld of optical pattern formation and quantum noise properties.The rst is a cavity lled by a K e r r t ype nonlinear medium and pumped by an external laser beam 6].This system was a prototype model for pattern formation in optics 7,8] and it has also been where the question of quantum uctuations in patterns was rst addressed 9].The second example is an optical parametric oscillator (OPO), also a paradigm for studies of pattern formation 10] and generation of squeezed and nonclassical light 11].A necessary condition for the existence of a convective instability is the presence of an advection-like term in the governing equations this term can have di erent origins.In our rst example this originates in any p u m p misalignement we will study this example in a simple transverse one-dimensional geometry to clarify the main concepts.For the OPO we consider a type-I phase matching in a uniaxial crystal.Here, the advection term originates in the walk-o between the ordinary and extraordinary rays, due to birefringence.
Thus, the outline of the paper is as follows.In Section 2 we brie y recall the de nition of the convectively unstable regime and the linear stability analysis which allows to determine the di erent regimes.In Section 3, we describe the convective instabilities and noise sustained structures in Kerr nonlinear resonators with one transverse dimension.We show i n two m o vies the growth dynamics of the pattern in the convectively and absolutely unstable regimes.The role of the noise in sustaining the structure in the convective regime is clearly exempli ed.The distinction between these two regimes manifests qualitatively in the time evolution of the far eld of the pattern.Section 4 is devoted to noise-sustained structures in OPO with two transverse dimensions.We s h o w the diagram in parameter space where the zero solution becomes unstable either convectively or absolutely.T w o m o vies, displaying the growth dynamics of the noise-sustained and the deterministic patterns, are also presented, for the near-and the far-eld.Finally we compare a snapshot of the pattern formed in each of the two cases with the noisy precursor below threshold.Conclusions are presented in section V.
De nitions and linear stability analysis
We start by brie y recalling the notion of convective and absolute instabilities readers may refer to 5,6,12] for more details.The steady-state of a generic system is de ned to be absolutely stable (unstable) when a perturbation decays (grows) with time.However, a third possibility is that the perturbation grows (i.e. is unstable) but at the same time is advected so quickly that, at a xed position, it actually decays.In this case the state is called convectively unstable.Note that the de nition is unambiguous only if a xed frame of reference is de ned in the cases we consider here the xed frame corresponds to the pump beam.The usual linear stability analysis of the steady-state can be re-formulated in order to take i n to account the above distinction of convectively and absolutely unstable regimes.In general, the calculation of the pump amplitude thresholds of the instability for the systems we are considering entails the evaluation of the linearized asymptotic behaviour of a generic perturbation of the steady-state 5,12].The convective threshold turns out to correspond to the instability threshold which c a n be calculated as if the advection was not present, i.e. < (q)] > 0 where (q) is the linearized eigenvalue of largest real part and q the wave-vector of the perturbation.This means that at threshold all unstable modes are convectively unstable.As regards the threshold of the absolutely unstable regime, its determination reduces to the calculation of the pump amplitude which satis es the following conditions: where the complex vector ks de ned by r k ( k = ks ) = 0 (2) is a saddle point f o r < ( ks )] in the complex vector space.
A detailed mathematical explication of the procedure to get the above conditions can be found in 5,6,12].Here, for the sake of simplicity w e just mention that wave-vectors q are extended to the complex space k in order to evaluate the integral, in the wave-vector space, which determines the asymptotic linearized evolution of a perturbation.
Kerr resonators
For a nonlinear resonator containing a Kerr medium 7] an advection-like term can stem from the input pump beam tilting 13].A one-dimensional (1D), transversal model is used in order to simplify the analysis and to clarify the main concepts.The 1D assumption can be also justi ed from an experimental viewpoint 1 4 ] .
The equation governing the electric eld A(x t) is 6,7,13]: @ t A ; 2 0 @ x A = i@ 2 x A ; 1 + i ( ; j Aj 2 )]A + E 0 + p (x t) where: 0 represents the tilt angle, the sign of the nonlinearity, t h e c a vity detuning and E 0 the pump.Di raction is represented by the rst term on the r.h.s. and mirror losses by the rst in the squared brackets (exact de nitions can be found in 6]).We h a ve introduce a complex additive noise (x t), Gaussian, with zero-mean and correlation h (x t) (x 0 t 0 )i = 2 (x ;x 0 ) (t ;t 0 ), which is a standard semiclassical model of noise.
For the linearized version of the Langevin equations of the optical parametric oscillator a similar term describes quantum noise in the Wigner representation, as considered in 2].In our case it can also account for thermal and input eld uctuations.Through conditions (1,2) applied to the linearized eigenvalue of eq. ( 3) we h a ve estimated the threshold of the absolute instability for a xed set of parameters for the steady-state A 0 , solution of A 0 1 + i ( ; j A 0 j 2 )] = E 0 .F or the same parameters we h a ve i n tegrated the equation for a pump amplitude slightly above this threshold and slightly below, in the regime of convective instability.The time evolution of the near-eld (A) and far-eld (the space Fourier transform of A) of eq. ( 3) con rm that di erent, unstable regimes actually exist.In movie 1 the near eld intensity t i m e e v olution (left side) can be observed, for the pump amplitude above the threshold of absolute instability.The initial condition is the steady-state plus a weak perturbation: noise is not applied ( = 0) because it is not necessary to generate the pattern.After a certain transitory a drifting structure is generated.In spite of the drift, the pattern tends to invade all the at top region where the pump is above the threshold.The evolution of the far-eld (right) shows that after the transitory, w ell de ned harmonics are generated (due to the multiple wave mixing).Their linewidth is scarcely in uenced by t h e presence of noise as demonstrated in 6] in the equivalent time analysis.Movie 2. Near eld (left) and far eld (right) growth dynamics in the convectively unstable regime.
In the convective regime (movie 2) we applied noise ( ' 10 ;5 ) and a pattern forms again.However, note that the structure, even for long times does not invade all the system but rather gathers its saturated value at a random spatial position.This is due to the fact that noise needs to drift for enough time in order to be ampli ed.When the noise source is turned o the pattern (after the delay due to the drifting) eventually disappears.In the far-eld, the noticable broadening of the spectral lines with respect to the previous case con rms the di erent, noisy, nature of the pattern observed.
Parametric oscillators
In the optical parametric oscillator, i.e. when the nonlinear medium inside the resonator has a quadratic response, the advection-like term stems naturally from the birefringence of the nonlinear crystal, which is exploited to phase-match the nonlinear interaction.In fact, in a birefringent medium the ordinary and extraordinary polarizations can be subject to a transversal walk-o 15].In particular, we consider, a degenerate, type I OPO (scattered photons are thus frequency and polarization degenerate).The pump (A 0 (x y t) at frequency 2! 0 ) and the signal (A 1 (x y t) at frequency !0 ) evolution is described by the following set of coupled equations 10,16]: @ t A 0 = 0 ;(1 + i 0 )A 0 + E 0 + ia 0 r 2 A 0 + 2 iK 0 A 2 1 ] + p 0 0 (x y t) @ t A 1 = 1 ;(1 + i 1 )A 1 + 1 @ y A 1 + ia 1 r 2 A 1 + iK 0 A 1 A 0 ] + p 1 1 (x y t) (4) where: 0 1 represent the losses, 0 1 the detunings, a 0 1 the di raction, K 0 the nonlinearity, 1 the walk-o , E 0 the pump (see 10,12] for details).Noise terms 0 1 have t h e same characteristics of the Kerr case and are uncorrelated.
The uniform steady-state, whose stability we are interested in, is: A 0 = E 0 =(1 + i 0 ) A 1 = 0.It turns out that it can become unstable along the signal component o f the eigenvector, A 1 , a n d t h us it is necessary to consider only one linearized equation.
We can calculate the predicted absolute instability thresholds through conditions (1,2) with determined from eqs. (4).In summary, the stability diagram for the OPO is presented in gure 1 as a function of the signal detuning 1 .The linear analysis also reveals that the rst mode to become unstable satis es q x = 0 , i.e. is parallel to the x axis.This stems from the breaking of the rotational system due to the walk-o .The walk-o does not a ect the growth rate but rather the spreading velocity of the perturbation.The rst mode to become absolutely unstable is that which balances the advection with spreading 12].
The growth dynamics for A 1 is shown in movie 3, for the absolutely unstable, and in movie 4 for the convectively unstable regime.The pump A 0 was a supergaussian beam and we show i n t h e m o vie the central region of space where the pump was at.
Movie 3. Near eld (left) and far eld (right) growth dynamics in the absolutely unstable regime.
In the initial stage, noise generates a randomly oriented pattern in both cases later the two e v olutions start to di er.In the former case the stripes generated are parallel to the x-axis, as predicted.The deterministic pattern invades the whole region of pumping, stripes are well de ned and no defects of the horizontal orientation can be observed waiting a long enough time (see also gure 2, top, which corresponds to the snapshot of the nal time of the evolution).In the convective regime (movie 4) the pattern is continuously generated by noise with random orientation at the bottom of the window and ampli ed while drifting.During this process stripes get parallel to the x-axis.Note that the dynamical orientation is less marked than in the previous case and defects can be still seen.The location where the pattern gathers its saturated value is random as in the Kerr case (see gure 2, middle).The average space delay a n d i t s v ariance depends on the noise intensity.The far eld observation su ces to distinguish the two di erent regimes.At the rst stage all modes on a r i n g o f radius q c = p ; 1 =a 1 are excited later, in the absolute regime two narrow spots form, in correspondance with the rst mode that become absolutely unstable (q x = 0), in the convective regime two broadenend arcs of the ring remain visible even for very large times (see gure 2b).
Quantitative results which help to sharply distinguish the two regimes can be obtained by means of a time spectral analysis 12].To summarize we p r e s e n t three situations in gure 2, i.e. from the top to the bottom: absolutely unstable, convectively unstable and absolutely stable (close to threshold).The rst is a deterministic pattern, the second a noise-sustained one and the last is a noisy-precursor we h a ve referred to in the introduction.Signatures of a deterministic pattern are: the high intensity, the pattern orientation (if 2D), orthogonal to the drift direction due to the symmetry breaking, the fact that it invades all the system, narrow spatial dispersion in the far eld.Noise-sustained structures show: high intensity due to large noise magni cation factors, preferential selection of the stripe orientation (in 2D), although defects are clearly observable, only partial and random occupancy of the system, broadened far elds.Noisy precursors below t h e instability threshold are characterized by: low intensities and random orientation (in 2D), because noise is only selectively enhanced by the ltering e ect of the nonlinearity, and very broaden far eld.
Conclusions
We h a ve theoretically predicted the existence of macroscopic, noise-sustained transversal structures in nonlinear optical resonators.Numerical solutions con rm the qualitative and quantitative predictions.Noise-sustained structures can be found in the regime of convective instability which can be induced either by a tilt in the input pump beam or by the walk-o due to birefringence.The growth dynamics of noise-sustained as well as deterministic patterns is presented and helps to distinguish the nature of the structures.This work is supported by QSTRUCT (Project ERB FMRX-CT96-0077).Financial support from DGICYT (Spain) Project PB94-1167 is also acknowledged.
Movie 1 .
Near eld (left) and far eld (right) growth dynamics in the absolutely unstable regime.
Fig. 2 .
Fig. 2. Snapshots of the near(far)-eld at time t = 2000 on the left (right) hand side.Parameteres of the top, middle and bottom images correspond respectively to (*, +, X) of Fig. 1.
Movie 4 .
Near eld (left) and far eld (right) growth dynamics in the convectively unstable regime. | 4,422.6 | 1998-07-20T00:00:00.000 | [
"Physics"
] |
Cavity electro-optics in thin-film lithium niobate for efficient microwave-to-optical transduction
Linking superconducting quantum devices to optical fibers via microwave-optical quantum transducers may enable large scale quantum networks. For this application, transducers based on the Pockels electro-optic (EO) effect are promising for their direct conversion mechanism, high bandwidth, and potential for low-noise operation. However, previously demonstrated EO transducers require large optical pump power to overcome weak EO coupling and reach high efficiency. Here, we create an EO transducer in thin-film lithium niobate, leveraging the low optical loss and strong EO coupling in this platform. We demonstrate a transduction efficiency of up to $2.7\times10^{-5}$, and a pump-power normalized efficiency of $1.9\times10^{-6}/\mathrm{\mu W}$. The transduction efficiency can be improved by further reducing the microwave resonator's piezoelectric coupling to acoustic modes, increasing the optical resonator quality factor to previously demonstrated levels, and changing the electrode geometry for enhanced EO coupling. We expect that with further development, EO transducers in thin-film lithium niobate can achieve near-unity efficiency with low optical pump power.
Introduction
Recent advances in superconducting quantum technology [1] have created interest in connecting these devices and systems into larger networks. Such a network can be built from relatively simple quantum interconnects [2] based on direct transmission of the few-photon microwave signals used in superconducting quantum devices [3]. However, the practical range of this approach is limited by the strong attenuation and thermal noise that microwave fields experience at room temperature. Therefore, quantum interconnects based on optical links have been explored as an alternative, because they provide long attenuation lengths, negligible thermal noise, and high bandwidth [4]. Connecting optical networks with superconducting quantum technologies requires the creation of a quantum transducer capable of converting single photons between microwave and optical frequencies [5,6]. Such a transducer offers a promising route toward both large-scale distributed superconducting quantum networks and the scaling of superconductor quantum processors beyond single cryogenic environments [7]. Furthermore, single-photon microwave-to-optical transduction can be used to create high efficiency modulators [8], detectors for individual microwave photons [9], and multiplexed readout of cryogenic electronics [10].
The desire for lower noise, higher efficiency, and faster repetition rates has motivated research into cavity-based EO transducers [24][25][26][27][28][38][39][40][41], in which microwave fields directly modulate light using an EO nonlinearity of the host material. This approach avoids any intermediate mechanical modes and may allow for lower noise owing to the strong thermal contact and spatial separation of microwave and optical resonators in these devices. Previous EO transducers have used bulk lithium niobate [24][25][26], aluminum nitride [28] and hybrid silicon-organic [41] platforms. These devices have demonstrated bidirectional operation and on-chip efficiency as high as 2% [28], yet efficiencies remain low and would require large (∼1 W) optical pump powers to reach near-unity efficiency. The efficiency of EO transducers can be improved by minimizing the loss rates of the resonators and enhancing the EO interaction strength. Toward this end, here we use a thin-film lithium niobate platform, which combines a large EO coefficient of 32 pm/V, tight confinement of the optical mode to enable a strong EO coupling [42], and the ability to realize low-loss optical resonators with demonstrated quality factors (Q) of 10 7 [43].
Specifically, we describe an EO transducer made from a thin-film lithium niobate photonic molecule [44,45] integrated with a superconducting microwave resonator and demonstrate an onchip transduction efficiency of greater than 10 −6 /µW of optical pump power for continuous-wave signals. The triple-resonance photonic molecule design of our device maximizes transduction efficiency by ensuring that both the pump light and the upconverted optical signal are resonantly enhanced by low-loss optical modes. We also reduce undesired piezoelectric coupling in the microwave resonator by engineering bulk acoustic wave resonances in the device layers. Finally, we discuss the future potential for thin-film lithium niobate cavity electro-optic transducers and show that with straightforward improvements the efficiency can be increased to near unity for ∼100 µW of optical pump power.
Device design and characterization
The operating principle of our transducer is illustrated in Fig. 1(a). Two lithium niobate optical ring resonators are evanescently coupled to create a pair of hybrid photonic-molecule modes, with a strong optical pump signal tuned to the red optical mode at ω − . A superconducting microwave resonator with resonance frequency ω m modulates the optical pump signal, upconverting photons from the microwave resonator to the blue optical mode at ω + .
This transduction process can be effectively described by a beamsplitter interaction Hamiltonian where g 0 is the single-photon EO interaction strength, n − is the number of (pump) photons in the red optical mode, while b and a + are the annihilation operators for the microwave and blue optical modes, respectively. The interaction strength g 0 is determined by the microwave resonator's total capacitance, the overlap between microwave and optical modes, as well as the EO coefficient of the host material. We use a thin-film superconducting LC resonator and an integrated lithium niobate racetrack resonator [42] to optimize g 0 . The on-chip transduction efficiency η for continuous-wave signals depends on both this interaction strength and the loss rates of the modes (see Supplement Section 1), where κ m,ex , κ m and κ +,ex , κ + are the external and total loss rates for the microwave and blue optical modes, respectively, and C = 4g 2 0 n − κ m κ + is the cooperativity. The first term in Equation (2) represents the efficiency of a photon entering and exiting the transducer. To maximize this photon coupling efficiency, the resonators in our device are strongly overcoupled.
A microscope image of our device is shown in Fig. 1(b). Light is coupled from an optical fiber array onto the chip using grating couplers with ≈10 dB insertion loss. The photonic molecule optical modes are created using evanescently coupled racetrack resonators made from 1.2 µm-wide rib waveguides in thin-film lithium niobate atop a 4.7 µm-thick amorphous silicon dioxide layer on a silicon substrate. The optical waveguides are cladded with a 1.5 µm-thick layer of amorphous silicon dioxide. The fabrication process for these optical resonators is described in detail in Ref. [44]. To create the superconducting resonator, a ≈40 nm-thick niobium nitride film is deposited on top of the cladding by DC magnetron sputtering [46] and patterned using photolithography followed by CF 4 reactive ion etching. The detuning between the optical modes can be controlled using a bias capacitor on the dark (i.e. not directly coupled to the bus waveguide) racetrack resonator.
The optical transmission spectrum displayed in Fig. 1c shows a typical pair of photonicmolecule optical modes. The internal loss rate for these optical modes are κ ±,in /2π ≈ 2π × 130 MHz, corresponding to an intrinsic quality factor of 1.4 × 10 6 . As shown in Fig. 1d, we observe a clear anticrossing between bright and dark resonator modes when tuning the bias voltage and observe a minimum optical mode splitting of 3.1 GHz. For far-detuned optical modes, the bright resonator mode has a total loss rate κ bright ≈ 2π × 1.0 GHz, indicating that the optical modes are strongly overcoupled to the bus waveguide.
Lithium niobate has a strong piezoelectric susceptibility, which gives the microwave resonator a loss channel to traveling acoustic modes [47]. To investigate this loss mechanism, we perform a two-dimensional simulation of a cross section of the waveguide and resonator capacitor (see Supplement Section 2). The simulated intrinsic microwave quality factor due to piezoelectric loss displays a strong frequency dependence, as shown in Fig. 2(a). This frequency dependence is caused by low quality-factor bulk acoustic modes -illustrated in Fig. 2(b) -that form in the thin-film layers of our device, which resonantly enhance the coupling between the microwave resonator and acoustic fields. Lower loss can be achieved by designing the microwave resonance frequency to avoid the bulk acoustic resonances. The relative orientation of the capacitor and the lithium niobate crystal axes also strongly affects the microwave loss. Figure 2(c) shows that the intrinsic microwave quality factor is maximized when the electric field produced by the capacitor is oriented close to the Z-axis of the lithium niobate crystal, which is also the condition that maximizes electro-optic response. Using these considerations, we designed a microwave resonator which has a measured intrinsic quality factor higher than 10 3 at a temperature of 1 K, as shown in Fig. 2(d).
Microwave to optical transduction
To measure the transduction efficiency of our device, we locked the frequency of the pump to be near-resonant with the red optical mode (side-of-fringe locking) and sent a resonant microwave signal into the device. The pump and upconverted optical signal were collected and sent to an amplified photodetector, which produced a beat note at the input microwave frequency. We inferred the transduction efficiency from this beat note by calibrating the input optical power, system losses and detector efficiency (see Supplement Section 3). During this transduction efficiency measurement, we swept the bias voltage in a triangle waveform with a period of ≈1 min to vary the splitting between the optical modes. The pump light remained locked to the red optical mode throughout the measurement. Figure 3(a) illustrates the optical modes and signals over the course of the bias voltage sweep. The results of this measurement are depicted in Fig. 3. The two maxima in transduction efficiency near ±10 V (Figs. 3(b) and (c)) correspond to the two cases in which a triple-resonance condition is met and the upconverted light is resonant with the blue optical mode. For large negative bias voltages, the blue mode is far-detuned, and most of the upconverted light is generated in the red optical mode by a double-resonance process involving just the red optical mode and the microwave mode (see Supplement Section 1.3). This process does not depend on the resonance frequency of the blue optical mode, so the transduction efficiency is nearly independent of the bias voltage in this regime. Destructive interference of upconverted light produced in the red and blue optical modes (created by the double-and triple-resonance processes, respectively) causes the transduction efficiency minimum near −20V. For large positive bias voltage, the red optical mode is undercoupled and has a narrow linewidth, so the double-resonance transduction process is weak. The measured data presented in Fig. 3 analytical model (see Supplement Section 1) based on independently measured and estimated device parameters, shown in Fig. 3 The measured transduction efficiency features strong hysteresis when varying the bias voltage, which is caused by hysteresis in the detuning of the optical modes, shown in Fig. 3(d). We observed that this hysteresis could be reduced by lowering the optical pump power and sweeping the voltage bias faster. Based on the slow timescale (seconds for −30 dBm on-chip optical pump power), we attribute the hysteresis to photoconductive and photorefractive effects in lithium niobate [48,49]. These effects are caused by optical excitation of charge carriers in the lithium niobate waveguide, which can migrate to create built-in electric fields that shift the optical resonance frequencies through the EO effect.
We measured the bandwidth of the transducer by varying the frequency of the input microwave drive and measuring the highest transduction efficiency reached during a bias voltage sweep. The inset of Fig. 3(d) shows that our transducer has a 3 dB bandwidth of 13 MHz, slightly larger than the measured 10 MHz linewidth of the microwave resonator. This discrepancy is caused by the nonlinear response of the NbN microwave resonator for high microwave power, which leads to an apparent resonance broadening [50] (see Supplement Section 4). To reduce measurement noise, here we use a relatively large microwave power (−38 dBm on-chip), which causes a small degree of nonlinear broadening. This nonlinearity also leads to reduced transduction efficiency for large input microwave powers ( Fig. 4(a) During this measurement, we found that the detuning between the optical modes could change by several GHz when the optical pump power was varied, likely due to the same effects that cause the hysteresis described earlier. To measure the transduction efficiency at the triple-resonance point for each power level, we performed measurements on several pairs of photonic-molecule modes. For the highest optical pump powers used in this study (denoted by crosses in Fig. 4(a)), we modulated the optical pump to extinction at a rate of 20 kHz and with a 10% duty cycle. The lower average power in these modulated-pump measurements resulted in smaller power-dependent detunings and more stable resonances.
In the low-power regime (< −30 dBm) we observe that the transduction efficiency scales linearly with pump power at a rate of (1.9 ± 0.4) × 10 −6 /µW. From this and the measured loss rates of the resonators, we estimate the single-photon coupling rate of our transducer to be g 0 = 2π × 650 ± 70 Hz, comparable in magnitude to the predicted g 0 = 2π × 830 Hz (see Supplement Section 1.4), yet slightly lower than expected. This difference is likely due to variations in the as-fabricated geometry of the device. The transduction efficiency begins to saturate at (2.7 ± 0.3) × 10 −5 , the highest-measured efficiency for this transducer. This saturation is caused by optical absorption in the microwave resonator, which generates quasiparticles that shift the resonance frequency and increase the loss rate of the resonator [51]. Figure 4(b) shows the optical-power dependence of the microwave resonator's properties. We find that the quasiparticle-induced changes in the microwave resonator are independent of whether the pump laser is tuned on-or off-resonance with an optical mode, which suggests that the absorbed light does not come from the optical resonator itself. Instead, light scattered at the fiber array and grating couplers is likely the dominant contribution to the quasiparticle loss.
Discussion and Conclusion
The transduction efficiency demonstrated here falls well below the requirements for a useful quantum transducer. However, several straightforward improvements can be made to the transducer to increase this figure of merit (see Supplement Section 5). First, optical quality factors above 10 7 have been demonstrated in thin-film lithium niobate [43], suggesting that the optical loss rates seen here can be reduced by roughly 10-fold, leading to a 100-fold improvement in transduction efficiency. Second, the microwave resonator loss rate can be reduced through improved engineering of the bulk acoustic waves to which the microwave resonator couples. For example, simulations suggest that suspending the lithium niobate layer can reduce the microwave loss by more than 10-fold. The quasiparticle losses caused by stray light absorption for higher optical pump powers can be made negligible by changing the design of the sample mount and optical fiber coupling. By using these and other (see Supplement Section 5) interventions, we predict that near-unity transduction efficiency can be achieved for optical pump powers of ∼100 µW. Although we did not demonstrate optical-to-microwave transduction (only microwave-tooptical) due to a low signal-to-noise ratio, we note that the transduction process described here is fully bidirectional [28].
In this work, we have demonstrated transduction between microwave and optical frequencies using a thin-film lithium niobate device. The photonic molecule design of our transducer enables straightforward tuning of the optical modes using a bias voltage, ensures strong suppression of the downconverted light that acts as a noise source, and takes full advantage of the large electro-optic coefficient in lithium niobate. We have described how the piezoelectric coupling of the microwave resonator to traveling acoustic waves can be engineered to minimize loss in the microwave resonator. The advantages of an EO transducer -namely the system simplicity and the possibility for low-noise operation -and the opportunities for improved transduction efficiency suggest that further development of thin-film lithium niobate cavity electro-optics is warranted.
During the preparation of this manuscript we became aware of a similar lithium niobate quantum transducer device reported by McKenna et al. [52].
Optical modes
The optical modes used in our device are delocalized between two evanescently coupled ring resonators. These optical modes are governed by the Hamiltonian where a 1 and a 2 are respectively the annihilation operators for the bright and dark ring resonator modes, 2δ is the detuning between ring resonator modes, and µ is the evanescent coupling rate. This Hamiltonian can be diagonalized by the Bologliubov transformation Note that a + and a − must obey a bosonic commutation relation a i , a j = δ i j , which requires that u 2 + v 2 = 1. For convenience, we set u = cos θ 2 and v = sin θ 2 , where θ is a hybridization parameter. The Hamiltonian will be diagonalized for tan θ = µ δ , giving where the resonance frequencies are ω ± = ω 0 ± δω 2 + µ 2 . These optical system eigenmodes at frequency ω ± will be used to calculate the interaction Hamiltonian. The loss rates of the photonic molecule modes change with the hybridization parameter θ. In the resolved sideband approximation (2µ κ, where κ is the typical optical mode loss rate), Eq. S2 also diagonalizes the open system, and the internal (i) and external (e) loss rates for the hybrid modes, κ ±, {i,e} , are given by where κ {1,2}, {i,e} are the intrinsic and extrinsic loss rates for the bright and dark ring resonator modes.
Triple-resonance transduction
Our electro-optic transducer has three resonant modes. The two optical photonic-molecule modes Here ω {±,m} are the resonance frequencies, is the dielectric permittivity, V ± are effective mode volumes, a ± and b are annihilation operators for the optical and microwave modes, ì ψ {±,m} are the field profiles1, C is the total capacitance of the microwave resonator, and d eff is a constant with dimensions of length that relates the voltage on the microwave resonator's capacitor to the electric field at the center of the optical waveguide. For our device geometry, all three modes are polarized approximately along the Z-axis of the nonlinear crystal in the region of interest, so we can use a scalar interaction approximation. The three-wave mixing process used in our device can be described by a nonlinear energy density [53] The interaction Hamiltonian for this process can be obtained from Eq. S5 by inserting our expressions for the mode fields and considering only terms that vary slowly near the triple-resonance condition ω m = ω + − ω − , which yields: The single-photon interaction strength is where n is the optical refractive index and the integral is taken over the nonlinear material. If a strong pump laser is tuned to the red optical mode, we can replace the a − operator with its is the total loss rates of the red optical mode, ω l is the pump laser frequency, ∆ − = ω l − ω − is the pump detuning, and P is the pump power. Moving to a frame where the optical modes rotate at the pump laser frequency, the full Hamiltonian for our system is where ∆ + = ω l − ω + is the detuning of the pump from the blue optical mode, and g = g 0 √ n − is the pump-enhanced coupling rate. We now use the above Hamiltonian to estimate the bidirectional transduction efficiency between continuous-wave (CW) optical and microwave signals. Consider two signal fields incident on the transducer: an optical signal a in detuned from the pump by ω p , and a microwave signal b in with frequency ω bin . The semi-classical Heisenberg-Langevin equations of motion governing the interaction between the microwave and blue optical modes are where κ m and κ m,e are the total and external coupling rates for the microwave mode. The resonator modes couple to propagating output fields a out and b out via the input-output relations In the steady state, Eqs. S9 and S10 yield the frequency-domain transduction scattering matrix where The conversion is symmetric, and the on-chip transduction efficiency is where ω is the excitation frequency, and C = 4g 2 n − κ + κ m is the electro-optic cooperativity. At the triple-resonance condition, where ω = −∆ + = ω m , this efficiency takes the form (S13) The first term represents the efficiency associated with getting a photon into and out of the converter, while the second term gives the transduction efficiency inside the converter.
Double-resonance transduction
When the converter is operated far-detuned from the triple-resonance condition, a doubleresonance transduction process can become a significant contribution to the total transduction efficiency. In this process, optical photons in the red optical mode can be scattered between the pump frequency and a blue-shifted sideband. This sideband field is far-detuned from the red optical mode relative the mode's linewidth in our experiments, so the transduction efficiency for this process is low, but it can be larger than that of the triple-resonance process when the splitting between red and blue optical modes is much larger than the microwave frequency. This double-resonance process is the origin of the bias-voltage independent response for large negative voltages in Figs. 3(b) and 3(c) of the main text. The nonlinear energy density that describes this double-resonance process is [53] which produces the Hamiltonian where Following the usual linearization procedure for the strongly pumped a − mode [54], we approximate a − ≈ a − + δa − , where δa − is a small fluctuating perturbation to the field in the red optical mode. Keeping terms of order a − , the linearized interaction Hamiltonian is where g dr = g 0,dr a − . This Hamiltonian contains both the desired beam-splitter terms and parametric amplification terms which cause optical down conversion, and since the pump is nearly resonant with the red optical mode, both types of terms are significant. In a frame where the optical mode rotates along with the laser, the semi-classical Heisenberg-Langevin equations of motion for double-resonance microwave-to-optical transduction are For simplicity, we assume that the double resonance process operates in the weak coupling regime, so that back-action of the optical fields on the microwave field can be neglected2. We take the ansatz solution and find The transmitted optical sideband field due to double-resonance transduction is δa out = − √ κ −,e δa − , and hence the total apparent transduction efficiency3, including both double-and triple-resonance transduction, is
Estimating the electro-optic interaction strength
The triple-resonance interaction strength g 0 (Eq. S7) can be cast in a form more useful for designing the transducer. Assuming that the electric field created by the capacitor is oriented along the lithium niobate's Z crystal axis and uniform across the optical mode (a good approximation for our device geometry), and that the microwave resonator drives the optical resonators with opposite phase and behaves as a lumped-element system, we find 2i.e. the term −ig dr δa − + δa † − can be dropped 3The existence of multiple optical sidebands in regimes where double-resonance transduction is significant means that transduction efficiency must be carefully defined. In our experiments, we measure only the transmission of a microwave signal from the transducer's input to the photoreceiver's output, and we cannot differentiate multiple optical sidebands. As such, we define transduction efficiency for multiple sidebands as the apparent transduction efficiency: i.e. the equivalent single-sideband transduction efficiency which would produce the observed signal. Note that this distinction between apparent and true transduction efficiency is significant only in far-detuned regimes of bias voltage sweeps, not near the triple-resonance condition where maximum transduction efficiency occurs. here r 33 = 2 χ (2) /n 4 is the relevant electro-optic coefficient, n e is the extraordinary refractive index of lithium niobate, Γ is an optical mode confinement factor, α is an electrode coverage parameter with a maximum value of 2 for full coverage of both optical resonators, and θ is the optical mode mixing parameter described above4. From this equation, it is clear that the interaction strength can be maximized by creating a microwave resonator with closely spaced electrodes, low total capacitance, and full coverage of the optical resonators.
The calculated values for key device parameters are given in Table S1. Using these results, we estimate g 0 (θ = π) = 2π × 1.0 kHz.
Simulating piezoelectric loss
We use a two-dimensional finite element model to simulate the piezoelectric loss of the microwave resonator. In this frequency-domain simulation, a voltage is applied to the capacitor electrodes at frequency ω, and the time-averaged electrostatic energy E electrostatic and acoustic power absorbed by the perfectly-matched layer P acoustic are calculated. The quality factor set by piezoelectric-loss is then given by The two-dimensional nature of the simulation means that acoustic modes with out-of-plane (i.e. along the waveguide) propagation or strain are neglected. Modes with an out-of-plane propagation direction couple weakly to the microwave resonator because the capacitor is much longer than the acoustic wavelength at the relevant ∼GHz frequencies. Modes with out-of-plane stress also couple weakly to the microwave resonator for X-cut lithium niobate because of lithium niobate's piezoelectric coefficients. For example, when applying the electric field along the Z crystal axis in our device the d 33 piezoelectric coefficient, which creates in-plane stress, dominates over other components.
Measurement setup and transduction efficiency calibration
Details of the measurement setup are shown in Figure S1. To calibrate the transduction efficiency, we perform the following procedure before every set of measurements.
First, with the laser frequency detuned far from the optical resonance, we measure the optical power into and out of the DUT. After correcting for measured asymmetric losses in the optical fibers going into the cryostat, we assume the loss at both input and output grating couplers to be symmetric. Based on measurements of a large number of grating couplers, we estimate the coupler-to-coupler variation in insertion loss to be less than 0.4 dB. Next, we measure the optical 4Note that the values of g 0 used in the main text are quoted for the optical mode splitting (and hence the value of the hybridization parameter θ) which maximizes transduction efficiency. The maximum transduction efficiency is obtained for θ ≈ 0.7π in our device. The MZM is arranged for either GHz-frequency optical single-sideband modulation using a phase-shifted (PS) dual drive through a high-frequency port, or low-frequency amplitude modulation through a bias port, controlled by an arbitrary waveform generator (AWG). Focusing grating couplers (≈10 dB insertion loss) couple light from optical fibers into the device under test (DUT), which is cooled to T ≈ 1 K inside a closed-cycle cryostat. The light collected from the DUT is split into an analysis arm (90%) and a 1 kHz photoreceiver (10%), whose signal is used to lock the laser frequency to an optical mode. The analysis arm passes through several optical switches (dotted blue lines) which allow for optional and repeatable insertion of an erbium-doped fiber amplifier (EDFA) and optical filter (F, 0.2nm bandwidth). The analysis arm can be sent to an optical spectrum analyzer (OSA) for sideband calibration, a DC optical power meter for power calibration, a 100 MHz photoreceiver for measuring transmission spectra, or a 10 GHz photoreceiver for detecting transduction. The bias voltage of the DUT is controlled by a sweep generator through a bias tee. A vector network analyzer (VNA) can be connected to microwave port V1A, which is protected by a DC block (DCB) capacitor, to excite the DUT. The upconverted optical signal can be detected at port V2. In an alternative measurement setup, the transmission of an optical sideband can be monitored by connecting the VNA to the optical single-sideband modulator (port V1).
power arriving at the output of the analysis arm using the DC power meter. These measurements allow us to estimate the optical insertion loss from the DUT to the end of the analysis arm η optical = η coupler · η fiber , as well as the on-chip optical power. Next, we calibrate the response of the 10 GHz-bandwidth photoreceiver by using port V1 to generate a single optical sideband. We measure the signal in the analysis arm using the high-resolution optical spectrum analyzer (which allows us to directly measure the relative power of the sideband and carrier P sideband /P pump ), the calibrated DC power meter (which measures the total power P sideband + P pump ), and the 10 GHz photoreceiver. These measurements allow us to estimate the detector response parameter A det. , defined so that P det. = A det. P sideband P pump .
During the transduction measurement, when the laser frequency is locked to an optical mode, measurements of the total optical power and the 10 GHz photoreceiver response allow us to infer the power in the upconverted optical sideband P sideband , based on the above photoreceiver calibration. The gain provided by the erbium-doped fiber amplifier (EDFA), if in use, can be estimated by measuring the photoreceiver response with and without the EDFA in the optical path. Finally, the calibrated transduction efficiency is given by where P in is the input microwave power at port V1A and η cable is the measured insertion loss from port V1A to the DUT. Fig. S2. Effect of microwave power on transmission spectrum. Microwave input powers above about −40 dBm at port V1A (∼ −48 dBm on-chip) produce distorted transmission spectra due to nonlinear dynamics. [50] The superconducting NbN film is deposited using DC magnetron sputtering at room temperature with an RF bias on the substrate holder. The film has a thickness of ∼44 nm, room-temperature sheet resistance of 52 Ω/square, and a transition temperature T c of ∼10 K. At high microwave powers, the superconducting resonator undergoes nonlinear oscillations [50], as shown in Fig. S2. In the actual experiment, the drive power is kept below -30 dBm, and the nonlinear dynamics are therefore small. Table S2 lists several relatively straightforward interventions that can be made to improve the performance of our transducer. The predicted efficiency enhancement for each intervention in the table assumes the transducer operates in the low cooperativity limit. | 6,960.8 | 2020-05-02T00:00:00.000 | [
"Physics"
] |
Discrete-Ordinates Modelling of the Radiative Heat Transfer in a Pilot-Scale Rotary Kiln
: This paper presents work focused on the development, evaluation and use of a 3D model for investigation of the radiative heat transfer in rotary kilns. The model applies a discrete-ordinates method to solve the radiative transfer equation considering emission, absorption and scattering of radiation by gas species and particles for cylindrical and semi-cylindrical enclosures. Modelling input data on temperature, particle distribution and gas composition in the radial, axial and angular directions are experimentally gathered in a down-scaled version of a rotary kiln. The model is tested in its capability to predict the radiative intensity and heat flux to the inner wall of the furnace and good agreement was found when compared to measurements. Including the conductive heat transfer through the furnace wall, the model also satisfactorily predicts the intermediate wall temperature. The work also includes a first study on the effect of the incident radiative heat flux to the different surfaces while adding a cold bed material. With further development of the model, it can be used to study the heat transfer in full-scale rotary kilns.
Introduction
Rotary kilns are cylindrical, tilting and slowly rotating furnaces, and the first patented invention resembling a rotary kiln was created in 1885 by Frederick Ransome, which was introduced to the cement industry [1]. Since then, the rotary kilns have been developed and are still used for cement production and several different other industries, such as pulp and paper and for iron ore pelletizing. The unit operation of a rotary kiln is mainly heat treatment of a solid bed material passing from the higher to the lower end of the kiln as the kiln is rotating with a luminous flame along the kiln axis. The heat transfer within the kiln is complex, since it not only includes convective, conductive and radiative heat transfer, but heat is also transferred in the angular and axial directions as the wall rotates and the bed mixes. Additionally, the bed material may be reactive and heat addition or loss from reactions should then be considered.
Several researchers have examined the heat transfer mechanisms within the kiln freeboard, bed material and walls. Experimental data from full-scale rotary kilns are, however, difficult to samplemainly due to the large dimensions and the rotation of the kilns. Many studies have therefore focused on the development of different models, while others have conducted experiments in smaller furnaces. Cross and Young [2] examined flame characteristics and pellet throughput in a rotary kiln for oil and gaseous flames employing a one-dimensional flame model assuming grey radiative properties. Gorog et al. studied the radiative heat transfer in rotary kilns, with and without luminous flames, as well as the regenerative heat transfer from the kiln wall while varying parameters such as the fuel type or temperature and oxygen enrichment of the secondary air [3][4][5]. Their work showed the computational errors associated with assuming grey properties by comparing results with calculations using a non-grey equimolar CO2/H2O mixture for the freeboard gas. Thornton and Batterham [6] developed a one-dimensional dynamic model of the radiative heat transfer and bed mixing in a rotary kiln. Barr et al. [7,8] studied a pilot-scale rotary kiln and the heat treatment of different solid materials. They developed a cross-section model of the radiative heat transfer in the rotary kiln for a natural gas flame with a reacting bed material using secondary air at ambient temperature. Boateng and Barr [9] developed a quasi-3D model of the thermal transport in a rotary kiln by comprising a 1D axial model together with a 2D model of the rotary kiln cross-section for the bed material. Several researchers have also studied the conductive heat transfer between the wall and bed material [10][11][12][13] as well as the convective heat transfer between the freeboard and the wall and bed materials [14]. The movement of the bed material, the different modes of heat transfer as well as the impact from different parameters on heat transfer in a rotary kiln, such as the temperature dependent radiative emissivity of different materials, has been summarized by Specht [15]. Several researchers have also developed different computational fluid dynamics (CFDs)-based models in order to model full-scale rotary kilns [16][17][18][19]. While such models can be used to provide valuable information of various processes, they rely on predictions of gas and surface temperatures, which can increase their uncertainty relative to measured properties.
This work focuses on rotary kilns used by the Swedish mining company Luossavaara-Kiirunavaara Aktiebolag (LKAB) for iron ore pelletizing, in which preheated iron ore pellets are further heated, oxidized and sintered as they move through the kiln, which has a total length of approximately 34 m and an inner diameter of 5.5 m. The bed of iron ore pellets constitutes approximately 10% of the kiln volume and the heat required by the process is mainly supplied from the suspension-fired combustion of fossil coal at a range of 35-40 MWth, with the burner positioned at the lower end of the kiln. The heat is mainly transferred to the pellet bed and kiln walls from the long jet flame due to radiation. High oxygen concentrations and gas temperatures of the freeboard gas are desired to promote oxidation of magnetite to hematite, achieved by preheating large volume flows of secondary air. The gas stream leaving the rotary kiln holds a temperature within the range of 1150-1250 °C and an oxygen concentration of approximately 16% [20]. The heated pellets enter the rotary kiln at temperatures slightly lower than the outgoing gas temperature and leave the rotary kiln at approximately 1270 °C [20], depending on the fuel used. The hot bed of pellets falls down to a cooler and is cooled with air passing through the bed. The air leaving the cooler is introduced as secondary air to the rotary kiln at a temperature of approximately 1050 °C. A more detailed description of LKAB's process can be found in the work by Jonsson et al. [21].
The use of coal as fuel results in large emissions of carbon dioxide and, due to the concerns of an increased global greenhouse effect, there is an urgent interest to switch to less carbon-intensive fuels. However, since the control of the kiln and the flame conditions are critical to achieve a highquality product, it is necessary to consider how a change in fuel may affect the heat transfer within the rotary kiln, or rather, how to minimize eventual effects on the product. The aim of this work is to improve the understanding of the heat transfer process in rotary kilns using solid fuels, since quantitative knowledge on this is rather limited today. Here, the radiative heat transfer is examined for a stationary kiln, and modelling results are compared to measurement data gathered from a 580 kWth pilot-scale furnace in order to validate the model. The modelled and measured incident radiative heat fluxes to the inner wall of the furnace are of main interest, since they correspond to the heat transferred to the bed of iron ore pellets. Calculated radiative intensities and wall temperatures are also of interest in order to perform a first validation of the model.
Methodology
In the rotary kiln, heat is transferred to the iron ore pellet bed due to thermal radiation, convection and conduction within the bed and from the wall. With a long and luminous flame, the total heat transfer within the kiln is, however, considered to be dominated by radiation [1]. The need for an accurate and detailed model of the radiative heat transfer within the kiln to achieve a better understanding of the process and the total heat transfer is therefore obvious.
In this work, the radiative heat transfer is modelled and studied in three dimensions using a discrete-ordinates method (DOM) solving the radiative transfer equation (RTE) for cylindrical and semi-cylindrical enclosures. Originally, the DOM gained interest in the work of Carlson and Lathrop [22], solving neutron transport problems in different geometries. Since then, the DOM has been applied to and optimized for different radiative heat transfer problems; see, e.g., the review paper of Coelho [23]. Applying a DOM, the examined enclosure is divided into a number of three-dimensional cells and the RTE is solved for a set of weighted discrete directions for each cell. The DOM has been successfully used in previous studies of different cylindrical enclosures [24][25][26] as well as more complex geometries [27][28][29][30][31].
The authors have previously published a paper on the heat transfer modelling within a full-scale rotary kiln for iron ore pelletizing using a DOM [32], and the aim of this work is to describe the detailed radiative heat transfer model used, for a pilot-scale rotary kiln, and validate it by comparing it with measurement data. The developed model and treatment of the radiative heat transfer, in three dimensions, including emitting gases and fuel particles, were previously presented by the authors at the 6th International Conference on Computational Thermal Radiation in Participating Media [33] and have been further developed in this work to include the heat transfer through the furnace wall and an iterative solution of the inner wall temperature and the incident radiative heat flux. The authors have also applied a discrete transfer model (DTM) to study the radiative heat transfer in earlier studies for different cylindrical furnaces with satisfactory results while assuming axisymmetric flames [34,35]. The DOM has, however, been shown to be more economical and studies of non-axisymmetric systems of a whole furnace (in 3D) are more easily implemented in comparison to the DTM [36]. The modelling approach is described in more detail below.
The model in itself is not predictive of the combustion conditions within the pilot-scale furnace but can be used with advantage to study the radiative heat transfer properties in an industrial-scale rotary kiln in detail. Given an input data set of temperature, gas and particle concentrations, as well as surface properties, sensitivity analysis of certain parameters can be conducted to gain additional insight into the heat transfer process. The model can be used in future work as a submodel in a more comprehensive model, coupling the radiative heat transfer with gas, particle, and energy transport as well as combustion reactions in order to provide a more comprehensive predictive tool.
Input data for the model were gathered from experiments comprising temperature, gas composition and particle concentration in a rotary kiln pilot-scale furnace with a thermal input of 580 kWth and using a carbon-rich coal as fuel.
Modelling
The radiative transfer equation (RTE) describes how the radiative intensity, I, changes along a direction , while it accounts for the contributions from emission, absorption, and scattering into and away from the said direction. The RTE may be expressed for a given wavenumber, , as: where and are the absorption and scattering coefficients for the present medium, respectively, and (̂ ) is the spectral intensity scattered into direction ̂ from a small ray originating from direction ̂ depending on the scattering phase function, , over the solid angle Ω .
Considering a rotary kiln, the modelled enclosure will be defined by the cylindrical wall and the present bed material, i.e., a semi-cylindrical enclosure. To simplify the geometry of the enclosure, the bed material is assumed to be evenly distributed along the furnace axis and may then be approximated as a plane, as previously done by Thornton and Batterham [6]. While discretizing the enclosure into cells, the radial angle between two cells (Δ ) was kept constant and due to the geometry of the enclosure, two different cell types will be present in the DOM, here referred to as type I and II, as is shown in Figure 1. The total number of cells used in the DOM is given by: where represents the number of cells in the radial (r), angular (ψ), and axial (z) directions. The radiative intensity to each cell node is calculated for a set number of directions, derived according to an SN approximation, where N denotes the number of different direction cosines used for each principal direction, m, such that the total number of directions is given by: • ( + 2). For the cylindrical and semi-cylindrical enclosures of a rotary kiln, the RTE may be expressed in terms of angular coordinates [25], according to: where , and are direction cosines for a ray travelling along the discrete direction ̂ , according to Figure 2. The ordinates may be positive or negative with respect to the spatial coordinate system and a weight, , is related to each quadrature set of ordinates to cover the whole sphere of 4 sr from a point according to Equation (3). The quadrature sets are further constructed in a way to be invariant if rotated 90° and are described more thoroughly by Fiveland [37].
With an increasing N value, the number of rays increases along with the computational power required to solve the RTE. According to Fiveland [37], the angular discretization with the S4 approximation may yield reasonable enough solutions for some systems, while higher values require considerably more numerical effort. It should be noted, however, that the DOM may suffer from ray effects, which can be reduced by an increased number of discrete directions [23]. For this reason, the S8 approximation is used in this work. Table 1 lists numerical values of the discrete cosines and weights used, collected from the work of Modest [36]. Table 1. Discrete cosines and weights used for the S8 approximations [36]. By employing the angle between two cells in the angular direction, Δ , Equation (2) is solved by multiplying cell types I and II, respectively, with the expressions given by Equation (4): and integrating over the volume elements for each cell type. For both cell types, the following expression is then achieved: where the cell wall areas through which the radiation enters and leaves the cells are represented by Ai, Bj and Ck, and the cell volume by Vp. The ± / terms are introduced to handle the discretization of the direction vector and to correct for the curvature of the cylindrical furnace [22,38]. The six different cell surfaces for the two cell types are presented in Figure 3. Defining the cells from a line between the center of the furnace and the middle of the bed surface, in the angular direction, , is the shortest distance from the central axis of the kiln to the bed surface. Each cell surface may then be expressed as: where i, j and k represent the cell number in the radial, angular and axial direction, respectively, and is the angle between two angular cells. The cell volume may be calculated as = Δ for both cell types. To find the ± / terms, a case in which divergenceless flow is assumed can be used according to Carlson and Lathrop [22], i.e., a case with neither intensity sources nor sinks, all intensities are alike and the right side of Equation (5) To calculate the radiative intensities, an iterative solution process is required since the RTE is solved for each discrete angular direction, resulting in as many coupled partial differential equations. The iterative procedure starts at the wall at one end of the cylinder and, during the first iteration, values for the reflected and angular intensities are assumed, and thereafter updated in each iteration. The discretized rays are spread from the nodal points in the cells over spheres, and calculations are performed for one sphere octant at a time. With a beam directed from the wall towards the furnace center, the intensity to a node point P from one direction in an octant can be expressed as in Equation (8) The DOM may also suffer from false scattering, which may be reduced by using an appropriate spatial discretization scheme [23]. Here, a diamond differencing scheme is used to calculate the upstream intensities of IA, IB and IC. The diamond scheme is, however, known to have a shortcoming in eventually producing negative intensities. In the model, if any such negative value appears, it can be set to zero. It is also possible to ensure that these negative values never appear using a finitedifference weighting factor set to 1 as described by Fiveland [37].
In this work, using grey coefficients to represent the contributions from gases and particles, the RTE is solved for an overall (grey) intensity in order to reduce the required computational power. Considering the present gases, a weighted-sum-of-grey-gases (WSGG) model is used to estimate grey gas absorption coefficients for each cell in the model. Using a WSGG model (as first presented by Hottel and Sarofim [39]), the actual gases present in the furnace are represented by a mixture of a number of grey gases, where the contribution from each grey gas is connected to a weight depending on the real gas composition. Derived from the work of Johansson et al. [40], a modified WSGG model is used in this work, in which a set of four grey gases and one clear gas are applied, considering the gas concentrations of CO2 and H2O. Spectral particle properties are calculated according to Mie theory for fuel and ash particles. Complex refractive indices from Foster and Howarth [41] are utilized for fuel particles, while a combination from the works of Lohi et al. [42], Gupta and Wall [43] and Goodwin and Mitchner [44] are utilized for ash particles. Present soot particles are considered to be small and Rayleigh theory is used to calculate the absorption coefficient with complex refractive indices from Chang and Charalampopoulos [45]. From particle size distributions, spectral absorption and scattering coefficients can be calculated for each particle type and size and summed to a single representative value of either coefficient. Using Planck averaging, the spectral particle properties can be turned into grey particle properties and used together with the WSGG model. A simplified scattering phase function is used in this work and cases assuming either pure isotropic or forward scattering (i.e., no scattering) are examined along with an additional case assuming 80% forward scattering, as suggested by Gronarz et al. [46]. A WSGG model was used previously for gases and coal particles together with the DOM by Yu et al. [47] for a cylindrical furnace of 3 m and 0.6 m inner diameter. They found that the DOM provides credible results of the calculated radiative heat transfer together with a WSGG model. Once the RTE is solved for the furnace using the DOM, the incident radiative heat flux to the inner wall of the furnace can be calculated. This heat flux corresponds to the maximum radiative heat that can possibly be absorbed by a bed material in a rotary kiln. The incident radiative heat flux, , is calculated from the weighted sum of the incident radiation intensities from all directions, with representing any direction cosine, according to Equation (9) Depending on the absorptivity of the wall material surface, a portion of the incident radiative heat flux will be absorbed by the wall, heating the wall. If the conditions on the outside of the furnace, i.e., outer air and surrounding temperatures, are known, the inner wall temperature can be estimated from an energy balance of the heat transfer through the wall. At steady state conditions, the energy difference between the absorbed and emitted radiative heat must be transferred through the wall, neglecting convective heat transfer since radiation has been shown to dominate the total heat transfer on the inside of the furnace [32]. Further assuming grey and diffuse properties of the wall surface, i.e., absorptivity and emissivity are equal, the conductive heat transfer through the wall may be estimated according to Equation (10).
where the outer wall temperature, , may be found from an energy balance over the furnace wall, since the heat transferred through the wall will at steady state be equal to the heat transfer, or rather heat loss, to the surroundings (at temperature ) due to natural convection and radiation. Using an overall heat transfer coefficient for the conductive heat transfer, , based on the materials in the composite wall, the outer wall temperature may be estimated according to Equation (11) [32].
The expression of Equation (12) is considered to be valid for < 10 [50]. By iterating the energy balance given in Equation (10), the inner and outer wall temperatures may be calculated from the incident heat flux and the outer wall heat loss, as well as an estimated temperature profile through the wall.
Data Gathering and Usage
The radiative heat transfer conditions in a rotary kiln have been experimentally examined by measuring the gas properties and particle concentration in a pilot-scale test furnace. The test furnace, as shown in Figure 4, is cylindrical and slightly tilting (3° from the horizontal line), constructed as a down-scaled version of a full-scale rotary kiln used for iron ore pelletizing, applying constant velocity scaling [51], with a length of 4.8 m and an inner diameter of 650 mm. The inner wall was refractory lined, and an insulating material was used between the inner layer and the outer steel wall, with a total wall thickness of 285 mm. Probing and in-flame measurements were possible along the furnace axis due to 13 installed measurement ports, labeled MH 0-12, allowing for radially as well as axially distributed measurements within the furnace. The burner, positioned at the lower end of the furnace and in line with the furnace axis, had six registers for primary air and fuel and has been described more thoroughly by Edland et al. [52]. To mimic the full-scale process, where hot air from the product cooler section is used as secondary air to the kiln, large secondary air volumes of 2300 Nm 3 /h at a temperature of approximately 1050 °C were introduced through two large registers, one above and one below the burner, as described previously by Bäckström et al. [35]. The thermal input to the burner during the experimental campaign was 580 kWth but no bed material was present, and the test furnace was stationary. The flue gases leave the furnace through an extended piping to a stack. Parameters measured during the experimental campaign included the radiative intensity using a narrow angle radiometer (NAR); radiative heat flux using an ellipsoidal radiometer; gas and wall temperatures using a suction pyrometer with a type B thermocouple, an IR camera and stationary thermocouples; gas composition (H2O and CO2) using a FTIR; and particle concentration and size distribution using a low-pressure impactor (size range 30 nm-10 µm). To achieve a good distribution of measurement positions along the furnace axis as well as the furnace diameter, measurements were focused on ports MH 0, 1, 3 and 7 for most parameters, while the IR camera was used for ports MH 0-10. Using water-cooled probes, measurements of temperature, gas composition and radiative intensity were gathered at several positions along the furnace diameter, while the incident radiative heat flux was measured at the position of the inner wall and the particle concentration at the centerline of the furnace. Performing radiative intensity measurements, quartz windows were placed in the opposite wall, acting as cold backgrounds to reduce background radiation along the furnace diameter and to better study the radiation from the flame. The temperature within the wall material was measured along the furnace axis, and thermocouples were positioned 70 mm within the wall from the inner surface. A more detailed description of the test furnace, the gathering of the data, the different measurement probes and techniques used can be found in [34].
Collected measurement data on temperature, gas composition, particle concentration and size distribution were used as input data for the model and prescribed at the cell nodes. Parameters were assumed to change linearly between two measurement positions, radially as well as axially, considering the linear temperature change in T 4 . Values at the axial position of the burner were set from the measurements at port MH 0 and from the known secondary air inlet temperature and gas composition. Between the axial position of port MH 7 and the outlet of the furnace, the radial temperature was linearly changed to become isothermal at 1200 °C at the outlet and the gas composition was set to be the same as at the axial position of measurement port MH 7. Wall properties were set at the cell surface nodes but without available data concerning the solids radiative properties, the wall was assumed to be grey and diffuse. The lower end (burner position) and gas outlet/stack of the kiln were represented as two discs at the bottom and top ends of the cylinder, respectively, acting as boundaries in the DOM, with a set emissivity of 0.80. Several different fuels and fuel combinations of coal and biomass were tested in the furnace during the measurement campaign, though the modelling work in this paper is focused on a single case using a carbon-rich coal as fuel. The fuel particle concentration was known at the burner inlet and measured at the central position for ports MH 3 and 7. The collected and examined particle samples from the flame were found to contain small amount of ash in comparison to char and fuel particles. Ash particles were therefore neglected in the modelling work. Further, the ash contained a small amount of scattering components such as Fe2O3 [53]-only approximately 5% of the ash was made up by Fe2O3. The char particle size distribution used in the modelling work was divided in 12 steps according to measurements. Further details on the measurement campaign, the different fuels examined and measurement data can be found in the work by Gunnarsson et al. [34].
Results and Discussion
To study the radiative heat transfer within the pilot-scale furnace, the described 3D model applying the DOM has been used as a tool in this work for a case using a carbon-rich coal at a thermal input of 580 kWth, and the results are compared to measurements. Using a grid resolution of 30 × 60 × 100, the model was applied for the test furnace full length of 4.8 m, measured from the burner position. Extended temperature and gas concentration maps are shown in Figure 5, based on measurement data gathered at positions along the furnace diameter for several ports, where the measurement positions are indicated with asterisk ( * ) symbols. Figure 5a-c shows cross-sectional temperature contour maps corresponding to the axial distances of ports MH 1, 3 and 7, as observed from the burner, and the estimated temperature in the whole furnace is shown in Figure 5d, as observed from above. Using an IR camera, the furnace inner wall temperature was estimated for ports MH 0-10 under the assumption of an isothermal wall for all angular positions for each axial position (Figure 5d). Downstream of port MH 10, the wall was assumed to be linearly decreasing to 1200 °C at the gas outlet. Figure 5e,f shows cross-sectional gas concentration maps of CO2 and H2O, as observed from above the furnace.
During the experiments, radial symmetry of the flame within the furnace was aimed for but, as can be observed from the temperature maps, the flame became slightly tilted to the left from the burner. It should further be noted that since measurements were performed only along the horizontal line of the furnace, possible variations in the vertical direction could not be observed. The measured (triangles) as well as the modelled incident radiative heat fluxes along the furnace axis are shown in Figure 6a, with the burner located at the 0 m position, and the measurement data corresponding to ports MH 0, 1, 3 and 7. In the model, the scattering was set to be either isotropic (continuous line) or only forward (dashed line), but only marginal effects from the scattering assumption were observed; the isotropic and forward scattering predictions essentially lie on top of each other. This is probably mainly due to the relative size between the diameter and length of the furnace. It appears as if a large portion of the incident radiative heat flux to a position along the furnace axis originates from the flame and wall close to each axial position, minimizing particle scattering effects, and it is possible that the scattering would show a somewhat larger impact with an increased furnace diameter. At port MH 7, the model overestimates the radiative heat flux, which is most probably an effect of an overestimated wall temperature using the IR camera. Since radiation from the flame may be reflected in the wall and the wall emissivity is not fully known, estimations of the opposite furnace wall temperature using an IR camera is challenging. Further, at this axial position, the flame was slightly tilted towards the opposite wall of the furnace, causing an overestimation of the overall wall temperature assuming no angular variation of the wall temperature. Considering this, the model was instead executed for a case where the wall temperature was assumed to adopt the temperature of the gas closest to the wall, according to Figure 5d-that is, for an overall colder wall with angular variations in temperature-and the modelled incident radiative heat flux is shown in Figure 6b. As can be observed, the difference between measured and modelled values is decreased for the downstream port MH 7 due to the cooler wall (~110 °C at port MH 7). Figure 6c-e shows the measured radiative intensities for the respective ports MH 1, 3 and 7, indicated with triangles, and the modelled total radiative intensities, using the same assumptions as for Figure 6b. During the measurements, the NAR was traversed along the diameter of the furnace, entering through a measurement port. With the probe located at the inner wall, at the 0 mm position, high radiative intensities were measured due to the long path length of hot gases and particles in the probe's line of sight. Moving the probe through the flame and closer to the opposite wall, the measured radiative intensity decreases as the path length of hot gases and particles decreases. At the opposite wall (650 mm), the probe is directed towards a cold quartz glass in the furnace wall and the measured radiative intensity comes close to zero. In the model, this cold quartz glass is accounted for, allowing the wall to be cold at the specific and corresponding axial distance from the burner and for one angular position. Unlike the heat flux, large effects are apparent from the scattering assumption in the modelling of the radiative intensity. Assuming only forward scattering (continuous lines), i.e., no scattering, the radiative intensity is predicted satisfactorily for port MH 1 but underpredicted for ports MH 3 and 7. Instead assuming isotropic scattering (dashed lines), the model predicts the radiative intensity rather satisfactorily for all three ports. This increase in the modelled radiative intensity is due to the additional radiation scattered into the single direction studied with the NAR from the hot surrounding wall. That is, while assuming forward scattering, the only contribution to the total radiative intensity, in one narrow direction, originates from the emitting gases and particles present in the NAR's line of sight. An additional case, assuming the scattering phase function was 80% forward scattering (dotted line) and 20% isotropic scattering, was also studied and found to underestimate the radiative intensity at ports MH 3 and 7. Solving the RTE and the energy balance expressed in Equation (10), in an iterative process, the inner and outer wall temperatures can be calculated from the set gas temperature, gas concentration Measurement Forward scattering Isotropic scattering 80% Forward scattering and particle concentration within the test furnace. Figure 7a shows the calculated inner wall surface temperature along the furnace axis using the DOM as well as the estimated wall temperature using the infrared camera, the gas temperature close to the inner wall and the maximum flame temperatures based on measurements. Figure 7b shows the temperature of the inner and outer wall surfaces as well as the calculated and measured temperature at the intermediate position of 70 mm from the inside of the wall. The thermocouples were positioned along the axis of the furnace, beyond the insulation material, and as can be observed, the temperature drop appears within the insulation. The incident radiative heat flux for the calculated wall temperature profile is shown in Figure 7c, with good agreement to the measured values, and may be compared to Figure 6a,b. From the results shown in Figure 7, it appears that the estimated temperature using the infrared camera is in good agreement with the calculated wall temperature close to the burner, while it is closer to the maximum flame temperature farther downstream. It appears as reflected radiation was observed at the downstream measurement ports. At these downstream positions, the gas temperature closest to the wall is instead in good agreement with the calculated wall temperature. This could be considered to be in agreement with the observation of the flame being tilted towards the wall opposite of the measurement ports at these positions, as shown in Figure 5d. Regarding the modelled and underestimated radiative intensity at port MH 7, shown in Figure 6e, a better fit to the measurements could be achieved using the wall temperature as estimated with the infrared camera and assuming some particle scattering. However, considering the relatively large difference in the incident radiative heat flux at port MH 7, shown in Figure 6a, a more probable explanation is an underestimated number of particles at this axial position. Therefore, the impact on the radiative intensity and heat flux was studied, assuming the same case as in Figure 6b but changing the particle concentration at port MH 7. It was found that a 30% particle concentration increase at positions downstream of port MH 7 resulted in a rather good fit if isotropic scattering was assumed, while assuming forward scattering still underestimated the radiative intensity, as shown in Figure 8a. To illustrate the importance of considering the presence of particles, a case assuming no particles at all in the furnace was also examined and the radiative intensity is clearly reduced. Figure 8b shows the calculated incident radiative heat flux for those cases as well and the reduced heat flux due to the absence of particles is evident. Another explanation for the underestimated radiative intensity could be that the highest temperatures were observed at this axial position. Due to the difficulties of measuring accurate temperatures in coal flames, the measured flame temperature could actually be underestimated at this position. While the wall temperature has been shown to have a relatively large effect on the incident heat flux, the wall emissivity could also be considered to have some effect. The wall emissivity was therefore varied from 0.80 to 0.60 and 1.00 for the inner wall surface. Figure 9 shows how the incident radiative heat flux varies with a varied wall emissivity and only marginal effects from the emissivity change can be observed, the predictions essentially lie on top of each other. The explanation for this is the small scale of the furnace, with a radius much smaller than the furnace length (0.325 m << 4.8 m), the grey assumption of the wall material and the fact that it is a closed enclosure with a relatively small axial wall temperature gradient (see Figure 7a). Again, the largest portion of the incident radiative heat flux originates from the flame and wall close to each position (with a similar wall temperature) and after several surface reflections the radiosity (reflected and emitted radiation) from a surface comes close to black body radiation and becomes independent of the wall emissivity, if the radiative properties are assumed to be grey. That is, for a furnace with a larger radius, in comparison to the furnace length, the impact from the changed wall emissivity may become more evident, and more affected from the set end boundaries. The model also has the ability to include a bed material, introducing cells of Type II; see Figure 1. Figure 10a,b shows the incident radiative heat flux to the furnace wall for a case with an additional bed material, in which the bed material accounts for approximately 10% of the furnace volume, as the full-scale furnace used by LKAB. The cylindrical furnace wall has been flattened out in the figures and the presence of the bed material is indicated, close to the top of each figure. In Figure 10a the wall and bed temperatures are assumed to be equal to the closest gas temperatures, while in Figure 10b a made-up case is considered in which a bed is introduced at 100 °C at the axial position of 4.8 m and leaves with a temperature of 1050 °C at the burner (0 m), i.e., equal to the secondary air inlet temperature, in an attempt to imitate, e.g., a bed of pellets being heated as it is moving through the kiln, though with a much larger temperature gradient. The bed and wall emissivities were both set to 0.80 in this example. To more easily compare the two cases, the gas composition and temperature profiles were simplified to be radially symmetric-that is, profiles from Figure 5d-f (0-325 mm along the diameter) were used for all angular positions at each axial position, and forward scattering was used. As expected, when adding a cold bed material to the kiln, the maximum incident radiative heat flux is clearly visibly lowered in Figure 10b: the overall incident radiative heat flux at all surfaces is reduced. It may also be observed that the radiative heat flux is higher to the cold bed material than to the surrounding wall at the same axial position. This example illustrates the possibility to use the model to study the effect of different parameter changes.
Conclusions
A discrete-ordinates method has been applied to develop a 3D model of the radiative heat transfer in cylindrical and semi-cylindrical enclosures in order to resemble rotary kilns with a present bed material. Experimentally gathered data from a pilot-scale test furnace, comprising temperature, gas composition and particle concentration, were used as input data for the model and the modeled radiative heat fluxes were compared to measurements with satisfactory agreement.
The wall temperature was found to have a significant impact on the radiative heat flux, whereas the inner wall emissivity only showed a marginal impact due to the relation between the furnace radius and axial length as well as the assumption of grey radiative properties. The inner wall temperature and the incident radiative heat flux were calculated in an iterative process from the prescribed properties in the freeboard using Equation (10), including conduction through the furnace wall as well as heat losses from the cylindrical outside of the test furnace, and provided good results when compared to measurements. The intermediate wall temperature was also estimated in a satisfactory way when compared to the measurements from the thermocouples positioned within the wall, 70 mm from the inner surface of the wall. The radiative intensity was found to be underestimated in the model when scattering was neglected. Instead assuming isotropic scattering, the model predicted the measured radiative intensity quite well, since radiation was then scattered into the measured direction (with the NAR) from the hot surrounding wall. The impact on the radiative heat flux from the presence of particles in the freeboard was also examined by varying the particle load.
Though no experiments including a bed material in the furnace were performed in this work, the possibility of introducing a bed material to the model was demonstrated, and the effect on the overall radiative heat flux of a cold bed material was examined. As could be expected, the overall radiative heat flux to the bed and wall was reduced when a cold bed material was introduced to the furnace, but the radiative heat flux was found to be higher to the bed material than to the surrounding wall at the same axial position.
The model demonstrated the possibility to study radiative heat transfer in full-scale rotary kilns in future work, using different fuels such as oil or coal, and performing desirable operational sensitivity analyses. | 9,322.6 | 2020-05-01T00:00:00.000 | [
"Physics",
"Engineering"
] |
Directed aging, memory, and nature’s greed
Plastic deformation produced by external strains directs the aging of disordered materials to create unusual elastic response.
INTRODUCTION
The incremental process of aging affects materials over extended periods of time (1)(2)(3)(4); for example, a plastic may become more brittle or a glass may slowly shrink in volume. These changes occur without the apparent influence of an external force directing the evolution. They are simply part of the inexorable aging process, which often leads merely to a degradation of physical properties. However, it comes as no surprise that in other cases, such as a beam sagging under the influence of gravity, an externally imposed stress forces the material to deform over time in an obvious and well-defined manner. Here, while the geometry of the beam has perceptibly changed so that it may no longer do the job for which it was originally intended, changes in the material's elastic properties are less apparent.
Our purpose here is to highlight how the mechanics of materials can evolve in unexpected ways due to a process of directed aging. Materials that have not reached equilibrium often retain a memory of how they were processed, trained, stored, or manipulated (5). During aging, they evolve in a direction dictated by those conditions. The resulting properties contain a memory of the applied load. We unite these concepts of directed aging and memory with the idea that nature often follows a greedy algorithm in its evolution. We first recall how materials can be manipulated on the computer and then explore how natural evolution can be harnessed to produce a similar outcome.
Materials can often be considered on an atomic scale as a network of bonds between nodes (6,7). Previous work has shown that networks can easily be tuned to have specific properties by removing a small subset of computationally determined links between nodes. For example, a spring network can be transformed from being nearly incompressible with a positive Poisson's ratio, ≈ 0.5, to being almost completely auxetic with ≈ −1 (i.e., so that compression along one axis causes the transverse directions to become equally compressed) (8)(9)(10)(11); the network can be tuned over the entire spectrum of elastic behavior into a regime where few materials exist. Likewise, networks can be pruned for allosteric behavior so that a local applied strain induces a large displacement at a distant point (12)(13)(14).
It takes unexpectedly few alterations in the original network to create either of these functionalities. A takeaway message is that structures generated from packings with rugged energy landscapes are often extremely malleable-with seemingly modest modifications, they can be easily manipulated to have unusual, esoteric, and finely tuned properties.
In these examples, a "greedy" computer algorithm was used at each stage to find the alteration (i.e., pruning of a bond) that brings the network closest to the desired final state. The algorithm does not require-nor necessarily benefit from-a more sophisticated evolutionary process that samples many alternative paths to reach an optimal outcome.
Here, we explore the extent to which a material in the process of aging can be considered as following nature's-as distinct from a computer's-greediness to achieve unconventional properties. Can a material, without the intervention of a computer, transform by retaining a distinct memory of the forces it has encountered in its lifetime?
Gedanken experiment
We start with a highly idealized gedanken experiment on a large heap of sand. Grains in this heap are under pressure from the material above it. Deep in the pile, the pressure is enormous and some of the contacts between grains experience immense forces. As they age, it is reasonable to expect that the contacts deform plastically, with those experiencing the largest forces deforming most rapidly. Over long times, these incremental deformations could become substantial and change the contacts between grains considerably. While this seems straightforward, this system is interesting because there is preferential alteration of the bond characteristics depending on the magnitude of stress that each individual contact feels under the applied stress.
To gain insight into the effect of such preferential alteration, we recall the evolution under selective bond pruning of a spring network under compression. An idealized example of frictionless spherical grains jammed by compression (15) can be converted into a network by replacing spheres with nodes and replacing contacts between spheres with unstretched springs connecting the nodes (16,17). In such disordered networks, the contribution of any specific bond to the bulk modulus, B, is, to a large extent, independent of its contribution to the shear modulus, G (8). If bonds are pruned according to how much stress they feel because of an externally applied stress, the system's bulk and shear moduli change in different ways.
of 7
In particular, when the system is placed under isotropic compression, pruning of bonds under the largest stress causes B to decrease more rapidly than G so that the ratio G/B increases (8). Note that we prune a bond according to the stress it is under and not according to how much it would change a modulus if it were removed. While the latter is a more effective way to prune a network (9, 10), it is not essential.
For an isotropic material, the Poisson's ratio, , is a monotonic function of (G/B): = , where d is the spatial dimension. Therefore, as (G/B ) increases, the material is driven to have a negative Poisson's ratio. This result for pruned networks suggests that a sandpile under pressure could evolve toward auxetic behavior as well-an unexpected outcome.
Note that our gedanken experiment neglects any particle rearrangements that could occur during aging (4,18,19) and is therefore valid only at the early stages before particle rearrangements occur. The experiments and simulations we describe below also do not allow particle rearrangements.
Results from experiment
The gedanken experiment inspires the following experiments on aging under compression. From a sheet of EVA (ethylene vinyl acetate) foam, we laser-cut two-dimensional (2D) systems as shown in Fig. 1A. We make four different kinds of systems as described in Materials and Methods: jammed packings of discs, networks derived from jammed packings, random holey sheets, also derived from jammed packings, and random networks based on a triangular lattice.
All of these systems are then aged by confining them for a time in a square rigid box that has a smaller edge length, L box , than the original length of the system, L initial . We define the training strain as ϵ T ≡ (L box − L initial )/L initial . Since we train our samples under compression, our values for ϵ T are always negative. We measure the Poisson's ratio, (, ϵ T ), by removing our sample from the confining box and immediately compressing it along one axis while measuring the deformation in the perpendicular direction (see Materials and Methods). Figure 1B shows data for the Poisson's ratio of networks as a function of training time, . For small training strains, we do not see any substantial change in , but for |ϵ T | ≥ 0.15, eventually becomes negative. This suggests that aging here is a nonlinear effect that requires large strains. Figure 1C shows how changes on aging for different values of imposed strain in the long-time limit. For all four systems, decreases in accord with the reasoning behind the gedanken experiment. For the jammed networks, random triangular networks, and holey sheets, becomes negative at larger values of |ϵ T |.
We note that the decrease in is different in each of the four systems. Our data suggest that systems with a larger void fraction (fraction of material that has been cut out) reach a smaller Poisson's ratio. A correlation between density and the Poisson's ratio, , has been reported in (20). We note, however, that previous work on networks showed that the Poisson's ratio could be varied between its two extreme limits while keeping the number of network contacts (and therefore the network density) fixed (8). In keeping with this result, the Poisson's ratio of our systems before aging does not depend strongly on the void fraction. Only after aging under compression do we find a correlation. This suggests that the low-density networks have a higher capacity to be trained. Perhaps large voids allow larger nonaffine strains in the network, which, in turn, produce larger changes in the network structure and stiffness.
In this example of directed aging, the material naturally acquires an auxetic response. The fact that all systems show similar behavior suggests that directed aging of isotropic disordered systems under compression may lead more generally to reduced values of the Poisson's ratio.
There are two ways in which aging may have affected the network: (i) bond strengths change due to the stresses to which they were exposed, and (ii) the geometry of the network changes due to internal rotation, deformation, and/or buckling of bonds. To assess the relative contributions of these two effects, we image an aged network. We use that image to pattern another network out of unaged material that is as geometrically identical as possible to the aged one. The difference between the Poisson's ratio of these two (nearly) identical networks is presumably due solely to the aging of the stiffness of the contacts.
The Poisson's ratio of these geometrically identical but unaged networks is shown in green triangles in Fig. 1D. We consistently find that the unaged, copied networks have a lower Poisson's ratio than the original networks but not as low as that of the aged ones. This implies that some of the contribution to the aging process derives from a change in the network geometry and some is due to a change in the material stiffness.
Our experimental protocol is similar to earlier work by Lakes (21,22), in which foam was made auxetic by heating it while under compression. In that work, the geometry of the structure was observed after the foam was returned to ambient temperature. The evolution of auxetic behavior was attributed to the creation of concave polygons, which are known to decrease the Poisson's ratio of a material (11,22,23). In contrast, on aging a bulk piece of our EVA foam under compression, the Poisson's ratio does not decrease but rather remains constant or increases slightly. We find that aging decreases the Poisson's ratio only when voids or patterns are cut out, suggesting that the scale at which the material changes is not at the microscopic scale but rather at the much larger scale of the network bonds. This suggests that our particular choice of foams is not essential and that other materials that undergo plastic deformation would yield similar behavior. We find a similar decrease in the Poisson's ratio upon aging networks made of 3D printed polyurethane. Our results of Fig. 1D also show that geometrical changes are not the only contribution to the decreased value of ; changes in bond stiffness also participate in creating auxetic behavior.
Results from simulations
To further explore how changes of bond stiffnesses under stress can affect the Poisson's ratio, we conduct simulations of a simple model of aging spring networks. We start by considering a network in which each spring, i, has spring constant k i and an unstretched length l i 0 . When compressed, the spring has length l i . Because of disorder, each spring will generally be compressed by a different amount. The energy of the resulting network is the sum of the energies of all the springs For simplicity, we have omitted the energy due to bending angles around a node (11). Now, consider two limiting cases. The first corresponds to the case in which aging is due completely to the evolution of the bond strengths k i under the imposed stresses. The second corresponds to aging arising completely from the evolution of the equilibrium distances between nodes, ℓ i 0 . This is reminiscent of the geometrical mechanism proposed for foams by Lakes (21).
Here, we focus on the first case, deferring the second to a future publication. We evolve k i so that it decreases in time according to the elastic energy stored in that bond The proportionality constant, 0 , and the average equilibrium length of the bonds, l i 0 ̄ , are material dependent and set the units of time and length. Bonds under greater stress evolve more rapidly. At late times, k i → 0; we therefore consider Eq. 2 as an approximation valid only at early times. We evolve the system at a prescribed training strain, ϵ T , and then measure the elastic response with respect to the zero-strain state, which remains the global energy minimum. We assume that k i does not evolve during the measurement process itself. Figure 1E shows the evolution of the Poisson's ratio calculated at |ϵ T | → 0, when the system is aged under compression. Evidently, decreases in time, consistent with the experimental data in Fig. 1C and eventually becomes negative. The aging evolves in a directed manner-under compression, the bulk modulus B decreases with respect to the shear modulus, G, leading to an auxetic material.
This model also predicts interesting nonlinear behavior. The nonlinear Poisson's ratio is defined by (ϵ) = −ϵ r /ϵ, where ϵ is the imposed strain along one axis and ϵ r is the resulting transverse strain after minimizing the energy with respect to the locations of all the nodes and the box shape. The inset of Fig. 1E shows (ϵ) for the unaged network and for a network that has been aged until / 0 = 10 3 . The original, unaged network depends only weakly on the measuring strain ϵ, even up to 10% strain. For aged networks, however, the nonlinear Poisson's ratio, (ϵ), depends on ϵ, the strain at which it is measured, as shown in Fig. 1E. The material can become auxetic even when it is not auxetic in linear response. In other words, compressing the system along one axis leads to transverse expansion at small strains and to transverse compression under larger strains. This nonlinear behavior is difficult to achieve by design but, here, occurs naturally.
Both the experiments and simulations demonstrate that aging under compression is directed toward a nontrivial elastic state with a lower Poisson's ratio. The elastic properties start to change as soon as aging commences under the applied stress and evolve to be substantially different from those of the freshly prepared material.
Directed aging under shear in laboratory experiments
Different aging protocols can evolve toward different limits. Here, we evolve a system under shear stress instead of compression. We return to the experimental networks described in Results from experiments, aging our networks under pure shear by compressing them in one direction and stretching them in the perpendicular direction. We measure the Poisson's ratio in two perpendicular directions: (i) compressing the network first along one axis and measuring the response along the second axis and (ii) exchanging which axis is compressed and which is measured.
Initially, the Poisson's ratio of the network measured by compressing the network along either direction gives ≈ 0.4. However, once it has been aged under shear, the same measurements give very different results. The Poisson's ratio drops to ≈ 0.2 when compressed in the same direction as the one along which it was aged. However, when compressed along the perpendicular direction, increases from ≈ 0.4 to ≈ 0.8 to 0.9. Figure 2 shows that the material encodes a memory of the direction in which it was aged. Similar anisotropic behavior has been observed in a cross-linked actin network under shear (24).
Generality of directed aging
In all of the examples described so far, aging was used to alter the elastic properties of a network. As a consequence, rearrangements, where a node changes the neighbors with which it is in contact, were not allowed. However, directed aging is not restricted to such situations. It also occurs in contexts in which rearrangements are of crucial importance.
For example, it has been shown that cyclic shearing of a suspension of non-Brownian particles can drive the system toward a state where, under subsequent shearing, each particle simply repeats its former motion (25)(26)(27)(28)(29). This can even be seen in glassy and jammed solids (30)(31)(32)(33)(34)(35)(36). Cyclic driving produces a memory of how the system was prepared. As the cyclic shearing is applied, the training cycles force rearrangements in the particle positions. In another work, lowering of elastic and viscous moduli of a suspension has been observed under shear (37).
These systems thus age in a directed fashion as well: They eventually learn how to navigate phase space to produce a periodic response. Each system finds this highly unusual dynamic property by a locally greedy algorithm of nature-the system simply minimizes its energy (or enthalpy) at each time step (in the case of the glassy or jammed solid), or it simply pushes other particles out of the way (in the case of the dilute suspensions). The way in which the material is trained directs the evolution into a nongeneric state.
In another example, it has been shown that an elastic network that restructures in response to periodic forcing can alter its density of states (38). In this system, bonds may break or form depending on the interparticle distances and forces. When subjected to periodic forcing, it develops a memory of the frequency, , at which it was trained. The periodic forcing thus trains the material to produce an excess density of states at the driving frequency.
The idea of directed aging can also be extended to create a localized response to a global distortion. We give one example here. Sheets with a square lattice of circular holes (39)(40)(41) have < 0 when compressed along the axis directions (39). When this "holey sheet" is compressed, alternating holes distort in perpendicular directions. This ordered collapse results in the transverse edges of the sheet moving closer together, making the sheet auxetic, as shown in Fig. 3.
To obtain a more varied response, we age the sheet in the following way. We plug the holes at the end of the middle two rows so that they retain their circular shape and stay in place as we compress the system along the direction perpendicular to these rows. These rows separate the sheet into two halves that do not communicate with each other. Approximately half of the time that these sheets are compressed, the pattern of holes in one-half is out of phase with those in the other so that the central region does not buckle inward. In these cases, compression along the perpendicular direction creates an overall scallop pattern along the side edges where the sheet juts out in the middle, as shown in Fig. 3.
CONCLUSIONS
We have demonstrated that aging, which is typically considered to be a detrimental process, can be harnessed to encode a desired elastic response in a material. This directed-aging process relies on incremental changes in stiffness and distortion of the microscopic structure brought about by plastic deformation. We have provided several examples in which directed aging achieves a broad range of responses. We have demonstrated that we can tune the nonlinear as well as the linear response simply by controlling the boundary conditions during the aging process.
The different elastic responses we have discussed have been trained with remarkable ease. Designing elastic properties numerically typically requires a detailed knowledge of the precise interactions, as well as intensive optimization of the parameters. Here, we showed that despite the complex structure of the networks and the nonlinear strains at which they were aged, all that is required is patience while the system memorizes the training conditions and ages toward the target state. This makes our approach well suited to designing materials with unconventional functionality and elastic properties (11)(12)(13)42).
Our emphasis in this paper has been to highlight the role of aging alone in creating flexible and unusual functions in materials. However, to realize a design strategy that implements directed aging, we could exploit the strong temperature dependence of the aging process. For example, we could substantially enhance the rate at which desired functionality is acquired by raising the temperature toward the glass transition temperature; lowering the temperature could then freeze in this functionality (21,43). In addition, we have shown that aging at a higher temperature can also slow, or eliminate, the return to the initial pre-aged state by increasing the irreversible plastic contribution to aging as shown in Fig. 4A for increased training times.
This approach provides an avenue to reach the holy grail of metamaterials-producing them on macroscopic scales. Scaling up metamaterials is usually difficult; if they are designed on a computer, increasing their size increases the required computer resources. Moreover, scaling down the bond size is perhaps even more difficult since it requires a precise control of the detailed structure on small length scales. Directing aging provides a way to alter properties of specific individual bonds by applying only macroscopic strains.
In conclusion, we emphasize that material properties, which are normally considered to be a function of the material composition and structure, can also depend strongly on the history of the imposed deformations. This raises the intriguing possibility that one could The abscissa is the strain at which Poisson's ratio is measured when the network is compressed along a given direction. The unaged (circles) networks do not show a substantial dependence on measurement strain along either axis. The aged networks (triangles) show a marked change in behavior when is measured by compressing along the pulling axis (blue) or along the axis that was compressed (red) during aging. use this dependence on aging protocol to read out the history of a material. Could we learn from the elasticity of a rock about geological flows that occurred over millions of years?
Designing networks
All the experimental systems were laser cut from a 0.5″-thick sheet of EVA foam. We used this foam to create a variety of networks for our experiments. To create the jammed packing, we started with a foam sheet and cut out a jammed configuration of discs obtained from a 2D computer simulation. The parts of discs that were overlapping with each other were left undisturbed, and this ensures that we have a fully connected sample. The network designs were derived from a jammed packing. As explained in a previous section, each particle of the jammed packing represents a node of the network, and overlapping pairs of particles are represented by a strut connecting the nodes. For the holey sheets, we again started with a jammed configuration and shrank all the particles uniformly so they do not overlap. We cut holes corresponding to these smaller particles out of a sheet of EVA foam, leaving a sheet with a disordered pattern of holes. The jammed discs and holey sheets explicitly retained the circular nature of a 2D granular packing and were thus closer to the gedanken sandpile experiment discussed in Results. To show that an underlying jammed configuration is not necessary to see this effect, we introduced a fourth system. Starting with a triangular lattice, we allowed the nodes to move by a small amount randomly and then removed bonds at random until the average coordination number of the network was 5.
Measurement procedure
To measure the Poisson's ratio of all these systems, we took pictures of the system as we compressed it uniaxially. In these images, we tracked the boundaries of the system. A straight line was fitted to each of the four boundaries, and these lines were used to measure strains in two transverse directions. These measured strains were used to calculate the Poisson's ratio. All Poisson's ratios were calculated at an input strain between 2 and 4%.
Characterization of foam
In this section, we characterized the effect of aging on the EVA foam we used. We determined whether the changes to the foam are permanent or whether the system relaxes to its initial, unaged, state. We measured the time scales on which any relaxation occurs.
To address this, we started by taking strips of foam and compressed them uniaxially by 33%. After letting them age under compression for a fixed amount of time, we removed any external stresses from the strips and measured the change in their length over time. As shown in Fig. 4A, these compressed strips slowly expand, and the rate of expansion depends strongly on the length of time for which they were aged. The strips that were aged the longest are the slowest to expand back. However, we found that even over longer times, the strips do not seem to return to their original length but plateau to a slightly shorter length. This indicates that the changes due to aging have both an irreversible plastic contribution and a viscoelastic relaxation component. In all our experiments, we measured our systems immediately after they have been aged. Measuring the Poisson's ratio of a single system takes a few minutes, which is much smaller than the relaxation time scales we have measured.
We also characterized the elastic properties of a bulk sheet of foam that has been aged under compression. Figure 4B shows the Poisson's ratio of a square sheet of foam as a function of aging strain. A solid sheet of foam behaves very differently from all the other systems we have looked at. This shows that the auxetic behavior arising due to directed aging is not a property of the foam but rather is due to the macroscopic structure that is cut out of the foam sheet.
SUPPLEMENTARY MATERIALS
Supplementary material for this article is available at http://advances.sciencemag.org/cgi/ content/full/5/12/eaax4215/DC1 Evolution of the bulk and shear modulus as a function of time Fig. S1. The evolution of the bulk and shear modulus as a function of time in simulations. | 6,029.8 | 2019-03-14T00:00:00.000 | [
"Materials Science",
"Physics"
] |
Superposed fracture networks
Abstract The concept of superposed fracture networks consisting of different generations, and often types, of fractures that have developed sequentially is discussed. Superposed networks can consist of different types of extension or shear fractures, and each fracture may abut, cross or follow (reactivate) earlier fractures. An example of a superposed fracture network in Liassic limestones in Somerset, UK, is presented, which comprises two sets of veins and a later joint network. The veins develop as damage zones around faults, with veins of the later set crossing or trailing along the earlier set. The later joints either cross-cut the earlier veins or reactivate them, the latter being common for the thicker (more than about 5 mm) veins. The veins and joint networks have markedly different geometries and topologies. The veins are spatially clustered and are typically dominated by I-nodes, while the joints are more evenly distributed and tend to be dominated by Y-nodes. The combined network of veins and joints at Lilstock is dominated by X-nodes because so many joints cross-cut the earlier veins. Understanding the development of superposed fracture networks leads to better understanding of the kinematic, mechanical, tectonic and fluid flow history of rocks.
Introduction
Many networks have been shown to be the result of several generations (and types) of fractures (e.g., Hancock, 1985;Hanks et al., 2004;Nortje et al., 2011).We term these superposed fracture networks.The characterization and description of such networks require careful identification of different types and generations of fractures.Measurement and statistical analysis can and should be designed to recognize these differences and avoid confusing and grouping of disparate structures.In this paper, we discuss how fracture types and generations can be incorporated into the geometrical and topological analysis of a network, as opposed to the analysis of individual fractures.
A superposed fracture network is defined here as a system that consists of more than one generation of intersecting fractures.The component fractures could be the same type, such as one set of veins that abuts, crosses or reactivates an earlier set of veins (Fig. 1a).A superposed fracture network may also include different types of fractures, such as a fault with a network of veins in a damage zone (one generation of fractures) superposed by a network of later joints (Fig. 1b).The fractures forming a component part of a superposed fracture network may either be a simple set of fractures or a network of several fracture sets.The key feature of a superposed fracture network is that it consists of different generations of fractures, as indicated by abutting, crossing and reactivation relationships.We use the terms superposed deformation (e.g., Lindström, 1961;Treagus, 1995) and superposed folds (e.g., Weiss, 1959) as precedents for using the term superposed to describe fracture networks.Various authors have used the term superposed fractures (e.g., Nekrasov, 1975;Lewis et al., 2023), superposed veins (e.g., Reinhardt & Davison, 1990;d'Ars & Davy, 1991) and superposed faults (e.g., Stanley, 1974;Gorodnitskiy et al., 2009).The term superimposed has also been used to describe different generations of brittle structures (e.g., Gonzalo-Guerra et al., 2023).
The aims of the paper are (1) to define and illustrate the term superposed fracture networks and (2) to show the key geometric observations necessary to understand the development of superposed fracture networks and to interpret routinely measured parameters.Such an approach is also a necessary prerequisite to subsequent tectonic, kinematic and mechanical interpretation of the fractures and to understanding how the networks may contribute to the physical and engineering properties of the rock mass.For example, connectivity of a network (e.g., Manzocchi 2002;Sanderson and Nixon, 2018) is fundamental to both the flow of fluids (Lee et al., 1993;Berkowitz et al., 2000) and the strength of the rock mass (e.g., Dershowitz, 1984;Odling, 1997).
Analysis is presented of a network of veins and joints on a Liassic limestone bedding plane at Lilstock, Somerset, UK (51°12 0 08.9 0 0 N 3°10 0 06.0 0 0 W; Fig. 2).This location was chosen because of the high quality of the exposure and because it shows key features that enable relative ages of the network components to be determined (Peacock and Sanderson, 1999;Peacock, 2001).The superposed fracture networks have been mapped using drone images and GIS to record fracture type, geometry (orientation and length) and topology.This analysis is augmented with key field observations, especially at fracture intersections, to establish the sequence of fracture development.This enables the evolution of superposed fracture network to be determined.
Note that here we use the noun fracture as a general term for planar brittle structures, including faults, veins and joints.More specific terms are used as appropriate.Considering brittle deformation of rock in terms of superposed fracture networks is important because the emphasis on determining the age relationships, based on geometric and topological relationships, leads to improved understanding of the development of the structures.Also note that the use of 'younging tables' to record relative ages of structures has been suggested (e.g., Potts & Reddy, 1999, 2000), but we do not use that approach in this paper.
Geological setting of the Lilstock study area
The study location is on the coast between Lilstock and Hinkley Point, Somerset, UK, on the south side of the Bristol Channel Basin (e.g., Van Hoorn, 1987;Fig. 2).The area underwent Mesozoic N-S extension and Cenozoic N-S contraction (e.g., Dart et al., 1995, Glen et al., 2005).Three main groups of structures can be identified in the Liassic rocks exposed on the Somerset coast (e.g., Dart et al., 1995): (1) ~095°-striking normal faults and associated calcite veins, with some of these showing evidence of both sinistral and dextral reactivation at Lilstock (Peacock & Sanderson, 1999;Rotevatn & Peacock, 2018); (2) E-W-striking thrusts, strike-slip faults that are conjugate about ~N-S and reverse-reactivation of the largest 095°striking normal faults, all with associated calcite veins; and (3) joints (e.g., Peacock, 2001).
The exposure consists of a bedding plane near the base of the Liassic (Lower Jurassic) sequence of interbedded limestone and shales, approximately 0.3 m thick and dipping a few degrees to the north (Fig. 3).095°-striking normal faults and sinistral strike-slip faults occur at the location.The vein network consists of two distinct sets formed at different times under different stress orientations.Stylolites occur between some of the en echelon veins, with shear along some stylolites causing the veins to develop as pull-aparts (Willemse et al., 1997).It is more difficult to divide the joint network into distinct sets based on their orientation, and they probably formed in an evolving stress during exhumation (Rawnsley et al., 1998).This location was used by Ryan et al. (2000) to demonstrate a method for the measurement and display of fracture spacing and orientation from maps of fracture networks.A map of veins at this location was also used by Belayneh et al. (2006) to demonstrate how a percolation approach can be used to predict vein connectivity.Willemse et al. (1997) and Sanderson and Peacock (2019) use examples of veins within approximately 300 m of the exposure to demonstrate aspects of vein development.Exposures on the coast, approximately 2.1 km west of the location, have been used to analyse joint patterns (e.g., Passchier et al., 2021).
We use this location to: (1) recognize different fracture types, (2) determine the geometries and topologies of the different fracture types, (3) interpret the age relationships and development of the superposed fracture networks and (4) determine the effects of pre-existing fractures on the development of later fractures.Note that here the emphasis is on the veins and joints rather than faults and stylolites.
3.a. Data collection
A drone was flown at a height of ~3 m above the exposure to collect 577 vertical photographs, each photograph being 4864 × 3648 pixels.They cover an area ~96 m E-W and 20.6 m N-S.The flight was planned using the DroneDeploy application, with the images having ~70% overlap to allow photogrammetry.Agisoft Metashape was used to create an orthomosaic, with each pixel being ~2 mm × 2 mm, and a digital elevation model (DEM) with pixel sizes of ~4 mm × 4 mm.
3.b. Mapping
The orthomosaic and DEM were analysed using QGIS (version 3.26.0).The traces of fractures were digitized at scales of up to 1:2, with the 'enable snapping' and 'enable snapping on intersection' functions switched on.The following four types of fracture trace were digitized: • Faults: identified by lateral displacement of pre-existing veins seen on the orthomosaic and by height differences on the bedding plane highlighted by using hill-shading of the DEM.• Veins: appear as white or light brown lines on the orthomosaic.• Joints: appear as black or dark lines on the orthomosaic.
• Joints along veins: fracture traces that consist of both veins and joints (i.e., joints following and reactivating veins).
These classes enable analysis of all of the fractures that are veins (by combining veins and joints along veins) and of all of the fractures that are joints (by combining joints and joints along veins).The resolution of the imagery means that we only consider veins or joints with lengths greater than ~4 mm and veins with apertures of greater than ~2 mm.
3.c. Problems and ambiguities
Various problems and ambiguities were encountered when digitizing the fractures.While it was generally simple to identify faults with lateral offsets of a few millimetres or vertical offsets of a few centimetres, it was more difficult to identify smaller displacements.It was also difficult to accurately identify the tips of faults, where displacements decrease to sub-resolution scales, especially where such faults pass laterally into veins apparently with only opening-mode displacements.The faults are mineralized and joints tend to follow portions of those faults, but they have been mapped simply as faults.
Some of the veins in the area have apertures of up to about 70 mm.The area also shows veins with centimetre-scale spacings, metre-scale lengths and sub-millimetre-scale apertures, these being formed by a process termed crack-jump by Caputo and Hancock (1998).It is therefore difficult to see the smaller veins at the resolution of the orthomosaic.See Snow (1970), Marrett (1996) and Forstner and Laubach (2022) for descriptions of fracture apertures in rock.As with the faults, it is particularly difficult to see the low-displacement parts of the veins, making it hard to identify vein tips and therefore to determine the connectivity of the veins.Veins are typically segmented, with many veins being composites of linked segments (e.g., Vermilye & Scholz, 1995).It can be difficult to decide whether to digitize two stepping veins or a single composite vein, which may influence the numbers of traces mapped but has little effect on the sum of their lengths (Table 1).Another ambiguity is that some later veins intersect, follow and re-emerge from earlier veins.Such trailing veins were digitized as being two veins intersecting the older vein rather than as a single vein.We consider this preferable to trying to identify and digitize each case of trailing.
The imagery has a pixel size of ~2 mm × 2 mm.The traces of all of the fracture types identified can form anastomosing patterns, sometimes making it difficult to determine the start and end points of each anastomosing fracture, which may influence such parameters as the length distributions of the fracture networks.These side-stepping and anastomosing patterns are mostly near or below the resolution of the imagery and are not considered in construction of the larger scale (>>4 mm) networks represented and analysed in this paper.We only consider total trace lengths for the different fracture types (Table 1) rather than their scaling relationships.
The veins have widths of up to several millimetres, and many of the joints also have mm-scale apertures, probably partly because of weathering.The intersection points (nodes) are therefore really areas rather than points, but those intersection areas are small relative to the scale of the mapping.
While these various issues created some problems with digitizing and interpretation, the ambiguous traces comprise a small percentage of the total fracture population.We consider them to not influence the main observations or results presented in this paper.
3.d. Ground-truthing
The mapping using the orthomosaic and DEM was undertaken with the benefit of numerous previous visits to the location.It was necessary, however, to ground-truth the results, check the relationships between different types and sets of fractures, and take higher-resolution photographs of key features (i.e., from nearer to the exposure than the ~3 m height flown by the drone).
Relationships between superposed fractures
We identify four common types of relationships between pairs or sets of fractures of different ages, with the relationships giving information about the relative ages of the fractures (Peacock et al., 2018): • Cross-cutting: where later fractures cross and displace earlier fractures.A fault that crosses and displaces another fault will be the younger of the two (e.g., Chen, 2013).The relative ages of crossing veins can commonly be determined by the displacement patterns (Fig. 4a) or by the pattern of mineral infills (e.g., Craw et al., 2010).The lack of measurable displacements or mineral fill mean that it is difficult to determine the relative ages of crossing joints.
• Abutting: where a fracture meets another fracture at an intersection line or point.A later joint commonly abuts an earlier joint (Fig. 4b; e.g., Rives et al., 1994).Note, however, that an earlier fault can be displaced by, and therefore abut, a later fault (e.g., Nixon et al., 2014).Also note that abutting relationships can be caused by the splaying of one fracture off another, with the two fractures being synchronous (e.g., Biddle & Christie-Blick, 1985).• Trailing: where two new fractures are connected through an older fracture, with renewed displacement occurring on the older fracture (Fig. 4c, d).Trailing faults are illustrated by Nixon et al. (2014), Phillips et al. (2018) and by Deng and McClay (2021), and trailing veins are illustrated by Virgo et al. (2013, Fig. 12c).• Reactivating: the term reactivation is typically used for renewed displacement on a fault that has undergone a prolonged period of inactivity (e.g., Shephard-Thorn et al., 1972;Sibson, 1985).Here, however, we generalize the term for other fracture types, such as where a joint follows and causes renewed opening along an earlier vein (e.g., Fig. 4e).Such reactivation of fractures has been described for veins (e.g., Ramsay, 1980;Zulauf, 1993;Evans, 1994), dykes (e.g., Drobe et al., 2013) and faulted joints (e.g., Wilkins et al., 2001).
These relationships provide evidence for the relative ages and therefore the superposition of fracture networks.It may also be possible to use non-geometric data to determine superposition, such as mineral paragenesis and radiometric dating of different fracture cements (e.g., Guastoni et al., 2014).
5.a. Vein networks
The calcite veins show the following characteristics: • Orientations.The mapped veins all dip at ~90°to bedding, which has a gentle dip to the north, with vein strike data shown in Fig. 5a, b.These strike data indicate two sets of veins, one set striking ~085°to 115°(Set A, ~18% of data) and another a dominant set striking ~145°to 185°(Set B, ~75.5% of data).The sets have a strong and well-defined preferred orientation, with most of the data (~93.5% of data) within these narrow orientation ranges (Fig. 5a).These two sets have been divided in QGIS using cut-offs of 045-125°(Set A) and 125-225°(Set B), with maps shown in Fig. 6a, b.Sets A and B are therefore defined on the basis of fracture type (calcite veins) and orientation.• Apertures.The calcite veins of both sets typically have apertures of up to ~10 mm, although some of the veins in the area have apertures of up to ~70 mm.When observed with a hand lens, the calcite appears to be sparry with no visible evidence for crack-seal.Wider veins occur, although joints and weathering along these wider veins tend to make aperture measurements ambiguous.Both sets show veins with apertures that are below the ~2 mm × 2 mm pixel size of the imagery. in the mapped area (Fig. 6b) but appears to be concentrated along-strike from faults of similar trend (Fig. 3).suggests that it may be possible to further divide veins of this orientation on the basis of orientation and relative ages.
5.b. Joint networks
The joints show the following characteristics: • Orientations.The joints dip at ~90°to bedding and their strikes are shown in Fig. 5c, d.Two orientations of joint appear to dominate, these being ~N-S and ~E-W.The joints that follow veins have, however, the same orientations as vein sets A and B. The sets have a much less clearly defined preferred orientation than the veins, with only ~80% of the data within broad orientation ranges that occupy ~70% of the total range.Many of the joints curve, creating problems for subdividing the joints into sets based purely on their orientation (e.g., Engelder & Delteil, 2004).For simplicity, we consider the entire joint network to be simply one set, based only on fracture type.• Apertures.The joints generally show sub-millimetre apertures.Some wider joints occur, and this probably is caused by weathering and erosion.• Trace lengths.The maximum trace length measured for the joints is at least 8.98 m, with this longest joint extending to the edge of the mapping area.The mean length is ~0.53 m (n = 1881).The joints show a mean trace length per unit area of ~4.4 m −1 .• Geometric indicators of kinematics.Joints tend to show opening-mode displacement (e.g., Pollard & Aydin, 1988), but curvature along many of the joints may suggest a component of shear along portions of such joints.• Distributions.The joints appear to be fairly evenly distributed across the mapped area, with some appearing to curve into fault zones (Fig. 6c).Such behaviours for joints in the Liassic rocks of the Bristol Channel Basin are described by Rawnsley et al. (1992Rawnsley et al. ( , 1998) ) and Bourne and Willemse (2001).The veins appear to be clustered around faults, so the joints that follow veins are also spatially related to faults.• Relationships with veins.The joints either cross or follow both sets of veins.The joints that follow the veins do not seem to extend to and beyond the tips of those veins, suggesting that vein aperture is important in controlling whether or not a vein will be reactivated as a joint.• Relationships between joints.Pairs of joints in the Liassic rocks of the Somerset coast typically show abutting relationships (e.g., Rawnsley et al., 1998;Peacock et al., 2018).Some crossing relationships occur where one or both joints follow veins.• Relative ages.The joints cross-cut or follow vein sets A and B, so post-date the veins.Abutting relationships between joints would enable relative ages of different joints (or joint sets) to be determined (e.g., Peacock et al., 2018).Hancock and Engelder (1989) suggest that many joints in northwestern Europe were created by exhumation in a regional stress field in which maximum horizontal compressive stress was orientated ~NW-SE.
5.c. Veins and joints combined
Orientation data for the combined populations of veins and joints (Fig. 5e, f) show intermediate behaviour between the orientations of the veins and of the joints independently.Approximately 57.7% of the fracture lengths (veins and joints combined) fall in the strike range of 145°-185°, while approximately 28% of the fracture lengths fall in the strike range of 060°-105°.
Topologies of superposed fracture networks at Lilstock
Any network in two dimensions, such as fracture traces, can be represented by a system of nodes and branches (Sanderson & Nixon, 2015).The branches represent the fracture traces and the nodes record information about the types and distributions of intersections between the fractures.Topology emphasizes the relationships between two or more individual structures, such as crossing and abutting relationships of fractures (e.g., Sanderson & Nixon, 2015;Peacock et al., 2017Peacock et al., , 2018)).Network topology is useful for characterizing many aspects of fracture networks (e.g., Sanderson & Nixon, 2015;Duffy et al., 2015;Procter & Sanderson 2018), including establishing the relative age of different structures.
It is also useful for understanding the connectivity of fractures within a network (e.g., Berkowitz et al., 2000;Manzocchi, 2002;Sanderson & Nixon, 2018).Here, we use the node types shown by the different components of the fracture network at Lilstock to show how these differentiate different forms of superposition.
6.a. Vein network
The vein network consists of two sets.Veins in Set A (the older set) are commonly isolated or show en echelon relationships.Set A is dominated by I-nodes (Table 2a, Figs.6a, 7), which form 92.3% of the nodes.They also have a strong spatial clustering around the faulted margin of the exposed bedding plane.Veins in Set B (the younger set of veins) appear to form swarms, with en echelon patterns being less common (Fig. 6b).The Vein B network is still dominated by I-nodes (77.4%), but with significantly more Y-and X-nodes (Table 2).X-nodes are created by cross-cutting relationships between veins in Set B (~NNW-SSE striking veins appear to cross-cut ~N-S striking veins).
Set B is superposed on Set A creating a higher proportion of Xnodes because the two sets cross (Table 2a, all veins).The two sets of veins combined still shows a majority of I-nodes (52.4%), with 31.5% of the nodes being X-nodes (Table 2a, Fig. 7) at which Set B is seen to cut Set A.
Both Set A and Set B veins develop in damage zones related to two different generations of faults, and this spatial clustering results in limited connectivity across the exposure, with a high proportions of I-nodes.Where superposition occurs, Set A veins are generally overprinted by Set B, producing almost four times as many X-nodes as Y-nodes.
6.b. Joint networks
The joint network is dominated by Y-nodes, these forming 84.6% of the nodes and 94% of the connected nodes (Table 2a, Fig. 7).Prabhakaran et al. (2021) report that Y-nodes form 70 to 80% of the nodes in Liassic limestones ~2.3 km to the east at Lilstock.Inodes are rare (10.3%;Table 2a), with the joints being highly connected in the network.Six V-nodes were identified, but these are not included in the analysis.
A key feature that distinguished the topology of the joint network is the dominance of Y-nodes, indicating that joints nucleate and/or terminate against one another, which is often termed abutment.This strong interaction contrasts with the crosscutting and overprinting seen within the vein network.We will examine the interactions between the vein and joint network in the next section.
6.c. Combined network of veins and joints
Combining data for all of the veins and the joints produce a superposed network, with orientations intermediate between the veins and the joints (Fig. 5).At the resolution mapped, the total superposed network has a fracture intensity of just over 10 m −1 , with the joints forming 42.4% of this (Table 1).The topology of the combined network (Fractures in Table 2a) is different from either the veins or the joints and cannot be predicted simply from the weighted average of the two networks.The superposed network contains a significantly higher proportion of X-nodes (42.4%) (Table 2a, Fig. 7).We can use the node counts to test hypotheses about the character of the interaction between the two networks.
The data for the connected nodes (Y-and X-nodes) in the vein and joint networks are extracted from Table 2a and combined with data on the number of connected nodes for Set A and Set B intersections and for those between the joints and veins (Table 2b).The proportions of X-and Y-nodes vary between the different types of intersections.Y-nodes dominate (94.5%) for joint:joint intersections, whereas X-nodes dominate (82.1%) for joint:vein intersections.The vein:vein intersections are also dominated by Xnodes but to a somewhat lesser extent (66.1% for all intersections and 78.8% for Set A:B intersections).Table 2a is a simple contingency table with almost zero probability of a random distribution of node types.These data strongly support the idea that vein Set A is overprinted by Set B, but interaction of the joints and veins is more complex.The joints both cross-cut the veins (high % of X-nodes) but also run along (and reactivate) the veins, suggesting both overprinting and utilization of the pre-existing network.The joints dominantly abut other joints, either initiating or terminating at pre-existing joints producing mainly Y-nodes.
Discussion
A schematic model for the development of the vein and joint network at Lilstock is shown in Fig. 8. Here, we discuss aspects of the analysis and development of this and other superposed fracture networks.
7.a. Evolution
The analysis presented here enables the following evolution of the fracture network to be determined.Note that connectivity refers to the degree to which fractures are connected within a network, which depends on the size, frequency, orientation, spatial correlation, scaling and topology (Berkowitz et al., 2000;Manzocchi, 2002).Three main stages can be identified (Figs. 6, 8): • Stage 1: development of vein Set A, in the damage zones of Ẽ-W-striking normal or oblique-slip faults (Fig. 6a).At this stage, the veins are mostly localized adjacent to the faults that bound the area, with little connectivity across the exposed bedding plane.
• Stage 2: development of vein Set B, in the damage zones of ÑW-SE-striking strike-slip faults (Fig. 6b).Some of veins in Set B trail through veins in Set A, while others cross to form X-nodes.At this stage, there was limited connectivity between the Set B veins, but the total vein network is weakly connected across the exposed area.
• Stage 3: development of the joint network (Fig. 6c).The joints reactivate (follow), trail through or cross-cut the earlier veins.
The joint network itself is well connected, mainly through Y-nodes, with few I-nodes.The joint network overprints and reactivates the vein network to produce a superposed veinjoint network that is well-connected network.
The sequential evolution of network connectivity has been documented by Park et al. (2010).Determining this evolution requires both identification of the types of fracture involved and examination of their relationships (reactivation, trailing, crossing, etc.).
7.b. Overprinting and reactivation
The superposition of two or more fracture networks can occur in different ways.The N-S veins largely overprint the E-W veins, producing cross-cutting intersections (high proportion of X-nodes in Table 2), with a limited amount of reactivation, as indicated by occasional trailing.The joint network shows both overprinting, abutting (Y-nodes) and much re-utilization of the earlier formed veins (joints along veins), with abundant termination and trailing of joints at veins and particularly at earlier formed joints.
Fracture reactivation is a common form of superposition.Faults are commonly reactivated with a different sense of displacement, with examples given in Table 3.This reactivation can be a component of fracture network superposition.For example, Kelly et al. (1999) show that reverse-reactivation of E-W-striking normal faults in the Liassic rocks at East Quantoxhead (~5 km WSW of the study area at Lilstock) was accompanied by the development of a network of conjugate strike-slip faults.Fracture network superposition can also involve fractures being reactivated as other types of fractures, such as an extension fracture (e.g., a vein or a joint) being reactivated as a contractional structure (e.g., a stylolite).Examples of such reactivation are given in Table 4.
7.c. Implications for analysing and understanding fracture networks
Just as understanding superposed folding helps unravel the deformation history of a region (e.g., Ray, 1974;Ramsay & Huber, 1987), understanding superposition in fracture networks helps determine the evolution of that network.Rather than analysing the final fracture network as a single entity (e.g., Zhu et al., 2022), it is necessary to distinguish the different fracture types present and determine the sequence of development of the components of the network (e.g., Katternhorn et al., 2000;Gillespie et al., 2001), if the geometric and topological development of the network is to be understood.This involves using geometric and topological characteristics to define different classes or ages of fracture that are appropriate for the study (e.g., Peacock & Sanderson, 2018;Andrews et al., 2020).Such an approach helps deduce how fractures have been controlled by the interplay between palaeostress fields and earlier structures and would lead to better understanding of the kinematic, tectonic and fluid flow history.Simply adding all fractures together in a network would be analogous to not distinguishing between paths, roads, canals, railways and aeroplane flight paths, and lumping them all together to analyse a transport network.
7.d. Other network components and examples
While we have focused on a fracture network created by the superposition of two sets of veins and a joint network, the analysis can be expanded to include other structures in the network.For example, some of the stepping veins of Set B are linked by stylolites or slickolites, with some forming pull-aparts (e.g., Willemse et al., 1997).At a larger scale, the two vein sets appear to be damage related to a network of superposed faults (Fig. 3).Superposed fault networks can show abutting (e.g., Nixon et al., 2014, Fig. 11), crossing (e.g., Dart et al., 1995;Gonzalo-Guerra et al., 2023), reactivating (Table 3) or trailing (Nixon et al., 2014, Fig. 15) relationships.
Conclusions
Superposed fracture networks result from the successive development of different ages (and commonly different types) of fractures.Successive sets of fractures may either overprint (cross-cut), follow (reactivate) or arrest at (abut) the earlier ones.In the example of veins and joints on a Liassic limestone bedding plane at Lilstock, UK, the network comprises (1) early formed E-W veins, (2) later N-S veins and (3) a later system of joints.The later components of a superposed fracture network can both overprint and re-utilize (reactivate) earlier fractures.For example, the N-S veins at Lilstock cross-cut and trail into E-W veins, and the joints abut, cross-cut and reactivate both the vein sets.
The different components of a superposed fracture network can have different topologies.The first set of veins at Lilstock is dominated by I-nodes, with linkage of the straight, sub-parallel veins being limited.The second set of veins is still dominated by I-nodes, but locally cross-cut Set A, producing more X-nodes.The joints cross-cut both sets of veins, producing X-nodes.This indicates overprinting of the vein network by joints, but that some utilizing earlier veins as they develop.Joint:joint intersections are dominantly Y-nodes, indicating strong mechanical interaction during joint development.
interpreting a superposed fracture network, it is important to separate out the components, based on both the type of fracture and their age relationships.Although we have focused on veins and joints, this type of analysis is applicable to other types of superposed fracture networks, including faults.
Figure 1 .
Figure 1.(Colour online) Examples of different types of superposed fracture networks.(a) Two sets of calcite veins of different ages.Liassic limestone at Lilstock.View vertically downwards.(b) Normal fault zone with a network of calcite veins in a damage zone (one generation of fractures) superposed by a network of later joints.Liassic limestone at East Quantoxhead (51°11 0 27 0 0 N, 3°14 0 15 0 0 W).View downwards at approximately 45°to the NW.
Figure 2 .
Figure 2. (Colour online) Geological map of western Somerset, showing the location of the study site at Lilstock.The geology is from the British Geological Survey 1:625,000 scale map of the UK.Reproduced with the permission of the British Geological Survey ©NERC.All rights Reserved.
Figure 3 .
Figure 3. (Colour online) (a) Overview map of the area, based on an orthomosaic (pixel size ~7 mm × 7 mm) made using photographs taken using a drone flown ~20 m above the surface.Faults have been mapped from the orthomosaic, with ~E-W-striking faults probably having normal displacements, although some strike-slip is likely (Rotevatn & Peacock, 2018).The ~NW-SE-striking faults generally show dextral displacements of up to ~1 m.(b) Larger-scale view of the mapping area, with structures mapped from an orthomosaic (pixel size ~2 mm × 2 mm).
•
Trace lengths.The maximum trace length measured for Set A is at least 3.844 m, with this longest vein extending to the edge of the mapping area.The maximum trace length measured for Set B is at least 6.32 m, with this longest vein extending to the edge of the mapping area.The shortest measured vein of Set A is at the limits of the drone imagery, and the mean length is ~198 mm (n = 1412).The shortest measured vein of Set B is at the limits of the drone imagery, and the mean length is ~259 mm (n = 2845).Fracture trace length data are summarized in Table1.Note, however, that caution is needed with these length measurements, which are likely to be underestimates of true values (see Section 3.c).Set A veins show mean trace lengths per unit area of ~1.2 m −1 , and Set B shows mean trace lengths per unit area of ~4.7 m −1 (mapped area = 227.358m 2 ).• Geometric indicators of kinematics.Veins in Set A commonly show left-stepping, en echelon relationships, indicating a component of ~E-W dextral shear.En echelon patterns are less obvious in Set B, although some appear to show shear fractures and pull-aparts (Willemse et al., 1997; Sanderson & Peacock, 2019) indicating ~NNW-SSE sinistral shear.• Distributions.Both vein sets appear to show spatial relationships to faults.Set A are clustered around the ~E-W-striking faults, with most of the veins of this set occurring in the north of the mapped area (Fig. 6a).Set B is more widely distributed
Figure 4 .
Figure 4. (Colour online) Examples from Liassic limestones at Lilstock of different types of relationships between fractures that give information about their relative ages.All views are approximately vertical downwards.(a) Earlier veins are connected by slickolites to form pull-aparts, with a later vein crossing a slickolite.(b) Abutting joints, with the abutting relationships giving the relative ages of the joints.(c) Trailing calcite veins.(d) Example of joints trailing through a calcite vein.(e) Later joints following and reactivating earlier calcite veins.
Figure 5 .
Figure 5. (Colour online) Orientation data for the fractures at Lilstock.(a) Rose diagram, weighted to length and area proportional, for the veins (n = 4763).(b) Graph of vein strike vs percentage cumulative branch length for the veins.The straight dashed line, from (0,0) to (180,100), represents a uniform orientation distribution, with deviation of the data from this line providing a useful and unbiased indication of the departure from uniformity (Sanderson & Peacock, 2020).Maximum deviation (Dþ) = 0.05; minimum deviation (D-) = -55.09,V = 55.14.The sum V = |Dþ| þ |D-| is independent of the choice of origin, with V = 0 representing a perfectly uniform distribution, and V = 1 representing a parallel alignment of lines (Sanderson & Peacock, 2020).The data indicate a dominant strike of veins at ~145°to 185°(Set B), with a secondary strike of ~085°to 115°(Set A).(c) Rose diagram for the joints (n = 5064).(d) Graph of vein strike vs percentage cumulative frequency for the veins.Dþ = 9.1, D-= -9.5, V = 18.6, V* = 13.26.The data indicate a wider range of strikes than shown by the veins, with a dominant orientation of ~070°to 110°.(e) Rose diagram for the veins and the joints (n = 9827).(d) Graph of vein and joint strike vs percentage cumulative frequency for the veins.Dþ = 2.34, D-= -34.11,V = 36.45,V* = 44.22.The data show intermediate behaviour between the vein and the joint data.
Figure 6 .
Figure 6.(Colour online) Maps of the different fracture sets at Lilstock.(a) Vein set A strikes approximately E-W and is clustered around a fault zone with an approximate E-W strike.(b) Vein set B strikes approximately N-S to NW-SE and are clustered around faults that strike approximately NNW-SSE.Veins of set B crosscut or trail through veins of set A. (c) A network of joints is superposed on the pre-existing veins.Some joints cross-cut the veins, while other follow (reactivate) the veins.
Figure 7 .
Figure 7. (Colour online) Ternary plot of I-, Y-and X-nodes for the veins, the joints and both the veins and joints combined.The vein network is dominated by I-nodes, with more X-nodes than Y-nodes.The joint network is dominated by Y-nodes.The veins and joints combined are dominated by Y-and X-nodes.
Figure 8 .
Figure 8. (Colour online) Schematic model for the superposition of the fracture network at Lilstock.Vein set A is clustered around a fault, with these veins typically being en echelon and forming I-nodes.Vein set B crosses vein set B to form X-nodes, although some trail through veins of Set A. The joints form a later network that cut across or follow veins of sets A and B. Later joints typically abut earlier joints.
Table 1 .
Data for fracture trace lengths for the superposed fracture network at Lilstock.Mapped area = 227.358m 2 .Intensity is mean length per unit area.Note that the values for 'Joints' includes the values for 'Joints following Set A veins' and 'Joints following Set B veins'.6.7% of the length of joints follows Set A and 12.6% of the length of joints follows Set B. 23.8% of the length of Set A is followed by later joints, while the 11.7% of the length of Set B is followed by later joints
Table 2 .
Node types for the superposed fracture network at Lilstock.(a) Numbers (and percentages) of node types for the components.% C = the percentages of connected nodes (i.e., percentage of V-, Y-and X-nodes).(b) Numbers (and percentages) of connected node types at interactions between different components
Table 3 .
Examples of different types of fault reactivation
Table 4 .
Examples of different types of fracture reactivation | 8,495.2 | 2023-09-01T00:00:00.000 | [
"Geology"
] |
Dijet production in $\sqrt{s}=7$ TeV $pp$ collisions with large rapidity gaps at the ATLAS experiment
A $6.8 \ {\rm nb^{-1}}$ sample of $pp$ collision data collected under low-luminosity conditions at $\sqrt{s} = 7$ TeV by the ATLAS detector at the Large Hadron Collider is used to study diffractive dijet production. Events containing at least two jets with $p_\mathrm{T}>20$ GeV are selected and analysed in terms of variables which discriminate between diffractive and non-diffractive processes. Cross sections are measured differentially in $\Delta\eta^F$, the size of the observable forward region of pseudorapidity which is devoid of hadronic activity, and in an estimator, $\tilde{\xi}$, of the fractional momentum loss of the proton assuming single diffractive dissociation ($pp \rightarrow pX$). Model comparisons indicate a dominant non-diffractive contribution up to moderately large $\Delta\eta^F$ and small $\tilde{\xi}$, with a diffractive contribution which is significant at the highest $\Delta\eta^F$ and the lowest $\tilde{\xi}$. The rapidity-gap survival probability is estimated from comparisons of the data in this latter region with predictions based on diffractive parton distribution functions.
Introduction
Diffractive dissociation (e.g.pp → pX) contributes a large fraction of the total inelastic cross section [1] at the Large Hadron Collider (LHC).The inclusive process has been studied using the earliest LHC data in samples of events in which a large gap is identified in the rapidity distribution of final-state hadrons [2,3].In the absence of hard scales, the understanding of these data is based on phenomenological methods rather than the established theory of the strong interaction, quantum chromodynamics (QCD).
A subset of diffractive dissociation events in which hadronic jets are produced as components of the dissociation system, X, was first observed at the SPS [4], a phenomenon which has since been studied extensively at HERA [5,6] and the Tevatron [7].The jet transverse momentum provides a natural hard scale for perturbative QCD calculations, making the process sensitive to the underlying parton dynamics of diffraction and colour-singlet exchange.A model [8] in which the hard scattering is factorised from a colourless component of the proton with its own partonic content (diffractive parton distribution functions, DPDFs), corresponding to the older concept of a pomeron [9], has been successful in describing diffractive deep inelastic scattering (ep → eX p) at HERA [10].The DPDFs have been extracted from fits to HERA data in the framework of next-to-leading-order QCD, revealing a highly gluon-dominated structure [11,12].
The success of the factorisable approach breaks down when DPDFs from ep scattering are applied to hard diffractive cross sections in photoproduction [13,14] or at hadron colliders.Tevatron data [7] show a suppression of the measured cross section by a factor of typically 10 relative to predictions.A similar 'rapidity-gap survival probability' factor, usually denoted by S 2 , was suggested by the first results from the LHC [15].This factorisation breaking is usually attributed to secondary scattering from beam remnants, also referred to as absorptive corrections, and closely related to the multiple-scattering effects which are a primary focus of underlying-event studies [16][17][18].Understanding these effects more deeply is an important step towards a complete model of diffractive processes at hadronic colliders and may point the way towards a reconciliation of the currently very different theoretical treatments of soft and hard strong interactions.
In this paper, the ATLAS technique for finding large rapidity gaps, first introduced in Ref. [2], is developed further and applied to events in which a pair of high transverse momentum (p T ) jets is identified.The resulting cross sections are measured as a function of the size of the rapidity gap and of an estimator of the fractional energy loss of the intact proton.The results are interpreted through comparisons with Monte Carlo models which incorporate DPDF-based predictions with no modelling of multiple scattering.Comparisons between the measurements and the predictions thus provide estimates of the rapidity-gap survival probability applicable to single dissociation processes at LHC energies.
Models and simulations
Monte Carlo (MC) simulations using leading-order (LO) calculations in perturbative QCD are used in unfolding the data to correct for experimental effects and in the comparison of the measurements with theoretical models.The PYTHIA 8.165 (hereafter referred to as PYTHIA8) general-purpose LO MC generator [19] is used to model dijet production in non-diffractive (ND) events, as well as in single diffractive dissociation (SD, pp → X p) and double diffractive dissociation (DD, pp → XY).An alternative model of the SD process is provided by POMWIG (version 2.0β) [20], whilst an alternative next-to-leading-order (NLO) model of the ND process is provided by POWHEG (version 1.0) [21,22].
Figure 1: Illustration of hard single-diffractive scattering, in which partons from a pomeron (IP) and from a proton enter a hard sub-process.The rapidity gap appears between the system X and the intact proton.
In both PYTHIA8 and POMWIG, hard scattering in diffractive processes takes place through the factorisable pomeron mechanism [8] illustrated in Fig. 1.A pomeron couples to an incoming proton, acquiring a fraction ξ of the proton's longitudinal momentum.The proton either scatters elastically (SD) or dissociates to form a higher-mass system (DD).A parton from the pomeron (as described by DPDFs) then undergoes a hard scattering with a parton from the dissociating proton at a scale set by the transverse momenta of the resulting jets.The dissociation system X has an invariant mass M X , such that ξ = M 2 X /s at a proton-proton centre-of-mass energy √ s.
POMWIG is based on a standard implementation of hard diffractive scattering with a factorisable pomeron, in which both the pomeron flux and the DPDFs are taken from the results of the H1 2006 DPDF fit B1 [11] and the proton PDF set is CTEQ61 [23].In contrast, PYTHIA8 provides a simultaneous model of hard and soft diffraction [24], in which a soft diffractive model inherited from PYTHIA6 [25] is smoothly interfaced to a hard diffractive model similar to that in POMWIG.The probability of using the hard model depends on M X .The H1 2006 DPDF fit B is again used for the partonic content of the pomeron and the proton partonic structure is taken from the CT10 PDFs [26].Several different pomeron flux parameterisations are available in PYTHIA8.In addition to the default Schuler and Sjöstrand (S-S) model [27], alternative parameterisations by Donnachie and Landshoff (D-L) [28] and Berger and Streng [29,30], as well as the Minimum Bias Rockefeller (MBR) model [31], are also considered in this analysis.These models differ primarily in their predictions for the ξ dependence of the cross section [24].The DD process in PYTHIA8 is modelled similarly to the SD process.Neither of the diffractive models considered here take rapidity-gap destruction effects into account, i.e. they set the rapidity gap survival probability S2 ≡ 1.
An alternative for ND processes is provided by the POWHEG NLO generator.As described in Ref. [22], the 'hardest emission cross section' approach used in POWHEG avoids the pathological behaviour observed in calculating cross sections with symmetric jet cuts in fixed-order NLO calculations.Here, NLO dijet production in the DGLAP formalism is interfaced with PYTHIA 8 to resum soft and collinear emissions using the parton shower approximation.
PYTHIA8 adopts the Lund String model [32] for hadronisation in each of the ND, SD and DD channels.
It also contains an underlying-event model based on multiple parton interactions (MPI).POMWIG is derived from HERWIG [33] and thus inherits its fragmentation and cluster-based hadronisation models.
For the purposes of this paper, the POWHEG ND simulation is interfaced to PYTHIA8 for fragmentation and hadronisation.All considered models based on the PYTHIA hadronisation model include p T -ordered parton showering, while those based on HERWIG use angular-ordered parton showering.
The default MC combination used for the data unfolding for detector effects is a mixture of PYTHIA8 samples of ND, SD and DD dijets, with the "ATLAS AU2-CT10" set of tuned parameters (tune) [34] for the underlying event.In this tune, the fraction of the total cross section attributed to the SD process is reduced relative to the default by 10% and that to DD by 12%, to better match early LHC data.The Berger-Streng parameterisation, which has a very similar ξ dependence to D-L, is chosen for the pomeron flux factor.Finally, the interaction of the particles with the ATLAS detector is simulated using a GEANT4-based program [35,36].
The ATLAS detector
The ATLAS detector is described in detail elsewhere [37].The beam-line is surrounded by a tracking system, which covers the pseudorapidity 2 range |η| < 2.5, consists of silicon pixel, silicon strip and straw tube detectors and is immersed in the 2 T axial magnetic field of a superconducting solenoid.The calorimeters lie outside the tracking system.A highly segmented electromagnetic (EM) liquid-argon sampling calorimeter covers the range |η| < 3.2.The EM calorimeter also includes a presampler covering |η| < 1.8.The hadronic end-cap (HEC, 1.5 < |η| < 3.2) and forward (FCAL, 3.1 < |η| < 4.9) calorimeters also use liquid argon for their sensitive layers, but with reduced granularity.Hadronic energy in the central region is reconstructed in a steel/scintillator-tile calorimeter.The shapes of the cell noise distributions in the calorimeters are well described by Gaussian distributions, with the exception of the tile calorimeter, where the noise has extended tails, and which is thus excluded from the rapidity gap finding aspects of the analysis.Minimum-bias trigger scintillator (MBTS) detectors are mounted in front of the end-cap calorimeters on both sides of the interaction point and cover the pseudorapidity range 2.1 < |η| < 3.8.The MBTS is divided into inner and outer rings, both of which have eight-fold segmentation.In the analysis, two trigger systems are used at Level-1 (L1), namely the MBTS which efficiently collects low-p T jets, and the calorimeter-based trigger (L1Calo) which concentrates on higher-p T jets.In 2010, the luminosity was measured by monitoring the activity in forward detector components, with calibration determined through van der Meer beam scans [38,39].
Experimental method
To study rapidity-gap production, the experiment needs to operate at very low luminosities such that there is on average much less than one collision per bunch crossing (i.e.negligible 'pile-up').This requirement has to be balanced against the need to collect adequate numbers of events with large rapidity gaps.The analysis therefore uses data from an early 2010 LHC run, with a total integrated luminosity of 6.8 nb −1 .
The average number of collisions per bunch crossing is 0.12.
The jet selection follows that used in the ATLAS 2010 dijet analysis [40].Jets with p T > 20 GeV and |η| < 4.4 are reconstructed by applying the anti-k t algorithm [41] to topological clusters at the standard ATLAS jet energy scale.For comparisons, in particle-level MC models, jets are formed with the anti-k t algorithm from stable (cτ > 10 mm) final-state particles.The analysis is performed with jets of two different radius parameters R = 0.4 and R = 0.6.Approximately twice as many jets are reconstructed with the R = 0.6 than with the R = 0.4 requirement in the kinematic range covered here.
The calorimeter-based jet trigger ('L1Calo') is used with the lowest available p T threshold in phase-space regions where its efficiency is determined to be greater than 60%.This criterion is satisfied for central jets at all pseudorapidities in the range |η| < 2.9 with p T > 29 (34) GeV for jets with R = 0.4 (0.6).At lower transverse momenta, or where the jets are beyond the L1Calo η range, the MBTS trigger is used, with the requirement of a signal in at least one segment.The MBTS trigger is fully efficient for dijet events, but has a substantial time-dependent prescale (which is taken into account in the off-line analysis), reducing the effective luminosity for forward and low-p T jets to 0.303 nb −1 .
At least two jets are required, with jet barycentres satisfying |η| < 4.4 and with p T > 20 GeV.These requirements correspond to the region in which the jet energy scale and resolution are well known and in which the jets are fully contained within the detector.
Several sources of background were investigated.To reject contributions from beam interactions with residual gas in the beampipe, muons from upstream proton interactions travelling as a halo around the proton beam, and cosmic-ray muons, events are required to have a primary vertex constructed from at least two tracks and consistent with the beam spot position.In-time pile-up, caused by multiple interactions in one bunch crossing, is suppressed by requiring that there be no further vertices with two or more associated tracks.Out-of-time pile-up, caused by overlapping signals in the detector from neighbouring bunch crossings, was investigated and found to be negligible at the large bunch spacings (> 5 µs) of the chosen runs.Once an event is triggered and the dijet selection criteria are met, the requirement on the primary vertex removes 0.3% and 0.2% of events in the L1Calo-and MBTS-triggered data, respectively, while the in-time pile-up suppression cuts remove 9.4% and 6.5%, respectively.The latter values are used to scale the cross sections to account for the corresponding losses.Residual background occurs due to the limited position resolution of the vertex reconstruction, which typically merges pairs of vertices with ∆z 1 cm into a single vertex.The size of this effect is estimated by extrapolation to lower values of the ∆z distribution for pairs of vertices which are resolved and its influence is evaluated by randomly overlaying minimum-bias events on the selected sample.The effect is smaller than 0.5% in all bins of the measured distributions.The residual beam-induced background is studied using 'unpaired' bunch crossings in which only one bunch of protons passes through the ATLAS detector and is found to be negligible.
Each event is characterised in terms of pseudorapidity regions which are devoid of hadronic activity ('rapidity gaps') using a method very similar to that first introduced in Ref. [2].Rapidity gaps are defined using the tracking (|η| < 2.5 and p T > 200 MeV) and calorimetric (|η| < 4.8) information within the ATLAS detector acceptance.Full details of the track selection can be found in Ref. [42].Following Ref. [2], the clustering algorithm accepts calorimeter cells as cluster seeds if their measured response is approximately five standard deviations above the root-mean-square noise level, with a small dependence of the threshold on pseudorapidity.Cells neighbouring the seed cell are included in the cluster if their measured energies exceed smaller threshold requirements defined by the standard ATLAS topological clustering method.The particle-level gap definition is determined by the region of pseudorapidity with an absence of neutral particles with p > 200 MeV and charged particles with either p > 500 MeV or p T > 200 MeV.These momentum and transverse momentum requirements match the ranges over which the simulation indicates that particles are likely to be recorded in the detectors, accounting for the axial magnetic field in the inner detector.The treatment of calorimeter information in the rapidity-gap determination follows the procedure introduced in Ref. [43], such that the requirement p T > 200 MeV for calorimeter clusters from the previous rapidity-gap analysis [2] is removed.Since this transverse momentum requirement corresponds to a very high momentum at large pseudorapidities, the modified approach more completely exploits the capabilities of ATLAS to detect low-momentum particles in the calorimeters.The total numbers of selected events in the L1Calo and MBTS samples with R = 0.6 are 285191 and 44372, respectively.
The variable characterising forward rapidity gaps, ∆η F , is defined by the larger of the two empty pseudorapidity regions extending between the edges of the detector acceptance at η = 4.8 or η = −4.8 and the nearest track or calorimeter cluster passing the selection requirements at smaller |η|.No requirements are placed on particle production at |η| > 4.8 and no attempt is made to identify gaps in the central region of the detector.In this analysis, the size of the rapidity gap relative to η = ±4.8lies in the range 0 < ∆η F < 6.5.For example ∆η F = 6.5 implies that there is no reconstructed particle with (transverse) momentum above threshold in one of the regions −4.8 < η < 1.7 or −1.7 < η < 4.8.
For events which are of diffractive origin, the Monte Carlo studies indicate that the rapidity-gap definition selects processes in which one of the incoming protons either remains intact (SD) or is excited to produce a system with mass M < 7 GeV (DD).In the second case, the system is typically restricted to a pseudorapidity region beyond the acceptance of the ATLAS detector.In both cases, the other incoming proton dissociates to produce a hadronic system of larger invariant mass M X .The gap size, ∆η F , grows approximately logarithmically with 1/M X , the degree of correlation being limited by event-to-event hadronisation fluctuations.
In this analysis, measurements of the energy deposits in each event are used to construct a variable, ξ which is closely correlated with ξ and is similar to that used in Ref. [15].Neglecting any overall transverse momentum of the system X, the relation holds for cases where the intact proton travels in the ±z direction.In other words, if the forward rapidity gap starts at η = +4.8(-4.8), the exponential function takes the positive (negative) sign.Here, the sum runs over all particles constituting the system X.This relation has the attractive feature that the sum is relatively insensitive to particles in the X system travelling in the very forward direction, i.e. those which are produced at large pseudorapidities beyond the detector acceptance.Correspondingly, the variable ξ is defined as At the detector level, the sum in Eq.( 2) runs over calorimeter clusters in the region |η| < 4.8.To best match this requirement, the corrected cross section is defined in terms of neutral particles with p > 200 MeV and charged particles with p > 500 MeV in the same pseudorapidity range.The correlation at the particle level between ξ and the true ξ (the latter obtained from elastically scattered protons) in the PY-THIA8 MC model of SD events with two jets, is shown in Fig. 2(a).For log 10 ξ < ∼ −2, there is a clear correlation between the fiducial ξ variable and ξ, which continues to larger ξ, but with a progressively worse correspondence as some components of the dissociation system which are included in the ξ calculation fail the fiducial requirement |η| < 4.8 applied in the ξ calculation.At low values, ξ is systematically slightly smaller than ξ, due to the exclusion of low-momentum particles from the ξ definition.Figure 2(b) shows the correlation between the reconstructed and particle-level determinations of ξ.According to the MC models, the resolution in the absolute value of log 10 ξ varies from around 0.07 at large ξ values to around 0.14 at small ξ. between the particle-level ξ and detector-level ξ calculated from clusters selected as defined in the text, using the sum of PYTHIA8 ND, SD and DD contributions.In both plots, the distributions are normalised to unity in each column.
The quality of the description of the uncorrected data by the PYTHIA8 Monte Carlo model is shown for several variables in Fig. 3. Here, the default ND component of PYTHIA8 is fixed to match the data in the first bin of the ∆η F distribution, requiring a normalisation factor of 0.71.The SD and DD contributions are shown without any adjustment of their normalisation.Satisfactory descriptions are obtained of the ∆η F and ξ variables, and also of the pseudorapidity and transverse momentum distributions of the leading jet, indicating that a combination of the diffractive and the non-diffractive PYTHIA8 components is appropriate for use in the unfolding of experimental effects.
The data distributions in ∆η F and ξ are corrected for detector acceptance and migrations between measurement bins due to finite experimental resolution using Iterative Dynamically Stabilised (IDS) unfolding [44].This procedure corrects for migrations between the particle and detector levels based on an 'unfolding' matrix, constructed from a combination of PYTHIA8 ND, SD and DD samples, as shown in The results of the IDS procedure depend in general on the number of iterations used.A fast convergence is achieved for both measured distributions and the fourth iteration is chosen as nominal since it optimises the balance between the systematic and statistical uncertainty arising from the unfolding procedure.The unfolding procedure is stable against variations in binning, number of iterations and the scaling factors applied to the diffractive and non-diffractive contributions in the PYTHIA8 model, as discussed further in Section 5.
Systematic uncertainties
The procedures for handling many of the sources of systematic uncertainty follow from previous ATLAS measurements.The full list of uncertainties considered is given below.Further details of the uncertainties affecting jets (sources 1-5 below) can be found in Ref. [40], while those affecting diffractive variables (sources 7-9) are elaborated in Ref. [2,43].
1. Jet energy scale: the largest source of uncertainty arises from the determination of the jet energy scale.This is obtained following the procedure in Ref. [40], where relative shifts are applied between the particle-level and detector-level response as a function of η and p T .This accounts for all effects playing a role in evaluating jet transverse momenta, including dead material, electronic noise, the different responses of the LAr and Tile calorimeters, the simulation of particle showers in the calorimeters, pile-up effects and the models of fragmentation used by different MC generators [45].Studies in the context of the current analysis show that the inclusive treatment is also appropriate for diffractive processes.As in Ref. [40], the dominant component of this uncertainty comes from the inter-calibration of jets in η.The total resulting uncertainty in the differential cross sections measured here varies from 20% for small gaps to ∼ 40% for very large gaps, a region which is dominated by diffractive events with relatively small transverse momentum or large pseudorapidity of jets.
2. Jet energy resolution: this is determined from data using in situ techniques and MC simulation [46].The resulting uncertainty on the cross-section measurements is evaluated by smearing the p T of the reconstructed jets in MC simulation using a Gaussian distribution to match the resolution uncertainty found in data.The resulting effect is below 6% in all kinematic regions.
3. Jet angular resolution: this was determined using the same techniques as for the jet energy resolution.Following the procedure in Ref. [40] leads to an uncertainty on the differential cross sections which is typically around 1-2% and largest for jets at the largest |η|.
4.
Jet reconstruction efficiency: the efficiency for reconstructing jets from the calorimeter information is determined by reference to a sample of 'track jets' reconstructed from inner-detector tracks.Following Ref. [40], the uncertainty is taken from the difference between the results of this procedure using data and MC simulation, with extrapolation to the η range not covered by the tracker.This results in systematic uncertainties in the measured cross sections which are smaller than 2% in all kinematic regions.
5. Jet cleaning efficiency: the fraction of jets that match the standard quality criteria, designed to remove jets associated with spurious calorimeter response, was studied using a tag-and-probe technique [40].The corresponding systematic uncertainties are obtained by applying looser and tighter selections to the tag jet and propagate to at most 8% in the cross sections measured here.
6. Trigger efficiency: the trigger efficiency is evaluated as a function of leading-jet transverse momentum in various pseudorapidity ranges using either an independently triggered data sample or the MC mixture used in Fig. 3.The rise near the threshold of the efficiency in each p T interval is parameterised based on a fit with free parameters.The efficiency is taken from the data, while the uncertainty is taken as the difference between two MC distributions: one assuming 100% trigger efficiency and the other rescaled by trigger efficiencies found in this MC sample in the same η and p T ranges as in the data.The resulting uncertainties are smaller than 3.5% for all measured bins.
A further parameterisation uncertainty, evaluated by varying the fit parameters within their uncertainties, is less than 0.7% for all measurements.An additional uncertainty, below 0.5% in all bins, is obtained from the differences in the simulated efficiencies from the ND, SD and DD processes.
7. Cluster energy scale: the uncertainty on the energy scale of the individual calorimeter clusters used to determine ξ is evaluated in an η-dependent manner as described in Ref. [43].The resulting uncertainty in the cross sections differential in ξ is typically 10%.
8. Cell significance threshold: the significance thresholds applied to suppress calorimeter clusters which are consistent with noise fluctuations, are shifted up and down by 10% to determine the corresponding systematic uncertainties.The weakened requirements on particle (transverse) momenta applied here compared with Ref. [2] increase the sensitivity to the threshold shifts, particularly in the forward regions, resulting in uncertainties on the differential cross sections of typically 10-20%.
9. Track reconstruction efficiency: the uncertainty on the track reconstruction efficiency is taken from Ref. [42], resulting in a negligible effect on the differential cross sections.
10. Luminosity: the uncertainty on the luminosity is taken from the luminosity determination for the year 2010 [39], resulting in a ±3.5% normalisation uncertainty on all measurements.
11. Reconstructed vertex requirement: the uncertainty on the efficiency of the vertex multiplicity requirement is evaluated by loosening it in data to include events with no vertices.This changes the differential cross sections by less than 1% in all bins.
12. Dead material: the effect of possible inaccuracies in the detector dead material simulation was studied in Ref. [2] using dedicated MC samples with modified material budgets (±10% around the central value) in the inner detector, services and calorimeters.The largest effect on any bin in that analysis was 3%, which is applied as a symmetric shift in each bin of the current measurement.
13. Unfolding procedure: the uncertainty associated with modelling bias introduced by the unfolding procedure is estimated using a data-driven procedure whereby the particle-level distributions of the MC sample are reweighted such that the corresponding detector-level distributions match the uncorrected data in the two-dimensional (∆η F , ξ)-space.The reweighted detector-level MC distribution is then unfolded using the same procedure as is applied to the data.The systematic uncertainty in each bin is taken to be the difference between the unfolded reweighted MC distribution and the reweighted particle-level MC distribution.The resulting unfolding uncertainty is typically around 15% for the ∆η F distribution (rising to 25% in the bin for the largest gaps) and is smaller than 10% in the case of the ξ distribution.Since the factors used to scale the ND and (SD+DD) processes to best describe the data before unfolding are different for the ∆η F and ξ distributions, a further uncertainty of up to around 5% is ascribed by swapping these factors between the two distributions.
The total systematic uncertainty is defined as the sum in quadrature of the uncertainties described above.The dominant contribution arises from the jet energy scale uncertainty, followed by the unfolding uncertainty, the cell significance threshold uncertainty (for the ∆η F distribution) and the cluster energy scale uncertainty (for ξ).The overall uncertainty varies between bins in the range 20% to 45%.There are strong correlations between the systematic uncertainties in neighbouring measurement intervals of both the ∆η F and ξ distributions.
Results
In this section, particle-level dijet cross sections are presented differentially in the variables ∆η F and ξ, both of which have discriminatory power to separate diffractive and non-diffractive contributions.The cross sections correspond to events with at least two jets with p T > 20 GeV in the region |η| < 4.4.The particle-level gap is defined by the region of pseudorapidity with an absence of neutral particles with p > 200 MeV and charged particles with either p > 500 MeV or p T > 200 MeV.The conclusions are not strongly dependent on the choice of R parameter in the anti-k t jet algorithm, although the cross-section normalisations are about two times larger for R = 0.6 than for R = 0.4.The data shown here correspond to R = 0.6.The results with both cone sizes can be found in tabular form in Ref. [47].
[nb] Figures 4(a) and 4(b) show the dijet cross section differentially in ∆η F and ξ for R = 0.6 jets.In contrast to related distributions in inclusive rapidity-gap measurements [2], the data in these figures do not show any significant diffractive plateau at large gap sizes.This difference is of kinematic origin, resulting from the reduced phase space at large gap sizes or small ξ when high-p T jets are required.Both distributions are compared with predictions from the PYTHIA8 MC model, decomposed into ND, SD and DD components, with the D-L flux choice.The normalisation of the ND contribution in both distributions is fixed to match the data in the first bin of ∆η F , where this component is expected to be heavily dominant, requiring a multiplicative factor of 1/1.4.The SD and DD normalisations are left unchanged from their defaults in PYTHIA8.This MC combination results in a satisfactory description of both distributions.The ND component is at least an order of magnitude larger than the SD and DD contributions for relatively small ∆η F 1 and large ξ 0.1.As ∆η F grows or ξ falls, the diffractive components of the models become increasingly important, such that the ND and (SD+DD) components are approximately equal at ∆η F ∼ 3 or log 10 ξ ∼ −2.At the largest gaps (∆η F 5) and smallest ξ ( ξ 0.003), the model suggests that the diffractive components are approximately twice as large as the ND contribution.
A dijet cross section differential in ξ has also been measured by CMS [15].The ATLAS and CMS hadron level cross-section definitions are slightly different in terms of the η, p and p T ranges of the particles considered and the jet R parameter.Nonetheless, the measured cross sections are similar in magnitude and both analyses lead to the conclusion that a non-negligible ND contribution extends to relatively large ∆η F and small ξ.
The predicted ND contribution at large gap sizes is sensitive to the modelling of rapidity and transverse momentum fluctuations in the hadronisation process, which are not yet well constrained.To establish the presence of a diffractive contribution, it is therefore necessary to investigate the likely range of ND predictions.In Fig. 5, the dijet cross sections differential in ∆η F and ξ are compared with the PYTHIA8 ND contribution and also with an NLO calculation of non-diffractive dijet production in the POWHEG framework, with hadronisation modelled using PYTHIA8, as described in Section 2. Each of the ND predictions is separately normalised in the first bin of the ∆η F distribution.The range spanned by the ND predictions suggests a substantial uncertainty in the probability of producing gaps through hadronisation fluctuations, such that for ∆η F 4, it is not possible to draw conclusions on the presence or absence of an additional diffractive contribution.However, in both of the models, the ND prediction falls significantly short of the data for ∆η F 4. A similar conclusion is reached at the lowest ξ.This region is therefore investigated in more detail in the following.Since the diffractive contribution is characterised by both large ∆η F and small ξ, it can be separated most cleanly by placing requirements on both variables simultaneously.In Fig. 6, the ξ distribution is shown after applying the requirement ∆η F > 2. This restricts the accessible kinematic range to ξ 0.01, and suppresses the ND contributions considerably.As shown in Fig. 6(a), the ND contribution in the lowest ξ bin (−3.2 < log 10 ξ < −2.5) is smaller than 25% according to all models considered, allowing for a quantitative investigation of the diffractive contribution.
The data are compared with various models of diffractive dijet production with no rapidity-gap survival probability factors applied.The PYTHIA8 ND+SD+DD model is shown in Fig. 6(b) for three different choices of pomeron flux, Schuler-Sjöstrand (S-S), Donnachie-Landshoff (D-L) and Minimum Bias Rock-efeller (MBR), as described in Section 2. The SD contribution dominates in this kinematic region, as can be inferred by comparing the PYTHIA8 predictions in Fig. 6(b) with the PYTHIA8 ND and PYTHIA8 DD contributions in Fig. 6(a).There is some dependence of the predicted cross section on the choice of flux, but all three PYTHIA8 predictions are compatible with the data without the need for a rapidity-gap survival probability factor, the D-L flux giving the best description.In contrast, the POMWIG model of the SD contribution alone lies above the data by around a factor of three in the low ξ, large ∆η F region (Fig. 6(a)).Both PYTHIA8 and POMWIG are based on implementations of DPDFs as measured at HERA.POM-WIG is a straightforward implementation of a standard factorisable Pomeron model with standard matrix elements, specifically intended for use in comparison with diffractive hard scattering processes such as that measured in this paper.PYTHIA8 is intended to describe diffraction inclusively.It contains a complex transition between the hard (DPDF-based) and soft models, and the corresponding mechanisms for generating final-state particles.The large difference here between the predictions of PYTHIA8 and POMWIG may be a consequence of this difference in basic approach.The quality of the description of the data by PYTHIA8 is not altered significantly if the modelling of multi-particle interactions, colour reconnections, or initial-or final-state radiation are varied.
Attributing the POMWIG model's excess over the data in the most sensitive region to absorptive effects, the data are compared quantitatively with POMWIG to determine the rapidity-gap survival probability S 2 appropriate to this model.The value of S 2 is determined from the region where the poorly known ND contribution is smallest, i.e. integrated over the range −3.2 < log 10 ξ < −2.5 after imposing the rapidity-gap requirement ∆η F > 2 as in Fig. 6.The estimate of S 2 is obtained from the ratio of data to the SD contribution in the POMWIG model after subtracting from the data the ND contribution as modelled by PYTHIA8 and the DD contribution assuming the SD/(SD+DD) ratio from PYTHIA8.No gap survival factors are applied to the subtracted ND and DD contributions.The size of these corrections can be inferred from the PYTHIA8 ND and DD contributions as indicated in Fig. 6(a).A correction factor 1.23 ± 0.16 [48] is applied to S 2 to account for the fact that the H1 2006 Fit B DPDFs used in POMWIG include proton dissociation contributions ep → eXY where the proton excitation has a mass M Y < 1.6 GeV, in addition to the SD process.
The resulting extracted value of the rapidity-gap survival probability appropriate to the mixed POMWIG / PYTHIA8 model is S 2 = 0.16 ± 0.04 (stat.)± 0.08 (exp.syst.) , where the statistical (stat.)and experimental systematic (exp.syst.)uncertainties are propagated from the data.This model is shown as 'POMWIG S 2 Model' in Fig. 6(b).No attempt has been made to fully assess the model-dependence uncertainty, although changing the ND contribution in the extraction from PYTHIA8 to POWHEG + PYTHIA8 results in an S 2 of 0.15 and indications from elsewhere [14,15] suggest that S 2 might be smaller if NLO models were used.The result is compatible with the values of 0.12 ± 0.05 and 0.08 ± 0.04, obtained by CMS in LO and NLO analyses, respectively, using the region 0.0003 < ξ < 0.002 and a jet R parameter of 0.5 [15].The result is also compatible with that obtained at lower centre-of-mass energy at the Tevatron [7], which was re-evaluated in a subsequent NLO analysis [49] to be between 0.05 and 0.3, depending on the fraction of the pomeron momentum carried by the parton entering the hard scattering.Theoretical predictions for S 2 at the LHC [50,51] are also compatible with the result here, although the predicted decrease with increasing centre-of-mass energy is not yet established.
Conclusions
An ATLAS measurement of the cross section for dijet production in association with forward rapidity gaps is reported, based on 6.8 nb −1 low pile-up 7 TeV pp collision data taken at the LHC in 2010.The data are characterised according to the size of the forward rapidity gap, quantified by ∆η F and ξ, which for the single-diffractive case approximates the fractional longitudinal momentum loss of the scattered proton using the information available within the detector acceptance.Non-diffractive Monte Carlo models are capable of describing the data over a wide kinematic range.However, a diffractive component is also required for a more complete description of the data, particularly when both large ∆η F and small ξ are required.The PYTHIA8 model gives the best description of the shape and normalisation of this contribution.
The rapidity-gap survival probability is estimated by comparing the measured cross section for events with both large ∆η F and small ξ with the leading-order POMWIG Monte Carlo model of the diffractive contribution, derived from diffractive parton distribution functions extracted in deep inelastic ep scattering.This determination is limited by the uncertainties associated with the non-diffractive and double-dissociation contributions, the result being S 2 = 0.16 ± 0.04 (stat.)± 0.08 (exp.syst.).The crucial computing support from all WLCG partners is acknowledged gratefully, in particular from CERN and the ATLAS Tier-1 facilities at TRIUMF (Canada), NDGF (Denmark, Norway, Sweden), CC-IN2P3 (France), KIT/GridKA (Germany), INFN-CNAF (Italy), NL-T1 (Netherlands), PIC (Spain), ASGC (Taiwan), RAL (UK) and BNL (USA) and in the Tier-2 facilities worldwide.
Figure 2 :
Figure2: (a) Particle-level correlation between the ξ variable extracted from the diffractively scattered proton and ξ calculated from particles selected as defined in the text, using the PYTHIA8 SD MC model.(b) Correlation between the particle-level ξ and detector-level ξ calculated from clusters selected as defined in the text, using the sum of PYTHIA8 ND, SD and DD contributions.In both plots, the distributions are normalised to unity in each column.
Figure 3 :
Figure 3: Comparisons of dijet cross sections from uncorrected data with a combination of PYTHIA8 diffractive and non-diffractive contributions at detector level based on jets found by the anti-k t algorithm with R = 0.6.The MC distributions are normalised to the integrated luminosity of the data after first applying a factor of 0.71 to the ND contribution.The error bars correspond to the statistical uncertainties.In addition to the measured (a) ∆η F and (b) ξ variables, the distributions in (c) the leading-jet pseudorapidity and (d) transverse momentum are also shown.The lower panels show ratios of the MC models to the data where the error bars indicate the sum in quadrature of the statistical uncertainties arising from the data and the MC simulation.
Fig. 2 (
Fig.2(b).The MC combination is optimised in a simple fitting procedure in which scaling factors are applied to the ND and (SD+DD) components to best match the data.The IDS unfolding is performed in two dimensions, corresponding to the p T of the leading jet and the target distribution (either ∆η F or ξ).The results of the IDS procedure depend in general on the number of iterations used.A fast convergence is achieved for both measured distributions and the fourth iteration is chosen as nominal since it optimises the balance between the systematic and statistical uncertainty arising from the unfolding procedure.The unfolding procedure is stable against variations in binning, number of iterations and the scaling factors applied to the diffractive and non-diffractive contributions in the PYTHIA8 model, as discussed further in Section 5.
Figure 4 :
Figure 4: The differential dijet cross sections in (a) ∆η F and (b) ξ, compared with the particle-level PYTHIA8 model of the SD, sum of diffractive components SD and DD, and sum of all three ND, SD and DD components.The Donnachie-Landshoff pomeron flux model is used for the diffractive components.The error bars on the data and the MC models indicate their respective statistical uncertainties, while the yellow bands show the total uncertainties on the data.The ND contribution is normalised to match the data in the first ∆η F bin.The lower panels show ratios of the MC models to the data where the error bars indicate the sum in quadrature of the statistical uncertainties arising from the data and the MC simulation.
Figure 5 :
Figure 5: The dijet cross sections differential in (a) ∆η F and (b) ξ, compared with the PYTHIA8 ND MC model as well as an ND model using the NLO POWHEG generator with hadronisation based on PYTHIA8.Each of the models is separately normalised to match the data in the first ∆η F bin.The error bars on the data and the MC models indicate their respective statistical uncertainties, while the yellow bands show the total uncertainties on the data.The lower panels show ratios of the MC models to the data where the error bars indicate the sum in quadrature of the statistical uncertainties arising from the data and the MC simulation.
Figure 6 :
Figure 6: The differential cross section as a function of ξ for events satisfying ∆η F > 2. The same data are shown in (a) and (b), and are compared with models as described in the text.The error bars on the data and the MC models indicate their respective statistical uncertainties, while the yellow bands show the total uncertainties on the data.The 'POMWIG S 2 ' model represents the sum of PYTHIA ND and POMWIG, with POMWIG multiplied by 0.16 and scaled by 1/1.23 and by the (SD+DD)/SD ratio from PYTHIA8. | 9,503.2 | 2015-11-02T00:00:00.000 | [
"Physics"
] |
Regulation of Heparan Sulfate 6-O-Sulfation by β-Secretase Activity*
The enzymes involved in glycosaminoglycan chain biosynthesis are mostly Golgi resident proteins, but some are secreted extracellularly. For example, the activities of heparan sulfate 6-O-sulfotransferase (HS6ST) and heparan sulfate 3-O-sulfotransferase are detected in the serum as well in the medium of cell lines. However, the biological significance of this is largely unknown. Here we have investigated by means of monitoring green fluorescent protein (GFP) fluorescence how C-terminally GFP-tagged HS6STs that are stably expressed in CHO-K1 cell lines are secreted/shed. Brefeldin A and monensin treatments revealed that the N-terminal hydrophobic domain of HS6ST3 is processed in the endoplasmic reticulum or cis/medial Golgi. Treatment of HS6ST3-GFP-expressing cells with various protease inhibitors revealed that the cell-permeable β-secretase inhibitor N-benzyloxycarbonyl-Val-Leu-leucinal (Z-VLL-CHO) specifically inhibits HS6ST secretion, although this effect was specific for HS6ST3 but not for HS6ST1 and HS6ST2. However, Z-VLL-CHO treatment did not increase the molecular size of the HS6ST3-GFP that accumulated in the cell. Z-VLL-CHO treatment also induced the intracellular accumulation of SP-HS6ST3(-TMD)-GFP, a modified secretory form of HS6ST3 that has the preprotrypsin leader sequence as its N-terminal hydrophobic domain. Diminishment of β-secretase activity by coexpressing the amyloid precursor protein of a Swedish mutant, a potent β-secretase substrate, also induced intracellular HS6ST3-GFP accumulation. Moreover, Z-VLL-CHO treatment increased the 6-O-sulfate (6S) levels of HS, especially in the disaccharide unit of hexuronic acid-GlcNS(6S). Thus, the HS6ST3 enzyme in the Golgi apparatus and therefore the 6-O sulfation of heparan sulfates in the cell are at least partly regulated by β-secretase via an indirect mechanism.
Heparan sulfate (HS) 2 proteoglycans play important roles in various biological processes by acting as co-receptors, by serving as reservoir molecules in morphogen gradient formation etc. (1)(2)(3)(4)(5). The sulfation domains (S domains) of HS are the binding sites for the growth factors and morphogens. Analyses of fruit fly, zebrafish, and mice mutants have revealed that the specific patterns of the sulfations determine the bioactivities of the HS proteoglycans (6 -10). In particular, 6-O-sulfation of HS has been shown to be required for the fibroblast growth factor (FGF) and Wnt signaling pathways in Drosophila and zebrafish (11,12). With regard to the link between 6-O-sulfation of HS and the FGF signaling pathway, heparan sulfate 6-O-sulfotransferase (HS6ST) RNA interference experiments in fruit fly have demonstrated that the interfered phenotype closely resembles those of mutants that are defective in FGF signaling components (11). In addition, by subjecting a sulfated octasaccharide library to an affinity chromatographic assay, Ashikari-Hada et al. (13) have shown that the 6-O-sulfation of HS is important for binding activities of FGF-10, -4, and -7. With regard to the link between HS 6-O-sulfation and the Wnt signaling pathway, the knockdown of HS6ST in zebrafish with morpholino antisense oligonucleotides results in perturbed muscle differentiation that is associated with higher expression of the Wnt target genes myoD and eng2. QSulf1, the avian heparan sulfate 6-O-endosulfatases, is required for the activation of MyoD, which is a Wnt-induced regulator of muscle specification (14). Thus, the 6-O sulfation of HS plays important roles in regulating HS-binding growth factor signaling and morphogen gradient formation.
Three isoforms of HS6STs have been identified in mice and humans (15)(16)(17)(18). The expression patterns of these isoforms are regulated in spatially and temporally different manners. Their substrate specificities also differ (16,17), since HS6ST1 preferentially catalyzes the sulfation of the L-iduronic acid (IdoA)-GlcNS disaccharide unit, whereas HS6ST2 prefers the D-glucuronic acid (GlcA)-GlcNS and IdoA(2S)-GlcNS disaccharides, and HS6ST3 has intermediate substrate specificity between * This work was supported in part by grants-in-aid for scientific research (B) from the Japan Society for the Promotion of Science, grants-in-aid for scientific research (C) from the Japan Society for the Promotion of Science (17570099), a grant-in-aid for scientific research on priority areas 14082206 and 18770119 from the Ministry of Education, Culture, Sports, Science and Technology of Japan, and a special research fund from Seikagaku Corp. The costs of publication of this article were defrayed in part by the payment of page charges. This article must therefore be hereby marked "advertisement" in accordance with 18 U.S.C. Section 1734 solely to indicate this fact. 1 To whom correspondence should be addressed: Institute for Molecular Science of Medicine, Aichi Medical University, Nagakute, Aichi 480-1195, Japan. Tel.: 81-52-264-4811 (ext. 2088); Fax: 81-561-63-3532; E-mail<EMAIL_ADDRESS>HS6ST1 and HS6ST2. Thus, different patterns of HS 6-O sulfation could be produced by three HS6ST isoforms with different substrate specificities. Since every HS-modifying enzyme is thought to be a Golgi resident protein, but HS6STs are rapidly secreted into the culture medium (19), the degree of 6-O-sulfation occurring in the cell may also be regulated by the amount of the enzyme protein in the Golgi apparatus.
In this study, we have investigated the mechanism by which HS6STs are secreted. We have focused the analysis on HS6ST3 for several reasons. First, it is secreted at higher levels than the other two HS6STs (Fig. 1B). Second, the HS6ST1 gene has another ATG codon 30 bp upstream from the translation initiation codon, as reported previously (16). This results in a 10amino acid longer form that can be also translated in HS6ST1green fluorescent protein (GFP)-transfected CHO-K1 cells. This may confuse analyses that seek to determine whether the protein has been processed or not. Third, the HS6ST2 gene also has another ATG codon, albeit far upstream from the other translation initiation codon. Transfection of CHO-K1 cells with this longer HS6ST2-GFP expression vector revealed poor secretion of this longer form (see Fig. 3D), and we could not determine which ATG codon is actually used in vivo. We showed that the short cytoplasmic and hydrophobic domain of HS6ST3 was sufficient for secretion, which appeared to be cleaved in the early secretory pathway (see Fig. 2, A-D). We further showed that HS6ST3 secretion is regulated by -secretase by treating cells with a cell-permeable -secretase inhibitor and by competing -secretase activity by overexpressing one of its substrates (Fig. 3, A-C). Moreover, SP-HS6ST3(-TMD)-GFP, which is a GFP-labeled, secretory modified form of HS6ST3 whose N-terminal hydrophobic domain has been replaced with the preprotrypsin leader sequence, also accumulated in the cell when it was treated with the -secretase inhibitor (Fig. 4). In addition, this treatment increased the 6-O sulfation of HS by about 2-fold, with hexuronic acid (HexA)-GlcNS(6S), HexA-N-acetylglucosamine (GlcNAc)(6S), and HexA(2S)-GlcNS(6S) disaccharide units showing a 3.7-, 1.2-, and 2.3-fold increase in 6-O sulfation (Fig. 5). Therefore, it is likely that -site APP-cleaving enzyme (BACE1)-like enzymes that have -secretase activity may be involved in HS6ST3 secretion and may thus play important roles in modulating 6-Osulfated HS structures.
Cell Culture, Transfection, and Western Blotting-CHO-K1 and its transfectants were maintained in Dulbecco's modified Eagle's medium/F-12 medium (Sigma) supplemented with 10% fetal calf serum (Cell Culture Technologies, Lugano, Switzerland) except that when the medium fraction was to be subjected to Western blotting, the medium was supplemented with 1% fetal calf serum. The cells were transfected with the APP sw or BACE1 expression plasmids (21) by using FuGENE6 transfection reagent (Roche Applied Science) according to the manufacturer's instructions. In some cases, cells were treated with BFA (10 g/ml) or monensin sodium salt (5 M) for 4 h. Cell lysates were prepared as described previously (20). Medium aliquots were centrifuged at 1,000 ϫ g for 5 min to remove cell debris. The resulting cell lysates and medium aliquots were processed for Western blotting according to standard procedures and subjected to electrophoresis in 8% polyacrylamide gel. The proteins in the gels were transferred to polyvinylidene difluoride membranes (Millipore, Billerica, MA) and reacted with rabbit polyclonal anti-GFP antibody (diluted 1:1000). To detect the antibody, a peroxidase-conjugated secondary antibody (CAPPEL, Irvine, CA) was used, and immunocomplexes were revealed by Western Lightning TM Chemiluminescence Reagent Plus (PelkinElmer Life Sciences).
Triton X-114 Phase Separation-Cell lysates were prepared by using Triton X-114 as a detergent. Cell lysates were incubated at 37°C for 3 min and centrifuged at 1700 ϫ g for 5 min at room temperature. An upper detergent-poor phase and a lower detergent-rich phase were collected separately and subjected to Western blotting with an anti-GFP or anti-calnexin antiserum (Stressgen Bioreagents, Ann Arbor, MI).
Cell Staining-Mouse monoclonal anti-GM130 was obtained from BD Transduction Laboratory (Lexington, KY), mouse monoclonal anti-Myc antibody (9B11) was from Cell Signaling Technology (Danvers, MA), and Alexa Fluor-conjugated goat anti-mouse IgG antibodies were from Invitrogen Japan (Tokyo, Japan). Cells were fixed and stained as described previously (20). Immunofluorescence was detected under an LSM5 PASCAL confocal microscope (Carl Zeiss Japan, Tokyo, Japan).
Protease Inhibitor Treatment-Cells were prepared at 1 ϫ 10 6 /well on 6-well plates 1 day prior to protease inhibitor treatment. The protease inhibitors were dissolved in dimethyl sulfoxide (Z-VLL-CHO) or in phosphate-buffered saline (KTEEIS-VEN-Stat-VAEF-OH) at the concentrations 100 times as much as the final ones (final concentrations: Z-VLL-CHO, 1 M; KTEEISVEN-Stat-VAEF-OH, 100 nM). After incubating the cells for the indicated times, the cell lysates and media were separately analyzed as described above. The intensity of the GFP fluorescence was measured by using a fluorescent spectrophotometer, F-3010 (Hitachi High-Technologies, Tokyo, Japan), as described previously (20).
Disaccharide Analysis-Confluent cells grown in 15-cm dishes were treated with Z-VLL-CHO for 30 min prior to radiolabeling with 0.1 mCi/ml [ 35 S]H 2 SO 4 for 7 h. After two washes with phosphate-buffered saline, the cells were scraped off, suspended in 4 ml of 0.2 M NaOH, incubated for 16 h at room temperature, neutralized with 4 M acetic acid, and then treated with DNase (100 g) and RNase (100 g) at 37°C for 2 h. Proteinase K (100 g) was then added, and the incubation continued for 16 h at 37°C. The reaction was stopped by heating at 100°C for 5 min. The samples were centrifuged at 10,000 rpm for 15 min to remove insoluble materials. The supernatants were applied to a DEAE-Sephacel column equilibrated with 50 mM Tris-HCl buffer (pH 7.2) containing 0.2 M NaCl and subsequently washed with 10 column volumes of 0.2 M NaCl in 50 mM Tris-HCl buffer (pH 7.2). Fractions eluted with 2 M NaCl in the same buffer were collected. Two and a half volumes of cold ethanol containing 1.3% (w/v) potassium acetate were added to these fractions, and the glycosaminoglycans were recovered as precipitates by centrifugation at 13,000 rpm for 30 min at 4°C. The precipitates were dissolved in water, and portions of the solutions were treated at 37°C for 3 h with a mixture of 1 milliunit of heparitinase I, 2 milliunits of heparitinase II, and 1 milliunit of heparitinase III in 50 l of 50 mM Tris-HCl buffer, pH 7.2, containing 1 mM CaCl 2 and 4 g of bovine serum albumin. After the digests were filtered with Ultrafree-MC (5,000 molecular weight limit; Millipore Corp., Bedford, MA), unsaturated disaccharides in the filtrates were separated by a silicabased polyamine column as described previously (22). The radioactivity of the fraction was determined by using the LS 6500 scintillation counter (Beckman Coulter, Fullerton, CA).
RESULTS
We previously showed that 6-O-sulfotransferase activity could be detected in the conditioned medium of cultured CHO-K1 cells (19). Analysis of the amino acid sequence of HS6ST1 in the culture medium revealed that its N-terminal amino acid started from tyrosine 25. Subsequent cDNA cloning showed that Tyr 25 and Gln 24 are conserved in all of the HS6ST isoforms (HS6ST1, -2, and -3) that have been identified to date in mice, humans, rats, dogs, chicken, and zebrafish (Fig. 1A). We speculated that these residues might be a common cleavage site. To further analyze the cleavage and secretion of the HS6STs, we cultured CHO-K1 cell lines that stably express HS6STs tagged with GFP at their C termini and then measured the GFP fluorescence that had been secreted into the culture medium (Fig. 1B). Fluorescence in the culture medium of the untransfected CHO-K1 parental cell line was not observed. The conditioned media of cells expressing HS6ST1-GFP, HS6ST2-GFP, and HS6ST3-GFP did show GFP fluorescence. Western blotting with the anti-GFP antibody indicated that the HS6ST-GFPs in the medium were not derived from degradation products as they were the same size as the intracellular HS6ST-GFPs (see Fig. 2B; data not shown). However, when the conserved QY residues were altered to AY, QA, and AA, the secretion of these residues were not affected or even enhanced (Fig. 1B). This indicates that these residues do not after all participate in HS6STs cleavage or secretion.
We then sought to define the mechanism by which the HS6STs are secreted. To do this, we focused on HS6ST3 rather than on the other two HS6STs for the reasons described above. We first sought to define the compartment in which HS6ST3 is processed. To do so, we treated the HS6ST3-GFP-expressing CHO-K1 cells with BFA or monensin, which block different (15). *, amino acids that are identical in the HS6STs. TMD, transmembrane domain. B, substitution of the QY amino acids with AY, QA, or AA does not influence HS6ST secretion. CHO-K1 cells stably transfected with expression vectors bearing the indicated substitutions were grown to confluence in 6-well plates, washed, and cultured in fresh medium for 8 h. The GFP fluorescence in the culture medium and cell lysate was then detected by Hitachi F-3010 fluorescence spectrophotometry with an excitation wavelength of 388 nm and an emission wavelength of 407 nm. HS6ST secretion was expressed as a ratio of the fluorescence intensity of the culture medium to that of the cell lysate. For comparison, the secretion of the stable transfectant expressing HS6ST1-GFP was set to 100%.
-Secretase Regulates Heparan Sulfate 6-O-Sulfation
steps of the intracellular transport process, and examined the resulting molecular form of HS6ST3-GFP by Western blot analysis employing anti-GFP antiserum. If HS6ST3-GFP processing occurs after it reaches the trans-Golgi, both its full-sized and processed forms would be present in BFA-treated cells. This is because BFA treatment disassembles the early Golgi complex and its fusion with the endoplasmic reticulum, and thus the uncleaved product would accumulate in the cell. Monensin reversibly slows the intracellular transport rate of newly synthesized proteins, in particular by interfering with the transfer across Golgi compartments; thus, it compromises secretion from the trans-Golgi (23). If HS6ST3-GFP is cleaved only after reaching the cell surface, both monensin and BFA treatment would result in the accumulation of the uncleaved HS6ST3-GFP product. As shown in Fig. 2A, HS6ST3-GFP secretion was almost completely blocked when the cells were treated with either inhibitor. Western blot analysis was then performed to determine the molecular sizes of the intracellular HS6ST3-GFP synthesized by the untreated and BFA-and monensin-treated cells. For this analysis, the cell lysates were first incubated with PNGase F to remove N-glycans that could simplify the analysis. The HS6ST3-GFP molecules in the BFA/monensin-treated cells were the same size as those in the untreated controls even when electrophoresed for a long time to intensify the size difference (Fig. 2B, top). This suggests that HS6ST3-GFP is processed early in the secretory pathway, namely in either the endoplasmic reticulum or the cis/medial-Golgi. To confirm this possibility, we performed the Triton X-114 phase separation assay. Bordier (24) has shown that transmembrane proteins can be specifically and quantitatively recovered in the detergent-rich phase. Heating of aqueous solution containing Triton X-114 above the temperature called the cloud point leads to the formation of large micelles that sediment rapidly during low speed centrifugation. An upper detergent-poor phase and a lower detergent-rich phase are formed, with the latter phase containing membrane lipids and transmembrane proteins. HS6ST3-GFP was extracted into the aqueous phase after centrifugation regardless of the BFA and monensin treatment, suggesting that the N-terminal hydrophobic domain was cleaved off early in the secretory pathway (Fig. 2B, bottom).
Since the N-terminal cytoplasmic sequence of HS6ST3 is relatively short, we hypothesized that the N-terminal cytoplasmic and hydrophobic domain might have the signal peptide activity of a secretory protein. Signal peptides of secretory proteins, which are cleaved by the signal peptidase, are known to consist of three contiguous regions; a 1-5-amino acid region of positively charged residues, a 7-15-hydrophobic amino acid region, and a 5-7-amino acid region containing residues with small side chain at the Ϫ1and Ϫ3-positions (25)(26)(27)(28). If the N-terminal hydrophobic sequence of HS6ST3 is cleaved by signal peptidase, adding several hydrophilic amino acids to its N terminus would compromise its recognition by the signal peptidase. To test this possibility, we added FLAG-peptide to the N terminus of HS6ST3-GFP and substituted the ATG codon of HS6ST3 with ACG (FLAG-HS6ST3(ATGϾACG)-GFP). Cell lysates and the medium of CHO-K1 cells stably expressing HS6ST3-GFP or FLAG-HS6ST3(ATGϾACG)-GFP were subjected to Western blotting analysis using anti-GFP antiserum. As shown in Fig. 2D, HS6ST3-GFP was detected both in the cell lysate and in the conditioned media. However, FLAG-HS6ST3(ATGϾ MAY 18, 2007 • VOLUME 282 • NUMBER 20 ACG)-GFP was exclusively found in the cell lysate. Next, we substituted the N-terminal hydrophobic domain of HS2ST with that of HS6ST3, resulting in a HS2ST chimeric protein (HS6ST3-2ST-GFP). HS2ST-GFP was not detected in the culture medium (Fig. 2D). If the N-terminal hydrophobic domain of HS6ST3 were involved in signal peptidase recognition, substituting the N-terminal hydrophobic domain of HS2ST with that of HS6ST3 would result in a secretory protein. Western blotting was performed on the cell lysates and the medium of CHO-K1 cells stably expressing HS2ST-GFP or HS6ST3-2ST-GFP. HS6ST3-2ST-GFP, but not HS2ST-GFP, was detected in the conditioned medium (Fig. 2D). Thus, the N-terminal cytoplasmic and hydrophobic domains of HS6ST3 are sufficient for secretion. Next, we performed the Triton X-114 phase separation experiment for these mutants. As shown in Fig. 2C, SP-GFP (secretory form of GFP), HS6ST3-GFP, and HS6ST3-2ST-GFP were extracted into the aqueous phase after Triton X-114 phase separation. HS2ST-GFP, FLAG-HS6ST3(ATGϾACG)-GFP, and endoplasmic reticulum membrane protein calnexin were found both in the aqueous and detergent-rich phase. Thus, the N-terminal cytoplasmic and hydrophobic domains of HS6ST3-GFP and HS6ST3-2ST-GFP would be cleaved off before secretion to become soluble protein. On the other hand, FLAG-HS6ST3(ATGϾ ACG)-GFP was not cleaved, since the addition of FLAG tag sequence would compromise the recognition by signal peptidase. These results support our hypothesis that the N-terminal cytoplasmic and hydrophobic domains of HS6ST3-GFP behave as a signal peptide sequence of secretory protein.
-Secretase Regulates Heparan Sulfate 6-O-Sulfation
In the course of the experiment to identify the protease that is responsible for the cleavage of HS6ST3-GFP, we treated the cells with various protease inhibitors (aprotinin, EDTA, iodoacetic acid, leupeptin, N-ethylmaleimide, pepstatin A, phenylmethylsulfonyl fluoride, N ␣ -tosyl-L-lysine chloromethyl ketone hydrochloride, N-p-tosyl-L-phenylalanine chloromethyl ketone, KTEEISEVN-Stat-VAEF, N-benzyloxycarbonyl-Leu-leucinal, Z-VLL-CHO, and 1,3-di-(Ncarboxybenzoyl-Leu-Leu)amino acetone). If a particular inhibitor were involved in blocking HS6ST3-GFP secretion, the extracellular GFP fluorescence would decrease, whereas the intracellular fluorescence would increase. We found no significant effect when the following protease inhibitors were used: aprotinin, EDTA, iodoacetic acid, leupeptin, N-ethylmaleimide, pepstatin A, phenylmethylsulfonyl fluoride, N ␣ -tosyl-L-lysine chloromethyl ketone hydrochloride, N-p-tosyl-L-phenylalanine chloromethyl ketone, KTEE-ISEVN-Stat-VAEF, N-benzyloxycarbonyl-Leu-leucinal, 1,3di-(N-carboxybenzoyl-Leu-Leu)amino acetone (data not shown). However, the potent effect was observed with the cell-permeable -secretase inhibitor Z-VLL-CHO (Fig. 3A), since it decreased extracellular fluorescence intensity by 30 -50% of the intensity of untreated cells (Fig. 3A). Concomitantly, it increased the intracellular fluorescence intensity by 200 -300% of the intensity of untreated cells (Fig. 3A). We used two lines of CHO-K1 cells stably expressing different levels of HS6ST-GFP and found that both cell lines respond to Z-VLL-CHO similarly (data not shown). Other inhibitors did not show any significant effect. Western blot-ting of the cell lysate and the medium with anti-GFP antibody showed that Z-VLL-CHO treatment did not appreciably change the overall amount of HS6ST3-GFP (Fig. 3A, bottom left). The cell-impermeable -secretase inhibitor KTEEISEVN-Stat-VAEF had no effect. To ensure that the observed effect of Z-VLL-CHO did not arise from a block to general cellular transport, we treated CHO-K1 cells expressing the secretory form of GFP (SP-GFP) with Z-VLL-CHO or vehicle for 8 h and measured the fluorescent intensity of the culture medium and the cell lysates (Fig. 3A). Z-VLL-CHO treatment decreased the fluorescent intensity of the culture medium to 75%. This may be due to Z-VLL-CHO toxicity. However, the fluorescence of the cell layer was nevertheless increased by Z-VLL-CHO treatment. Therefore, the observed effect of Z-VLL-CHO does not result from nonspecific inhibition of cellular transport.
Although Z-VLL-CHO is widely used as a specific inhibitor of -secretase, there may be other proteases that can also be inhibited by this inhibitor. To examine the specificity of the inhibitory effect of Z-VLL-CHO on HS6ST3-GFP secretion, the accumulation of HS6ST3-GFP in the treated cells was observed up to 12 h by fluorescence microscopy. In the untreated cells, HS6ST3-GFP co-localized with the Golgi marker GM130 (Fig. 3B), as shown in our previous paper (20). However, when the cells were treated with Z-VLL-CHO, HS6ST3-GFP accumulated in the cell, initially in the Golgi apparatus (Fig. 3B). Longer exposure of the cells to the inhibitor resulted in the leaking of HS6ST3-GFP from Golgi apparatus, since HS6ST3-GFP fluorescence showed wider distribution than the GM130 fluorescence labeled with Alexa594.
To investigate whether -secretase is directly responsible for the secretion of HS6ST3-GFP, we first examined if transiently expressing human APP with a Swedish mutation (APP sw ), a well known and potent -secretase substrate (29), would alter HS6ST3-GFP secretion (data not shown). We also investigated the effect of transiently transfecting the vector expressing BACE1, which has -secretase activity (30, 31) (data not shown). The effects of expressing APP sw or BACE1 were quite small; this is because the efficiency of transient transfection was very low (about 15%) even under the best possible conditions. The competition of -secretase activity by APP sw expression increased the GFP fluorescence in the cell, whereas conversely, BACE1 expression enhanced HS6ST3 secretion (data not shown). These observations suggest that HS6ST3 secretion is dependent on -secretase activity. Since the difference in fluorescence measured above was quite small, we confirmed the enhanced secretion of HS6ST3-GFP following BACE1 transfection by fluorescent microscopy. HS6ST3-GFP-expressing cells were transiently transfected with the Myc-tagged BACE1 expression vector and treated with or without BFA to visualize HS6ST3-GFP expression. Cells were stained with anti-Myc antibody followed by Alexa594-conjugated anti-mouse antibody and counted the number of BACE1-positive cells with or without HS6ST3-GFP expression. Before BFA treatment, the number of BACE1-and HS6ST3-GFP-double-positive cells/ BACE1-positive cells was 38/136. After BFA treatment, the number of BACE1-and HS6ST3-GFP-double-positive cells/
-Secretase Regulates Heparan Sulfate 6-O-Sulfation
BACE1-positive cells was 79/129. The relative number of BACE1-and HS6ST3-GFP-double-positive cells increased significantly by BFA treatment (p Ͻ 0.01; Fisher's exact probability test). The representative photographs were shown in Fig. 3C. Cells expressing BACE1 were less GFP-fluorescent compared with BACE1-nonexpressing cells (no treatment). BFA treatment increased the relative number of doubly positive cells (BFA). The relative number of HS6ST3-GFP-expressing cells/ total cells (no treatment, 127/146; BFA, 109/137; p Ͼ 0.05; Fisher's exact probability test) was not significantly increased by BFA treatment. The cells transiently expressing APP sw had an abnormal morphology (data not shown), which may be caused by the toxic effect of A40 and A42 produced by BACE (32)(33)(34).
To determine whether -secretase directly processes HS6ST3-GFP, we analyzed the molecular weight of HS6ST3-GFP within cells after the treatment with Z-VLL-CHO or vehicle by Western blotting. Z-VLL-CHO treatment increased the overall cellular HS6ST3-GFP levels (Fig. 3A, bottom), which is consistent with the Z-VLL-CHO-induced elevation in intracellular fluorescence intensity shown in the top of Fig. 3A. However, the molecular weight of the intracellular HS6ST3-GFP did The fluorescence intensities of the conditioned medium and cell lysate of untreated cells were set to 100%, respectively. Treatment with Z-VLL-CHO had no effect on the secretion of SP-GFP. Lower left, the Z-VLL-CHO-mediated inhibition of HS6ST3-GFP secretion was verified by Western blotting analysis of the lysates of the cells treated with or without Z-VLL-CHO for 8 h. For this, equal amounts (30 g) of cell lysates were subjected to Western blotting using anti-GFP antiserum after treatment with PNGase F. The HS6ST3-GFP molecules in the cell were similarly sized regardless of the treatment with Z-VLL-CHO. The overall amounts of HS6ST3-GFP (cell lysate and medium) did not change after Z-VLL-CHO treatment. Equal amounts (30 g) of cell lysates were subjected to Western blotting using anti-GFP antiserum without PNGase F treatment. Lower right, Western blotting of Z-VLL-CHOtreated or nontreated cell lysates after Triton X-114 phase separation without PNGase F treatment. The aqueous (A) and the detergent-rich (D) phase were subjected to Western blotting with anti-GFP or anti-calnexin antiserum. B, time-dependent increase of intracellular HS6ST3-GFP levels in Z-VLL-CHO-treated cells that stably express HS6ST3-GFP. The cells were fixed at the indicated time after Z-VLL-CHO treatment, permeabilized, and stained with mouse monoclonal anti-GM130 (a Golgi marker), followed by incubation with Alexa Fluor 568 anti-mouse IgG. C, CHO-K1 cells expressing HS6ST3-GFP were transiently transfected with the BACE1 expression vector, and half of the transfected cells were treated with BFA to block the secretion. Cells were stained with anti-Myc (9B11) antibody followed by Alexa594-conjugated goat anti-mouse IgG antibody. D, cells stably expressing the isoforms of HS6ST1 and HS6ST2 that have longer N termini by 10 and 146 amino acids, respectively, were cultured. Left, the GFP-derived fluorescence in the conditioned medium and cell lysates was analyzed by fluorescence spectrophotometry. The secretion efficiency was calculated as a ratio of the fluorescence intensity of the culture medium divided by that of the cell lysate. For comparison, the secretion of the stable HS6ST1-GFP transfectant was set to 100%. Total GFP fluorescence was as follows: HS6ST1-GFP, 22 MAY 18, 2007 • VOLUME 282 • NUMBER 20 not increase (Fig. 3A, lower left). By Triton X-114 phase separation, HS6ST3-GFP was extracted into the aqueous phase after centrifugation regardless of the Z-VLL-CHO treatment (Fig. 3A, lower right). Thus, it appears that -secretase regulates the level of intracellular HS6ST3 that received the cleavage. This is further supported by our observation that cleavage products were not detected when BACE1-Fc (21) was incubated in vitro with the HS6ST3-protein A fusion protein. 3 By analyzing the DNA data base of the full-length cDNA project, the HS6ST1 gene was revealed to have another ATG codon 30 bp upstream from the translation initiation codon as reported previously (16). This results in a 10-amino acids longer form that can be also translated in HS6ST1-GFP-transfected CHO-K1 cells. We also discovered that the N termini HS6ST2 can be 146 amino acids longer, respectively, than those we previously reported (16), due to the presence of an additional initiation codon upstream from our previously published initiation codon. To investigate whether these longer HS6ST1 and HS6ST2 are also secreted into the medium like their shorter isoforms, we cloned them by reverse transcription-PCR and subcloned them into the GFP expression vector such that the GFP tag was on their C termini. Stable cell lines transfected with these vectors secreted HS6ST1Long-GFP but not HS6ST2Long-GFP into the culture medium (Fig. 3D). Interestingly, treatment of HS6ST1-GFP-and HS6ST2Long-GFP-expressing cells with Z-VLL-CHO had no effect of their secretion patterns (Fig. 3D), which indicates that the secretase-dependent mechanism that controls the intracellular pool of HS6ST is specific to HS6ST3.
-Secretase Regulates Heparan Sulfate 6-O-Sulfation
The above data suggested that the N-terminal domain of HS6ST3-GFP is cleaved off to generate the secreted form, whose secretion is subsequently regulated in -secretase-dependent manner. To test this hypothesis further, we asked whether Z-VLL-CHO treatment could also cause the secreted form of HS6ST3-GFP to accumulate in the cell. To do this, we stably expressed in CHO-K1 cells a modified form of HS6ST3-GFP in which the N-terminal hydrophobic domain has been replaced with the signal peptide of human preprotrypsin; this generated the secretory HS6ST3-GFP fusion protein SP-HS6ST3(-TMD)-GFP (Fig. 4). We then assessed the effect of Z-VLL-CHO treatment on the secreted GFP fluorescence. As shown in Fig. 4, the secreted fluorescence intensity in the medium of Z-VLL-CHO-treated cells was lower than that of untreated controls. Thus, the cleaved form of HS6ST3 is retained intracellularly if -secretase activity is blocked, which further supports the notion that the mechanism behind HS6ST3 secretion is dependent on -secretase.
DISCUSSION
That HS sulfation is important for the function of many HSbinding growth factors and morphogens has been revealed by analyzing mutant animals in which the activity of various HS biosynthetic enzymes has been disrupted. For example, the embryos of Drosophila sugarless and sulfateless, which lack the UDP-D-glucose dehydrogenase and heparan sulfate N-deacetylase/N-sulfotransferase enzymes, respectively, have similar phenotypes to embryos lacking the functions of fibroblast growth factors (35). RNA interference-mediated knockdown of HS6ST in Drosophila also resulted in defective fibroblast growth factor signaling (11). Thus, the regulation of HS sulfation may well be a key mechanism that controls the bioactivity of many HS-interacting proteins. That the sulfotransferases themselves may participate in regulating HS sulfation is suggested by the fact that the HS6ST isoforms have inherently 3 S. Kitazume, unpublished observation.
-Secretase Regulates Heparan Sulfate 6-O-Sulfation
different substrate specificities and generate a variety of sulfation patterns. Although their substrate specificities partially overlap, each individual isoform has a characteristic preference for the uronic acid residue neighboring the N-sulfoglucosamine residue. HS6ST1 predominantly sulfates the IdoA-GlcNS unit, whereas HS6ST2 preferentially sulfates the GlcA-GlcNS and IdoA(2S)-GlcNS units to generate trisulfated disaccharide units in HS, and HS6ST3 has the intermediate preference between HS6ST1 and HS6ST2. The degree of sulfation may also play an important role in regulating the bioactivity of HSbinding proteins. So far, little is known about the mechanism regulating the level of sulfation. In this study, we showed for the first time that cells are able to regulate the degree of 6-O-sulfation by changing their intracellular levels of HS6ST3, which is one of the three enzymes that 6-O-sulfate HS. Moreover, we showed that -secretase activity may be partly responsible for regulating the intracellular localization of HS6ST3.
Of the three HS sulfotransferases, HS6ST and heparan sulfate 3-O-sulfotransferase, but not HS2ST, can be purified from the culture medium, which indicates that these enzymes are secretory proteins (15). 4 We previously showed that 6-O-sulfotransferase activity could be detected in the conditioned medium of cultured CHO-K1 cells (19). Analysis of the amino acid sequence of HS6ST1 in the culture medium revealed that its N-terminal amino acid was Tyr 25 . Since Tyr 25 and Gln 24 are conserved in all HS6ST isoforms (HS6ST1, -2, and -3), we speculated that HS6ST3 may be cleaved at this position. When the conserved QY residues were altered to AY, QA, and AA, the secretion of these mutant proteins was not affected or was even enhanced (Fig. 1B), which indicates that these residues did not participate in HS6ST3 cleavage. Alternatively, the mutations may have resulted in cleavage at a different position (see below). Although we showed previously that HS6STs are Golgi-resident type II transmembrane proteins (20), there were no experimental data about the cellular compartment in which HS6ST is cleaved. To investigate this, we treated the HS6ST3-GFP-expressing cells with BFA and monensin (Fig. 2B). Both inhibitors almost completely abrogated HS6ST3-GFP secretion but, unexpectedly, had no effect on HS6ST3-GFP cleavage, as determined by 8% polyacrylamide gel electrophoresis of the cell lysates and conditioned medium after removing the N-linked sugar chain with the PNGase treatment (Fig. 2B). Although it remains possible that the extracellular form of HS6ST3-GFP may have been modified in some way that compensates for the loss of molecular weight arising from the cleavage of the N-terminal hydrophobic domain, the above data nevertheless suggest that HS6ST3-GFP is cleaved in the early phase of the secretory pathway, before reaching the trans-Golgi. This was further confirmed by the Triton X-114 phase separation assay. As shown in Fig. 2B, HS6ST3-GFP was extracted into the aqueous phase after centrifugation regardless of the BFA and monensin treatment. These results suggest that the N-terminal hydrophobic domain of HS6ST3-GFP was cleaved to become soluble protein early in the secretory pathway.
We hypothesized that the N-terminal cytoplasmic and hydrophobic sequences of HS6ST3 would behave like a signal peptide of a secretory protein for several reasons. First, the predicted cytoplasmic domain of HS6ST3 is short (7 amino acids) and has two positively charged amino acids followed by a hydrophobic domain and some small amino acids. This domain was predicted to be a signal peptide by the SOSUIsignal analysis (available on the World Wide Web at bp.nuap.nagoya-u.ac.jp/ sosui/sosuisignal/sosuisignal_submit.html). Second, the phase partitioning assay (Fig. 2C) demonstrated that HS6ST3-GFP was soluble, suggesting the cleavage of its hydrophobic transmembrane domain. The molecular form of HS6ST3-GFP in the cell was the same as that of the extracellular form, even after treating the cells with BFA (Fig. 2B). Third, when several hydrophilic residues were added to its N terminus, HS6ST3-GFP was retained in the cell (Fig. 2D) as a membrane protein (Fig. 2C). Fourth, substituting the N-terminal hydrophobic domain of HS2ST for that of HS6ST3 rendered the chimeric protein soluble (Fig. 2C) and secretory (Fig. 2D). Although we could not show directly that a signal peptidase cleaves HS6ST3-GFP, these experiments strongly suggest that the N terminus of HS6ST3-GFP behaves like a signal peptide of a secretory protein. A series of mutants with substitutions in the conserved QY residues (AY, QA, AA) were also predicted to have signal peptide activity according to SOSUI signal analysis. It is not an easy task to determine the N-terminal amino acid of the cleavage products of the HS6ST3-GFP and its QY mutants, because limiting amounts are available. At the moment, we cannot state categorically whether the N-terminal hydrophobic sequences of HS6ST1 and HS6ST2 behave as signal peptides of secretory protein or not.
We treated the transfected CHO-K1 cells with a series of protease inhibitors and found that the secretion of HS6ST3 was only inhibited by the cell-permeable -secretase inhibitor 4 H. Habuchi and K. Kimata, unpublished observations. | 7,747.6 | 2007-05-18T00:00:00.000 | [
"Biology",
"Chemistry"
] |
Advancing Building Energy Management System to Enable Smart Grid Interoperation
With the emerging concept of smart grid, a customer's building facility is able to consume, generate, and store energy. At the center of the facility, an energy management system is required to perform an efficient energy management and to enable smart grid interoperation. However, the existing EMS is designed only for the management function and does not support the interoperation. To resolve the issue, this paper designs the interoperation function and proposes a new EMS model, named Premises Automation System (PAS), that also inherits the existing management function. To identify functional requirements, we bring out four design aspects out of energy services. Then, we design two categories of interoperable energy services to achieve customer interoperation. To demonstrate the feasibility of PAS, we implement and deploy a testbed in a campus. We conduct experiments with a microgrid scenario and present interesting measurements and findings.
Introduction
Smart grid is a nationwide project to modernize the 100-yearold power infrastructure by integrating the state-of-the-art information technology for two ultimate goals: (1) balance the power demand (consumption) with the supply via active interoperation amongst energy resources and (2) accelerate the use of environment-friendly renewable energy sources.
In the smart grid context, building facilities, including industrial, commercial, and residential sectors, have been primary energy consumers; they consume 72% of total energy in the US [1]. In the future, the facilities will be capable of generating and storing energy with potential inclusion of Electric Vehicles (EVs), solar panels, and batteries. To manage such complicating energy resources, research community has developed an intelligent energy management system (EMS). Its eventual goal is to maximize energy efficiency in a building and to minimize the electricity cost by making the best use of energy resources available in a building. To this end, the EMS communicates with individual building equipment to collect its energy data and to control them separately: fine-grained management. The EMS, then, analyzes the data collection so as to detect any inefficient building operations and failures.
From the smart grid perspective, the customer's building facility becomes the most important entity to interoperate, as its energy capability (demand, generation, and storage) dramatically increases. For instance, when the bulk power source confronts shortage of power supply, the customer is able to reduce current power consumption, which can prevent blackouts. The EMS in the facility is required to support such smart grid interoperation by enabling customer energy resources to interact with other systems outside the facility. However, the existing EMS has been designed as a standalone system without any consideration of the interoperation aspect.
To resolve the issue, we propose a new design of EMS, named Premises Automation System (PAS). PAS aims at accommodating both the customer need of efficient energy management and the grid need of customer interoperation. To address the customer need, PAS inherits fundamental design issues from the existing EMS model. It connects to customer energy resources that use heterogeneous communication protocols and technologies and manages them in a fine-grained manner. To address the grid need, we first review existing and potential energy services that realize the customer interoperation and then classify them into two categories: grid service and customer service. In a grid service, 2 International Journal of Distributed Sensor Networks the customer facility receives and consumes service data delivered from smart grid, while the facility provides service data to the grid in a customer service. For each category of service, we examine functional requirements of the EMS in the four aspects of service data type, communication interface to realize the service, required intelligence (data processing and knowledge generation), and security and privacy.
To demonstrate the feasibility of PAS, we develop and deploy a testbed in our campus. In the testbed, PAS connects to and manages various types of energy resources, consumes an automated Demand Response service, generates valuable energy forecast data, and provides energy services to smart grid based on the proposed service model in a secure manner. We run experiments to evaluate these functionalities under a microgrid scenario and illustrate interesting measurements and observations. The rest of the paper is organized as follows. Section 2 describes smart grid and its interoperation with customer facility. It also introduces four aspects to consider in the design of smart griddable EMS. Section 3 proposes the PAS design that addresses fine-grained energy management, grid energy service, and customer energy service. The PAS testbed is implemented and illustrated in Section 4, which is followed by experiments with the microgrid scenario in Section 5. Finally, we conclude the paper in Section 6.
Smart Grid.
Smart grid aims at making the existing power grid more intelligent and interoperable by allowing bidirectional flows of information and electrical power. By integrating the state-of-the-art information and communication technologies to the power infrastructure, energy resources with embedded sensors generate valuable data that is, then, shared with all other resources in smart grid. Such information flow enables smart grid to monitor status of power generation and consumption accurately and respond quickly to potential failure. Smart grid also allows a bidirectional flow of electricity, compared with today's power grid-electricity flows from central bulk generators to end consumers. Various types of renewables can be installed on the consumers' side and supply power back to the grid reversely. On top of the information and power network, the operational goal of a smart grid system is to maximize interoperations amongst energy resources so as to balance the power demand with the supply, which eventually makes the power grid more reliable and sustainable. To facilitate the interoperations, National Institute of Standards and Technology (NIST) presents a conceptual model consisting of seven domains, each of which represents a high-level grouping of smart grid entities having similar objectives [2].
Customer Domain.
In the conceptual model, the customer domain represents customer facilities (e.g., office, campus, and home) that consume more than 70% of total energy in the US. Traditionally, a building automation system controls facility equipment for the purpose of occupant comfort and optimal business operations. Today, the introduction of smart grid changes the customers' awareness and expectation about their energy management. They want to see breakdowns of energy usage and to take actions to reduce energy costs. Moreover, they are interested in instrumenting new types of energy resources like solar panel within the facilities. To meet the emerging customer needs, recent research on the customer domain has developed an advanced energy management system (EMS) to build a smart building. It performs fine-grained energy measurements and controls, say, at individual home/office appliance level. It optionally analyzes the collected data and controls equipment in a way to maximize the energy efficiency inside the customer facility.
Although the existing EMS research makes the customer facility more intelligent to satisfy the customer needs, it has barely taken the grid need of interoperation into consideration. As the customers' capabilities of energy consumption, generation, and storage increase, it becomes of the most importance to interoperate with the facility for the purpose energy balance in smart grid. And the EMS is expected to play a gateway role interconnecting the facility to other domains for the interoperation. Thus, the design of the EMS must be enhanced so as to enable customer energy resources to interact with other smart grid entities outside the facility, that is, supporting customer interoperation.
Customer Interoperation and Energy Service.
Customer interoperation is an interaction of the customer facility with external domains in which the customer's own resources are engaged. Such interoperation is well realized by energy services, and we refer to a couple of literatures that present use cases of energy services [3,4].
Design Aspects for Smart Griddable Energy Management System. The interoperable energy services are divided into two categories: grid service and customer service. In the grid service, a customer facility receives and consumes service data delivered from an external domain. For instance, the facility becomes a client of a service that a local utility company provides. In the customer service, the customer facility plays as a service provider, and external domains use facility's services as clients. Each category of service is characterized by four aspects that must be considered in customer interoperation: (i) Service data could be energy measurement, energy forecast, control message, conventional information such as weather forecast, and power price. (ii) Service interface enables interdomain communications under which there are three issues of interface abstraction, data representation, and interaction model. (iii) Intelligent unit performs interpretation of external data, knowledge generation, and decision making to take energy-related actions. (iv) Security addresses the most critical security concern at each category of energy service.
Taking these aspects into consideration, the following section designs a smart griddable energy management system, named Premises Automation System.
Related Works.
Interoperation among heterogeneous systems in the smart grid field has been one of the most critical engineering problems and many research ideas have been proposed. While researchers have addressed and evaluated excellent interoperation models, few of them verify the feasibility of their models throughout experiments on real world testbeds and extensive data analysis. Warmer et al. [5] discuss the potential of Service-Oriented Architecture (SOA) in an emerging smart house environment. Authors see the Internet and web services as the key to enable the interaction between the house with its smart devices and the supply companies and electricity service operators to exchange supply bids and Demand Response (DR) related data. They insightfully summarize the concept of how web services drive radical changes in the energy system. Morvaj et al. [6] develop a simulation model for analysis of interactions within an envisioned smart city in order to demonstrate basic features of several smart houses. The proposed model acts in accordance with expected behavior, according to current and future features of smart grid including utilization of renewables, energy control on demand and supply side, and price-based signaling. Patti et al. [7] present the design and implementation of a service-oriented infrastructure composed of a middleware, a database, and a network interface layer for public space monitoring. The proposed scheme highlights providing an effective and hardware independent user interface. For interested readers, we refer to a survey paper [8] that reviews architectures and concepts for intelligence in future electric energy systems.
Premises Automation System
This section examines the design issues of PAS under two service categories. We also investigate issues on the management of customer energy resources so that PAS satisfies both the customer needs and the grid needs. For better understanding, we illustrate a system architecture in Figure 1.
Management of Customer Energy
Resources. PAS, as an energy management system, must be able to manage internal energy resources. This subsection briefly discusses two fundamental issues.
Accessing to Heterogeneous Energy Devices.
A customer facility consists of a number of pieces of building equipment (energy resources) from various vendors. They often use proprietary data formats, contexts, and communication protocols, and PAS must cope with such heterogeneity to make all the energy data generated within a facility understandable in a unified manner.
To address the issue, many organizations recommend using standardized data representation models. Examples include ANSI C12.19 data model for smart meters [9] and ZigBee Smart Energy Profile (SEP) in the home area network context [10]. However, most legacy systems still generate data in the proprietary format. PAS must be able to handle and transform such data into a standard format. Standardization efforts touching this issue include Facility Smart Grid Information Model (FSGIM) [11], Open Building Information Exchange (oBIX) [12], and Web Service for Building Automation and Control Networks (BACnet/WS) [13]. In academia, researchers have developed programmable APIs that allow us to access customer energy resources in a unified manner [14]. Haggerty et al. [15] leverage the SensorML scheme [16] and define a data model that fits to tiny sensor devices embedded into energy resources.
Fine-Grained Measurement and Control.
Energy resources in the smart building are equipped with embedded systems, and PAS gathers detailed information from them. Such information includes power usage, status of the resource, and availability of load shedding and shifting. Finegrained measurement benefits building management in many aspects. A building owner is informed of a breakdown of energy usage, which motivates him to take actions for energy savings. Detailed data can be further processed together with other sensory data, say from occupancy sensors, for more efficient building operations. PAS also controls the energy resources individually. Say, the building owner is required to reduce 50 KW of power for 2 hours. He does not want to stop the entire building operations. He would rather find unnecessary or less prioritized energy loads first and perform customized controls. A new type of energy resource also necessitates the fine-grained control. For instance, smart appliances such as washer and dryer are able to shift their operations, and an LED light adjusts its brightness levels.
Many smart building researches have been devoted to the capability of measurement and control. Plug-load meters, which can do energy metering and control for plug-type loads, have been developed in projects [17,18]. Bellala et al. analyze time series data of energy usage in a commercial campus, from which their algorithm detects anomalous usage periods representing unusual power consumption [19].
Service Data and Communication Interface.
In most grid services, smart grid sends conventional information data to the customer facility that, then, makes use of the information to trigger actions and to generate meaningful knowledge.
For instance, a Demand Response Automation Server (DRAS) provides a DR service by generating Demand Response (DR) event signals to notify customers of status changes on the power supply side [20]. Then, PAS accepts and understands the service signals, delivers event information to appropriate energy resources to control, and sends a DR event report back to the DRAS. Many standardization efforts have defined communication protocols for such grid services. Examples include Energy Interoperation (EI) [21], Open Automated Demand Response (OpenADR) [20], IEC Common Information Model (CIM) family of standards, and Weather Information Exchange Model (WXXM) [22]. The design of the service interface for the grid services is straightforward. To become a service consumer, PAS must develop communication counterparts of the grid services.
Intelligent Unit.
It is imperative that PAS translates the contexts of the grid services into the semantics that are being used within the customer facility. Upon receiving a DR signal from smart grid, PAS may control a group of energy resources according to a preprogrammed rule. In this procedure, the most demanding intelligences in PAS are an accurate interpretation of the grid service data (DR signal) and a decision making of how new information affects the operations of energy resources. For instance, a DR signal in OpenADR can deliver different forms of information. It may include realtime power price, expecting the customer to react to the price change. He may decide not to change the building operations if the benefit of staying overwhelms the increased energy bill. The DR signal may deliver a specific amount of energy curtailment that the customer must be enforced. To respond to the signal, PAS curtails and/or shifts less critical energy loads so as to minimize impacts on building operations while satisfying the grid's requirement. Or, it may decide to use onsite generations.
Another intelligence required in PAS is to create new knowledge and service based on grid service data. Zhu et al. estimate energy harvesting by a solar array given a weather forecast [23]. Then, they develop a control algorithm that determines when and how much energy to store in a local battery storage to minimize the electricity bill. In [24], given EV owners' charging profiles and real-time power price, the authors develop a Vehicle-to-Grid (V2G) scheduling algorithm that works at a large scale EV charging structure.
Security.
Most grid service data is open to public and thus is not often encrypted. Instead, message integrity is of the most importance. That is, it must be ensured that the messages are not tampered in transmit. For instance, a DR signal delivers the changes of power price over time that must be transparent by law. Altered price data, but, may mislead the customers' increasing energy consumption on emergency, which causes shortage of power supply and blackouts. PAS must take care of this issue.
Service Data.
In the customer energy service, the facility serves as a service provider to external domains. Unlike the grid service, PAS transmits data that is directly related to energy resources, which is classified as follows: (i) Energy measurement represents the instantaneous capability of power consumption, generation, and storage. In addition to current, voltage, power, and energy values, we include power quality data (e.g., reactive power) in this category.
(ii) Resource status represents current status of an energy resource. In addition to simple on/off equipment, an LED light has two status types of brightness and temperature, whereas the status of a power storage is indicated by State of Charge (SOC) [%].
(iii) Command message is service data delivered from external domains to control a resource or to change its configuration.
(iv) Energy history is a collection of historical data of energy measurement per each resource.
(v) Energy forecast is an estimation of future energy activity on each energy resource.
Service
Interface. PAS provides service data to external domains via service interfaces. It also accepts messages that eventually change the status of energy resources. To this end, a couple of public reports mention an Energy Service Interface (ESI) as a gateway actor [2,25], and Hardin defines it conceptually [4]. After examining it deliberately, we bring three fundamental design issues to help us determine what and how specific communication interfaces are implemented. First, PAS must consider interface abstraction that determines the appropriate level of internal details that it exposes to external domains. High level of abstraction exposes internal business logics with less details, while low level of abstraction exposes more details of internal operations. A welldesigned abstraction transmits only the necessary service data to external domains, while shielding them from changes occurring within the customer facility. Next, PAS must represent service data in a standardized format. To address the issue, smart grid takes a Canonical Data Model (CDM) approach. A data producer transforms its output to a standardized information model, and then a consumer transforms it back to its own terminology. Standard data models discussed in Section 3.2 can be good candidates for the CDM. Last, PAS must support efficient interaction models that determine how PAS communicates with external systems. Recently, a web service (WS) has been paid attention due to its interoperability and scalability. For instance, OpenADR specifies two types of bindings (SOAP and HTTP) and two types of message exchange patterns (PUSH and PULL). Recent literatures actively exploit the Representational State Transfer (REST) style [26] that defines data and service as an object and transfers the object's state via HTTP's methods.
Intelligent Unit.
In the customer service, PAS is required to compute energy forecast. Since the facility is the most dynamic energy entity, predicting its energy activity is essential to achieve energy balance in smart grid.
PAS performs demand forecast. In the past, large customers such as manufacturing factories and tall office buildings consumed most customer power. But as power hungry equipment such as air conditioning and Electric Vehicle has been installed within smaller customer facilities, it becomes more difficult that smart grid estimates energy demand accurately. In this sense, an accurate demand forecast from each customer facility is necessary for smart grid to plan future power generation. PAS also performs generation forecast. Although the renewables in the customer facility represent clean energy sources, they still suffer from being variable or intermittent. As smart grid relies on them, such unpredictability may threaten the energy balance. Accurate generation forecast can mitigate the risk. Unlike demand and generation forecast, storage forecast is often computed from them today. But if the battery in an EV can be used as power storage in the future, storage forecast will be more meaningful service data and its computation will become more complicated as vehicles move around.
An analysis of big data collection of energy measurement, history, and forecast is recently attracting huge attention. But it does not necessarily contribute to customer interoperation, so we leave it as a future research.
Security.
In the customer service, we primarily consider two security requirements, confidentiality and access control. First, the customer facility generates a huge amount of energy data that is highly likely to disclose personal identifiable information of the occupants. Since PAS transmits it to external domains via insecure public networks, a strong data encryption mechanism must be considered. Next, PAS in the customer service permits only authorized users to read customer data and to control energy resources. A new challenge in the access control is how to cope with fine granularity that the smart building pursues. Each customer facility will include a myriad of equipment and energy resources, each of which can be accessible by external systems individually. Moreover, an access for data reading must be distinguished from that for resource control. This would make the access control rules in PAS too complicated. An access control mechanism must also comply with the service interaction model. Unfortunately, the existing WS-Security resolves the integrity issue only [27].
Implementation of PAS Testbed
To demonstrate the feasibility of the design issues, we implement and deploy a PAS testbed. We deploy various types of energy resources and develop a PAS system running the required functions as shown in Figure 1.
Energy Resources.
The energy resources considered in the testbed are smart submeter, plug-load meter, smart equipment, EV charging station, and solar panel. Some of them are pictured in Figure 2.
Smart Submeter. Unlike a conventional smart meter that measures aggregated energy usage, a smart submeter provides fine-grained measurement and control. Our testbed deploys two types of submeters. We instrument a panel-level multisubmeter that simultaneously connects up to 36 single phase circuits within a panel [28]. Using it, we monitor two groups of energy loads, the lightings and power outlets at an office. We also install mini-submeters that are instrumented to single power lines [29]. It can directly connect to a light switch that turns on/off a set of fluorescent lights. These submeters use Current Transformer (CT) to convert current to voltage, and an embedded microcontroller calculates the real, reactive, and apparent powers and energy usage. They are also with relays, and the microcontroller switches the power upon requests.
Plug-Load Meter. As the plug-loads account for more than one-third of the total power consumption in a building [30], it is necessary to manage them carefully. To this end, we deploy two types of plug-load meters: smart plugs and smart power strips. Office appliances are plugged into them: computers, monitors, desk lamps, and network switches. The plug-load meter is functionally similar to a submeter, that is, energy measurement and control. It communicates with PAS using a ZigBee module. We note that ZigBee becomes one of the most promising technologies for wireless communications in a smart building system. The ZigBee protocol is patent-free and thus it costs low. ZigBee adopts collision avoidance schemes, providing communication reliability that is strongly required in wireless environment. It also allows ZigBee nodes to constitute a mesh topology, making the ZigBee network scalable.
Smart Equipment. Smart equipment represents such energy resources that must be accessed directly. Recent smart appliance, programmable thermostat, and LED light fall into this category. Each equipment has its own operation cycles beyond a simple on/off control and is able to adjust the operations upon external requests. The PAS testbed deploys dimmable LED panel lights that adjust their brightness and color temperature in 8 steps. Each light uses a ZigBee module to transmit its status and to accept control commands to/from PAS. PAS also connects to two types of smart appliances via the Ethernet: a clothes dryer and a refrigerator. PAS is able to change the strength of the heat (high, low, or no heat) as well as turn on/off the operation. The refrigerator adjusts the operating cycles of compressor, defrost, and fan. To measure energy usage, the mini-submeters are instrumented to their input power cables.
EV Charging Station. A number of charging stations have been deployed at campus parking structures in our sister project. A station powers several EVs via J1772 connectors simultaneously and supports multiple charging levels [31]. It is capable of measuring charging capacity as well as charging rate. Each station sends the charging data in real time to a management server in our laboratory that controls the stations based on subscribers' profiles and preference. PAS communicates with the stations via the server. Because of low penetration of EVs, however, we could not collect enough data for our experiments. As a complementary work, we simulate charging activities based on measurements and obtain ample amount of data. Solar Panel. PAS also connects to a Photovoltaic (PV) solar panel. We are currently installing a new one on the roof, and, instead, this version of testbed implements a virtual panel that follows the same hardware specification of the real device. Our virtual resource obtains the real-time solar radiation data (GHI (Global Horizontal Irradiation), DNI (Direct Normal Irradiation), and temperature [ ∘ C]) from Solar Resources and Meteorological Assessment Project (SOLRMAP) [32] and then computes power (P), voltage (V), and current (I) values using the following equations. Notation Description gives the description of notation used in our scheme. Solar power is For voltage V, we compute for running from 0 to max every 0.01 steps as follows: = × ( 0 × (exp ( ( + 273.15) ) − 1) − ) . (2) Then, we pick such that | − | is minimum, where = P/( × ). The voltage at this point, , represents the most accurate value that we can obtain. Then, we compute V = × and I = P/V. In this way, we update energy data of our virtual PV resource every 30 min.
Intelligence
Resource Management. PAS maintains meaningful metadata regarding each resource. For instance, each mini-submeter is managed with a load type, location, and the load's priority. A resource owner configures the metadata, and thus the data keeps reflecting physical characteristics of the plugged load and user contexts. PAS also develops a control strategy. It prioritizes energy resources based on resource type, current status, criticality, and capacity and then determines groups of resources and corresponding control sets. Upon receiving a DR signal, PAS finds a predefined strategy corresponding to the DR event and executes the control set. PAS provides a scheduling function through which a user preschedules the operations of energy resources. The dimmable LED lights are International Journal of Distributed Sensor Networks 7 now reserved to be ON only during office hours, while a user can still turn them on/off any time.
Forecasting. PAS performs both demand forecast and generation forecast on individual resource level. Among various forecasting models, PAS takes a persistence model for demand forecast. It estimates future demand fully based on historical data, which is highly effective in very short-term prediction, that is, one hour ahead. Due to its simplicity, it is widely used by public sectors. In particular, we extend a Customer Baseline Load (CBL) calculation that has been used by local utilities to calculate customers' curtailment on DR events. That is, given historical data over last 5 weeks, we put weights on the day and latest data and estimate the demand of the next hour.
For solar generation forecasting, PAS implements a hybrid model that integrates a persistence model and a statistical model to ensure high accuracy over both short-term and midterm predictions. As many articles recommend [33], hybrid models perform well in the time horizons ranging from 1 hour up to 36 hours, representing short-and mediumterm forecasts, respectively. More specifically, our model takes Auto-Regressive Moving Average (ARMA) [34] as a mathematical model. The advantages of using the ARMA model and the persistence model are their simplicity, costeffectiveness, and accuracy for timely forecasting. The purpose of the research is not to compete with a variety of solar forecasting tools that are academically or commercially available today, but to generate our own solar forecasting results using the simple, inexpensive, and effective methods, based on our campus environment, which can be extended for the laboratory-level microgrid.
The following equation describes the process to predict solar generation ( ) at time : where the first term, the Auto-Regressive (AR) part, includes the order of the AR process and the AR coefficient , and the second term, the Moving Average (MA) part, includes the order of the MA process, the MA coefficient , and the white noise ( ). Constructing our model consists of two phases, identifying the orders , and determining the coefficients , . In particular, we limit , < 10 to simplify the process. To realize the model, we use the Daniel-Chen model (1991) for order identification. Coefficients determination is calculated by applying the Yule-Walker relations for and the Newton-Raphson algorithms for . The two-phase realization of the ARMA model is implemented in the System Identification Toolbox, Matlab platform [35]. By inputting the data resulting from SAM into Matlab, the System Identification Toolbox is capable of constructing mathematical models, that is, finding the orders and coefficients in the equation above. As a result, the realized ARMA model is able to deliver time series output for forecasting. Power Price Forecast. Instead of using a Time-of-Use (ToU) tariff, PAS realizes a dynamic pricing mechanism by exploiting the wholesale market price provided by California Independent System Operator (CAISO) [38]. More specifically, PAS obtains three types of dynamic power price: Day Ahead Market (DAM); Hour-Ahead Scheduling Process (HASP); and Real-Time Market (RTM). The DAM provides an estimated power price of every hour for 24 hours ahead. The HASP and RTM provide an hour-ahead price estimation of every 12 minutes and 5 minutes, respectively. Our DR server also uses the same data as a unit power price for the DR service.
Energy Service Provider.
Taking the design issues in Section 3.3, we implement the customer energy service.
Open Building Information Exchange.
To follow the CDM approach, we take the oBIX specification [39], a standard to represent data in building information systems. Thus, external systems consume service data of the oBIX format.
The architecture of oBIX data representation consists of three principles: object model, contract, and XML syntax. First, the object model defines information of electrical and mechanical systems in a building as an object. Like the Object-Oriented Programming (OOP), each object is modeled by a set of value objects like "string" and "bool" and a set of op objects that defines an operation with input and output objects. Next, the object model allows inheritance to 8 International Journal of Distributed Sensor Networks <obj> <real name="voltage" val="120.2" unit="obix:units/volt"/> <real name="current" val="1.21" unit="obix:units/ampere"/> <real name="power" val="160.03" unit="obix:units/watt"/> </obj> Example 1 <obj is="obix:History" xmlns:psxml="http://myPAS/schema"> <int name="count" val="541"/> <abstime name="start" val="2013-04-12T00:00:00.000-07:00"/> <abstime name="end" val="2013-07-16T00:00:00.000-07:00"/> <op name="query" href="query" in="psxml:HistoryFilterEx" out="obix:HistoryQueryOut"/> </obj> Example 2 model complex energy data by means of a contract mechanism. Realized by is object, it establishes the classic "is a" relationship with various overriding rules. In this way, an object can represent not only a physical unit directly, but also a particular functionality as a collection of subobjects. Last, oBIX exploits XML to express its underlying object model. To this end, it specifies four syntaxes: each object type maps to one XML element; an object's children are mapped as children elements; the XML element name maps to the predefined primitive object type; and every other object is expressed as XML attribute.
With the principles, oBIX supports low level of abstraction. More precisely, the appropriate use of value and op objects determines the abstraction level of an energy object. This capability allows us to abstract energy services in a more flexible manner to satisfy various service requirements. Leveraging the oBIX specification, we implement data models at the lowest level. Example 1 represents a power object that contains power draw data of a smart plug.
In a similar way, Example 2 illustrates a History object, a historical archive of a point's value over time. The is attribute in the obj element indicates that it is extended from a standard oBIX object obix:History. The example also shows that the query operation to read history records takes an argument whose object type is psxml:HistoryFilterEx and returns history records in the object of obix:HistoryQueryOut.
Web
Services. oBIX designs energy data in an objectoriented way. An interaction model is, then, required to map the object into an energy service. To this end, we implement WS that exposes service interfaces in an interoperable manner. In terms of WS, each energy object is accessed via a URI and passed around as an oBIX document. PAS implements the access methods via the HTTP binding in the REST style that realizes a resource centric access using a set of verbs.
More specifically, three request types in oBIX are mapped into HTTP methods: Read, Write, and Invoke. Read request uses GET for any object having href attribute and returns object data as an oBIX document. Write is targeting at an object having writable attribute and is implemented with PUT. Invoke supports operations on an object by using the POST method. An oBIX document is passed to PAS as an input in both Write and Invoke. In Example 3 the oBIX document represents a smart plug object, "plug1." The ref tells the link to an associated subobject, "power." The example also shows that the resource is controllable via two ways: Write and Invoke.
Fine-Grained Access Control.
Each energy resource distinguishes its values and operations, and web services expose them by implementing three actions to the resource. In this way, they realize "fine granularity of data access and resource controls." For instance, a user accesses a Read interface to read energy usage of an air conditioning system, while he may turn it off via another interface. Fine granularity is a new challenge for the security mechanism, because different actions induce different levels of security violation.
To resolve the issue, we implement a fine-grained access control that executes authorization on the action level (i.e., Read, Write, and Invoke) [40]. It leverages the concept of a file system Access Control Entry. That is, each object maps to an attribute with three-digit privilege level. For instance, the object "plug1" is related to an attribute "plug1=111." The first digit indicates permission of Read, and the following two digits indicate the rights of Write and Invoke, respectively. Each user maintains his own set of attributes in his private key. When the user tries to access the object by presenting his key having an attribute of "plug1=100," he is permitted to read energy data, but not to change any values or control the energy resource.
Case Study: Microgrid
To evaluate the PAS testbed, we consider a microgrid scenario that we envision as a promising model of a customer facility that satisfies both the grid need and the customer need. From International Journal of Distributed Sensor Networks 9 <obj href="http://myPAS/points/zigbee/plug1/"> <str name="deviceName" val="BSPE12SOYZM43001"/> <str name="version" val="C2V5.57"/> <bool name="connectLoad" writable="true" val="true"/> <op name="controlLoad" href="control" in=" obix:WritePointIn" out="obix:Point"/> <ref name="power" href="power"/> </obj> Example 3 the grid perspective, microgrid is able to respond to the grid's signal, by reducing power consumption or by feeding power back to the grid in real time. From the customer perspective, it eventually pursues a Net-Zero Energy Building (NZEB) that can island the facility from a bulk power source by leveraging on-site generation. We run experiments to illustrate that PAS performs smart operations to achieve microgrid.
Fine-Grained Measurement and Control.
Connected to various submeters and equipment, PAS can easily produce fine-grained measurement and execute individual resource controls. Since this paper focuses on customer interoperation, we omit graphs on simple measurements. Instead, this paper highlights an interesting experiment measuring the power quality of a refrigerator. Figure 3 illustrates a result on a randomly selected day. The reactive power and the power factor are computed by the phase difference ( ) between input voltage (V) and current (A), and together they represent the amount of energy being wasted. For instance, the power factor of 0.7 means that the 100 W device requires 143 VA (Voltage-Ampere) apparent power (= 100/0.7) to operate, and 43 VA is wasted. The result shows the power factor of 0.66 (= cos ) on average and the accumulated reactive power of 714.8 VAR (VA Reactive), comparing to the consumed energy of 726.8 Wh over 24 hours. The value in our experiment is mainly attributed to inductance and capacitance in the electric circuit of the refrigerator and can be usually compensated by using a synchronous condenser. In this way, we can improve energy efficiency. A combination of inductive and resistive loads at the customer facility can be used in grid stabilization services such as frequency regulation and VAR compensation. An appropriate control on the loads can help balance active and reactive power in smart grid.
Knowledge Generation.
For the demand forecast experiment, we train our persistence model with energy usage data of a smart power strip. A couple of office appliances are plugged into the strip. And two students come and go to use it to power their laptops. So, the usage pattern is more like irregular as shown in Figure 4 as shown in Figure 4(b). The figure also draws bars representing real measurement in order to verify the accuracy of the forecasting. Our model forecasts energy demand of 90.2 Wh a day, but the strip consumes 97.8 Wh in real, which computes around 92.2% of accuracy. The highest accuracy appears at hour 13 with 97.6%, while the lowest one is 67.6% at hour 18. PAS performs a solar generation forecast using the ARMA model. The solar panel is fixed with an angle of 20 degrees, and its maximum capacity is 5 KW. The relationship among power, voltage, and current is as follows: maximum point voltage is 328 V; maximum point current is 10.58 Amp; open circuit voltage is 381 V; and short circuit current is 11.5 Amp. Table 1 shows two one-day ahead generation forecasts of the virtual PV solar panel in August 2012 and January 2013. During a winter day, the panel generates 13.2 KW of power for 10 hours with efficiency of 15.3%. In winter, it generates 18.7 KW of power for 12 hours with efficiency of 15.6%.
We measure the prediction accuracy through Mean Squared Error (MSE). For calculation, we take solar forecast data over one year from the ARMA model and obtain real measurement data from System Advisor Model (SAM) [41]. For comparison, we also implement a simple persistence model, run prediction, and compute MSE values. Figure 5 illustrates the error values [KW] over one year. Taking January as an example, the ARMA model shows better performance than the persistence model by 44.38%. The error in the ARMA model shows 0.0206 KW on average with max. of 0.028 KW in March and min. of 0.0121 KW in July. an Automated DR. A DR service manager acquires a power price forecast from a wholesale market in California. Figure 6, the event starts at 7 am and lasts until 2 pm. During the event, the power price changes: it becomes 2 times, 3 times, and 2 times more expensive than the unit price. This event information is generated one hour before the event, so it is an hour-ahead DR program. Once PAS receives a DR signal and notices that the price goes up, it performs a predefined DR strategy. In this experiment, we register one LED light to our strategy so that the price change is seen through the brightness of the LED. As the price goes up, the LED gets dimmed proportionally. Since the power draw of the LED is proportional to the brightness level, we can easily observe the change of energy usage during the DR event.
Utilizing Distributed Energy Resource.
Given the capabilities of both power generation and consumption, this experiment runs a simulation of a Battery Management System (BMS), showing that PAS manages an energy storage to achieve microgrid. To this end, we take our experimental data as shown in Table 2. Both the charging and discharging efficiency on the battery are 80%. That is, we lose 20% of power in the procedure of charging and discharging. The maximum SOC is 90%, that is, 22.5 KWh, while the minimum SOC is set to 20% (5 KWh). Whenever the panels generate power, PAS stores it to the battery. The energy loads draw power from the battery first. They use another source of power (e.g., from the grid) when the battery uses up the power stored. Figure 7 illustrates the experimental results over 7 days (168 hours) in 2013. The square mark curve (blue) represents aggregated power generation by two solar panels: max. of 6,066.8 W and min. of 0 W. The star mark curve (red) represents aggregated power consumption: max. of 2,596.9 W and min. of 351.3 W. The solid line (black) represents the SOC of the battery that both the generation and consumption influence directly. Since the generation is usually greater than the consumption, the SOC often reaches to the maximum of 22.5 KWh. The panels generate 67.3 KWh of surplus power for 30 hours in total. On the other hand, the demand exceeds the power capacity of generation and storage from hour 100 to hour 103. This is mainly attributed to low power generation on day 4, which exemplifies the unpredictability of renewables.
Conclusion
To achieve the ultimate goal of smart grid, an EMS in the customer facility must satisfy both the customer need and the grid need. However, the existing EMS only considers an efficient energy management and does not communicate with other smart grid entities, which fails to achieve smart grid interoperation. To address the issue, this paper examined two categories of energy services that the EMS must support for customer interoperation. We designed the required EMS functions under four aspects of service data, service interface, intelligence, and security and then proposed a new EMS model, PAS. To illustrate the feasibility of PAS, we implemented and deployed a testbed in our campus. The testbed consists of a number of energy resources connected to PAS; data processing and service generation component; communication interface for service provision; and data protection mechanism. We ran interesting experiments on top of the testbed and demonstrated our measurements and findings.
In the future, we will investigate further advancement of the EMS as a microgrid platform that can support an energy transaction.
Conflict of Interests
The author declares that there is no conflict of interests regarding the publication of this paper. | 9,912 | 2016-01-01T00:00:00.000 | [
"Engineering",
"Environmental Science"
] |
Numerical Investigations for the Two-Phase Flow Structures and Chemical Reactions within a Tray Flue Gas Desulfurization Tower by Porous Media Model
: The computational cost of the full-scale flue gas desulfurization (FGD) tower with perfo-rated sieve trays is too high, considering the enormous scale ratio between the perforated hole at the sieve tray and the relevant size of the full-scale tower. As a result, the porous media model is used to replace the complex perforated structure at the sieve tray in this study, which has been validated for the measured data for both the small-and full-scale FGD tower. Under a lower inlet gas volume flow rate, the simulation result of the four-tray tower indicates that the uprising gas flow of high SO 2 mass fraction can move along the wall of the tower. This region lacks two-phase mixing and, hence, its desulfurization efficiency is similar to that of empty and one-tray towers under the same flow conditions. However, when the gas volume flow rate increases, the liquid column becomes larger because of the stronger inertial of the uprising gas flow. In this situation, the implementation of the sieve tray suppresses the deflection of liquid flow and provides a better mixing within sieve trays, leading to a noticeable increase in desulfurization efficiency. This study provides insightful information for the design guideline for the relevant industries.
Introduction
In order to reduce the environmental acidification of sulfur dioxide (SO 2 ) from the fossil fuel power plants or similar facilities, a flue gas desulfurization (FGD) tower is used to absorb the SO 2 component from the exhaust gas before it enters the environments [1,2].Downward alkaline solution injected from spray nozzles mixes with the upward exhaust gas within the tower [2][3][4].During the desulfurization process, the gas SO 2 dissolves into alkaline solution in aqueous phase to form the sulfurous acid (SO 2 •H 2 O), and the ensuing dissociation produces sulfurous by-products, such as bisulfite (HSO 3 − ) and sulfite (SO 3 2− ).The sulfurous species lead to the acidification of the alkaline solution.A higher pH value of the solution from the nozzles can neutralize the severity of acidification and absorb more SO 2 before its pH value eventually drops [5].
The setups of the nozzles, guide vane, direction of inlet exhaust gas can have tremendous impacts on the two-phase mixing, which can significantly influence the SO 2 removal efficiency.The ideal mathematical model that simplifies the flow conditions with uniform velocity [1,[6][7][8] could hardly capture these complex geometric effects within a facility, and hence, the Computational Fluid Dynamics (CFD) technique can provide more insightful information and benefit to optimal design.CFD has already been applied widely to solve the multiphase flow dynamics within the FGD tower [4,5,[9][10][11][12][13][14][15][16] and other facilities or processes with relevant fields, such as scrubbers of CO 2 emissions [11] and fine particle dust concentrators [17].Depending on its approach to the interactions between liquid-solid and gas phase, the multiphase flow model can be categorized as an Eulerian-Lagrangian model [9,[11][12][13][14] and an Eulerian-Eulerian model [5,10].
The liquid droplets are usually regarded as a dispersed phase in the Eulerian-Lagrangian model with tracking of a large number of particles [18].The droplet is usually regarded as a parcel which contains the same properties [11,12,[19][20][21].The drag coefficient of the particle is determined by the relative Reynolds number, which is based on the slip velocity between the liquid and solid phase.The drag force of a particle usually feeds back to the continuous phase as two-way coupling [9,14].The chemical reaction is modelled by dual-film theory, Henry's Law [11][12][13], and Chilton-Colburn analogy [8].Under the aforementioned framework, Marocco and Inzoli [9] have simulated the hydrodynamics and chemical reaction for a dispersed liquid droplet within an open spray tower.Eight dissolved species from gas to liquid phase are considered during the desulfurization process, and the deposition and splashing of liquid particle as they hit the wall of tower are handled by an empirical dropletwall interaction model.Qu et al. [11] evaluated the effect of droplet diameter.A finer diameter benefits the mass transfer but could reduce the uniformity of the liquid-gas flow, and vice versa for a particle of larger diameter.An optimal diameter such as 200 µm was selected based on their simulations.Qu et al. [12] further investigated the flow structures at different region of the tower, and the two-phase mixing is stronger in the middle region that dominates the desulfurization process.The evaporation occurs at the inlet region of the gas phase, which only has minor influence on the desulfurization performance.Xu et al. [13] investigated the CO 2 removal efficiency by NH 3 solution and the effect of the layout of the nozzle spray.A fraction of upward gas can be allowed to flow downward by the entrainment effect of the downward spraying liquid.An upward spray at a lower altitude can enhance 7% removal efficiency from a downward spray at a higher altitude because the former offers a longer residence time and increases the droplet volume fraction.
As for the Eulerian-Eulerian model, it treats both phases as continuous flow.As the upward gas flow and downward liquid flow have a significant slip velocity, two sets of Navier-Stokes equations are required [5,10].The two-phase mass and force interaction is modelled similar to that in the Eulerian-Lagrangian model, except all the source terms activate in the continuous phase without summation from the dispersed particles.Gomez et al. [5] used an Eulerian-Eulerian model to simulate the desulfurization process by limestone and oxidation for the production of gypsum in the bottom tank.There are six additional species transport equations to tackle the complex chemical reactions.Depending on the complexity of the geometry of the FGD tower, it can be impractical to solve all the chemical species, which could cause convergence issues and increase computational cost.As a result, instead of solving every transport equation, a prepared chemical database that couples with the flow solver [22] was developed in our previous study [10], which significantly reduces the required number of transport equations.
The implementation of a sieve tray can enhance the uniformity of the flow, which is usually favorable to the operation conditions [5,12,23].The tray-equipped FGD tower is found to increase the SO 2 removal efficiency by 7% compared to the empty tower [24].The detailed two-phase mixing of a small-scale FGD tower implemented by sieve tray was already simulated in our previous study [5].The number of the perforated holes at each tray is 300 for this small-scale FGD tower.However, this number exceeds 200,000 for the large-scale FGD tower [4]).Considering the extremely large ratio of hole at tray to height and width of the tower, the computation cost can be very large for simulation of the fullscale tower.As a result, the porous media model [5,25] that replaces the perforated details at the sieve tray is used in this study.The porous media model regards the tray region as a flow domain and compensates for the pressure drop by a momentum sink term.The porous media model has already been found to be able to replace the detailed structures for flow past packed beds, filter papers, and perforated plates [24][25][26][27].The empirical coefficients of the porous media model can be tabulated from Idelchik [28], and the velocity and pressure for a gas flow past tray tower by using porous media model are very consistent with that with a real perforated structure [4], saving significant computational time from the grid resolution near the trays.
In this study, the two-phase chemical flow model is used together with the porous media model, which further validates the capability of the porous media model to replace the complex structures of the FGD tower.It avoids having a tremendous number of grid points near the regions of sieve trays.The required computational time speeds up by 13 times, and hence, it becomes easier to conduct different designs for the small-scale tower, such as the influences of sieve tray and inlet gas flow rate on the two-phase mixing and the ensuing desulfurization efficiency within the FGD tower.The model replacement can also be extended to the full-scale tower simulations of more practicality.This study provides a reliable and fast computational framework and insightful information for the design guidelines for the FGD tower in the relevant industries.
Computational Models
The computational framework is based on an Eulerian-Eulerian two-phase model with Navier-Stokes equations, porous media model, standard k-ε turbulence closure, and species equations.This Eulerian-Eulerian model, chemical reactions, and porous media model are discussed in Sections 2.1-2.3.
Eulerian-Eulerian Two-Phase Flow Model
The Favre-Averaged Navier-Stokes model [5,18,22] has been used to account for the density variation between liquid and gas phases, and the Eulerian-Eulerian two-phase model [5,10] is utilized to handle the slip velocity.Two sets of Navier-Stokes equations are solved in this study.For each phase, the continuity in Equation ( 1) and Navier-Stokes equations in Equation ( 2) based on the Eulerian-Eulerian model are presented as follows: In Equations ( 1) and ( 2), the gas and liquid phases are represented by the subscripts g and l, respectively, and the subscripts i and j are the Einstein notations.The symbol u represents velocity, ρ is density, P is pressure, α is volume fraction, τ is laminar stress, τ R is Reynolds stress, G is gravity (9.8 m/s 2 in -y direction), K i is the exchange coefficient, and M is a momentum sink term for the porous media model.For simulation by real perforated structure, the porous media model is not activated.
As shown in Figure 1, air and SO 2 gas are the components of gas phase, and the sulfurous slurry and magnesium hydroxide (Mg(OH) 2 ) are the components of liquid phase.The Mg(OH) 2 solution removes the SO 2 gas phase by absorption, and new chemical byproduct, namely sulfurous slurry, is generated in the liquid phase.The SO 2 gas is less than 130 ppm, and hence it is reasonable to assume that the corresponding phase change in Figure 1 does not really affect the continuity and momentum equations.Therefore, the source term related to chemical reactions is absent in Equations ( 1) and (2) [1,5,10,29] Figure 1 does not really affect the continuity and momentum equations.Therefore, the source term related to chemical reactions is absent in Equations ( 1) and ( 2) [1,5,10,29] Figure 1.This schematic shows the components in gas and liquid phases during the desulfurization process.
The symbols ρg and ρl in Equation (3) are the mixture densities for the gas and liquid phases, respectively: The value of ρg can be calculated from the mass fractions of SO2 and air within the gas phase (YSO2 and Yair) and their densities (ρSO2 = 2.384 kg/m 3 and ρair = 1.066 kg/m 3 ).Similarly, ρl is obtained from the mass fractions of slurry and Mg(OH)2 solution within the liquid phase (Yslurry and YM) and the corresponding densities (ρSlurry = 998.2kg/m 3 and ρM = 983 kg/m 3 ).
The laminar stress τ in Equation ( 2) is based on the Newtonian assumption, which is the product of the velocity gradient and the mixture viscosity μ.The turbulent stress τ R in Equation ( 2) is computed by standard k-ε turbulence model [5,22,30,31].The turbulent viscosity stems from the empirical relations between turbulence kinetic energy k and turbulent dissipation rate ε.The turbulent stress τ R , which utilizes the Boussinesq's approximation, is also the product of the velocity gradient and the turbulent viscosity.
In Eulerian-Eulerian model, Ki(ug,I − ul,i) in Equation ( 2) is the i-direction interaction force from liquid to gas phase.As a result, the interaction force from gas to liquid in idirection is Ki(ul,I − ug,i), which switches the direction but has the same magnitude.Ki is the exchange coefficient of i-direction, and its value can be determined from the empirical relations as follows [6,22,32]: , , m a x ( , 0 .4 ) 24 18 In Equation (4), f is the drag function, Cd is the drag coefficient, τr is the relaxation time, Rer is the Reynolds number based on two-phase slip velocity and the liquid droplet diameters dl.The size of droplet diameter is chosen as 500 μm in this study from preliminary results, which is also similar to the selection of ref. [11,12].
After chemical reactions, the SO2 within gas phase is reduced, and the slurry is produced in liquid phase, as indicated in Figure 1.This desulfurization process is controlled by Equation ( 5) with source term S, which will be discussed in the next subsection.The symbols ρ g and ρ l in Equation (3) are the mixture densities for the gas and liquid phases, respectively: The value of ρ g can be calculated from the mass fractions of SO 2 and air within the gas phase (Y SO2 and Y air ) and their densities (ρ SO2 = 2.384 kg/m 3 and ρ air = 1.066 kg/m 3 ).Similarly, ρ l is obtained from the mass fractions of slurry and Mg(OH) 2 solution within the liquid phase (Y slurry and Y M ) and the corresponding densities (ρ Slurry = 998.2kg/m 3 and ρ M = 983 kg/m 3 ).
The laminar stress τ in Equation ( 2) is based on the Newtonian assumption, which is the product of the velocity gradient and the mixture viscosity µ.The turbulent stress τ R in Equation ( 2) is computed by standard k-ε turbulence model [5,22,30,31].The turbulent viscosity stems from the empirical relations between turbulence kinetic energy k and turbulent dissipation rate ε.The turbulent stress τ R , which utilizes the Boussinesq's approximation, is also the product of the velocity gradient and the turbulent viscosity.
In Eulerian-Eulerian model, K i (u g,I − u l,i ) in Equation ( 2) is the i-direction interaction force from liquid to gas phase.As a result, the interaction force from gas to liquid in i-direction is K i (u l,I − u g,i ), which switches the direction but has the same magnitude.K i is the exchange coefficient of i-direction, and its value can be determined from the empirical relations as follows [6,22,32]: In Equation ( 4), f is the drag function, C d is the drag coefficient, τ r is the relaxation time, Re r is the Reynolds number based on two-phase slip velocity and the liquid droplet diameters d l .The size of droplet diameter is chosen as 500 µm in this study from preliminary results, which is also similar to the selection of ref. [11,12].
After chemical reactions, the SO 2 within gas phase is reduced, and the slurry is produced in liquid phase, as indicated in Figure 1.This desulfurization process is controlled by Equation ( 5) with source term S, which will be discussed in the next subsection.
An isothermal temperature around 58 • C is measured within the FGD tower by China Steel Cooperation.Therefore, the energy equation is not activated in this study.Similar situations can be seen in ref. [1,5,10,20,29].
Chemical Mechanisms
An isothermal temperature around 58 °C is measured within the FGD tower by China Steel Cooperation.Therefore, the energy equation is not activated in this study.Similar situations can be seen in ref. [1,5,10,20,29].
Chemical Mechanisms
Figure 2 illustrates the chemical reactions during the desulfurization process.In the upper left corner of Figure 2, it indicates that the SO2 gas dissolves into liquid phase and hydrates to form SO2•H2O.One proton is lost and HSO3 − is generated because of the dissociation, and HSO3 − has subsequent dissociation to lose another proton and form SO3 −2 : The subscript "(aq)" in Equation ( 6) denotes the aqueous phase.The Mg(OH)2 solution is a strong electrolyte, which can dissociate completely to release magnesium ions (Mg2 + ) and OH − ions.The neutralization takes place between OH − ions from Mg(OH)2 and H + ion from Equation (6): The nonlinear algebraic equations for Equations ( 6) and ( 7) can be listed as below: The subscript "(aq)" in Equation ( 6) denotes the aqueous phase.The Mg(OH) 2 solution is a strong electrolyte, which can dissociate completely to release magnesium ions (Mg 2 + ) and OH − ions.The neutralization takes place between OH − ions from Mg(OH) 2 and H + ion from Equation ( 6): The nonlinear algebraic equations for Equations ( 6) and ( 7) can be listed as below: In Equation ( 8), K 1 , K 2 , and K 3 are the equilibrium constants for the corresponding reactions based on 58 • C [5].The symbol C H+ l corresponds to the molar concentration (mole/m 3 ) of H + in the liquid phase l, which can also be applied to the symbols for other species.The sulfurous species, such as HSO 3 − , SO 3 2− , and SO 2 in the liquid phase, are usually added together and regarded as a single variable, as denoted byC l Slurry [5,9]: Moreover, Equation (10) is the charge balance: .Additionally, the value of pH from the nozzles has always been kept as 7.7 in the facility.The value of C Mg2+ l can be calculated: In Equation (11), 10 −7.7 is the concentration of H + ion from the inlet of nozzle sprays, and K 3 is equilibrium constant for neutralization as shown in Equations ( 7) and (8).C Mg2+ l is calculated as 3.08 × 10 −7 mole/m 3 from Equation (13).Since Mg 2+ does not participate in the chemical reactions, this value is regarded as constant within the flow domain and only has influence on the balance of the electric charge in Equation (10).
Therefore, from Equations ( 8) to (10), there are six unknowns, , and C slurry l , within five equations.The nonlinear algebraic sets of Equations ( 8)-( 10) can be solved numerically once C l Slurry is known.This value can be calculated in Equation ( 12): The molecular weight of the slurry phase W slurry (kg/mole) is 80 in Equation ( 12) [5,10,33].Similar to Equation (12), the molar concentration of the SO 2 gas, C g SO2 , can be obtained as the molecular weight of the SO 2 W SO2 = 64: The chemical source term S in Equation ( 5) is modelled as follows [5,9,10]: In Equation ( 14), A int is the interfacial contact area per unit volume, which is defined as 6α l /d l .The Henry's law constant H SO2 is 82.The mass transfer coefficient k g SO2 for SO 2 in the gas is 4 × 10 −5 m/s while that in liquid phase (k l SO2 ) is 3.4 × 10 −4 m/s.The enhancement factor E SO2 is 1.3 in this study.All the chemical relevant constants above are taken based on the values of 58 • C [5].
In this study, the flow solver we use is ANSYS Fluent [22].The computational procedures that couple the flow solver and chemical reactions are described as follows: Step 1: Solve the Eulerian-Eulerian two-phase flow models from Equations (1) to (5) at the current time step, while the chemical source term S in Equation ( 14) is obtained from the previous time step.
Step 2: Obtain C l Slurry from Equation ( 12) and C g SO2 from Equation (13) according to the flow variables of the current time step.
Step 3: Input C l Slurry from Equation ( 12) into a prepared chemical database by UDF.This database is written in Matlab to solve the sets from Equations ( 8) to (10).With C l
Slurry
given from flow solver, this chemical database can solve the remaining unknowns and then output the value of C l SO2 to the flow solver.
Step 4: Insert C g SO2 from step 2 and C l SO2 from step 3 into Equation ( 14) to calculate a new chemical source term S for the next time step.
Step 5: Update the time step and repeat step 1 to 4 if necessary.
The couple between flow solver and chemical database can avoid too many additional transport equations for species introduced in Figure 2, which can save computational time and reduce the ensuing numerical difficulties [5].
Porous Media Model
A momentum sink term M, as shown in Equation ( 2), is activated in the designated porous media region to compensate the pressure loss because the detailed solid boundary is not resolved [4,24,25]: In Equation ( 15), the subscript q stands for either gas phase g or liquid phase l, and i represents the direction.Additionally, C represents the inertial loss coefficient, and t h is the thickness of the porous media region.The value of inertial loss coefficient in Equation ( 15) is usually calculated by an empirical equation based on a large number of experimental data [25][26][27][28].For example, Idelchik [28] provided the inertial loss coefficient C: In Equation ( 16), the hole diameter is D h , and f h is the porosity and defined as the ratio of the total hole area to total area of the plate.The coefficients ξ 1 , ξ 3 , ξ 4 , and λ in Equation ( 16) can be tabulated based on the porosity f h , hole Reynolds number Re h , and t h /D h, which is the ratio of the tray thickness to the hole diameter.As for Re h , it is defined as ρVD h /f h µ where ρ, V, and µ correspond to values from gas or liquid phases.
Geometries of the Small-and Full-Scale FGD
A small-scale pilot FGD tower was used for validation first.Figure 3a illustrates the details of the facility.All the spray nozzles and four sieve trays are set up in the right tower while the left tower is closed in this series of experiments.The height of the tower is 2.4 m, and the diameter is 0.6 m.The spacing between each tray is 0.4 m, and the spray system is placed above the highest tray by 0.2 m as shown in Figure 3a,b.The liquid Mg(OH) 2 solution is injected downward without spray angle by 33 nozzles, as arranged in Figure 3b from the top view.The array of nozzles has a radial distance of 0.05 m, and the diameter of each nozzle is 0.01 m.The pH value of liquid Mg(OH) 2 solution is 7.7 at the exit of nozzle.The gas with SO 2 enters the domain from a pipeline of a diameter 0.4 m in the rightmost face, as shown in Figure 3a.With a series of chemical reactions shown in Section 2.2, the desulfurization process reduces the concentration of SO 2 , while the sulfurous process increases the acidity of the slurry.The liquid slurry then is discharged through the outlet under the bottom tank.Furthermore, Figure 3b highlights the sampling locations to measure the SO 2 removal efficiency.Assuming point P in Figure 3b is the origin in the x-y plane, the locations of the sampling points from GT1 to GT3 are (−0.2,0.93), (−0.2, 1.345), and (−0.263, 2.16), respectively, with meter as the unit.
Figure 4 illustrates the full-scale FGD tower.The height and diameter of the tower are 23 and 7 m.The tower has three sieve trays, and the gas is transported by a pipeline with diameter of 3.6 m and it leaves the domain from the outlet with a diameter of 3.6 m.There are two sets of nozzle arrays.One is above the highest tray by 1 m with 192 nozzles, and the other is below the lowest tray by 1.2 m with 29 nozzles.Both sets of nozzles have spray angle of 65 • and diameter of 0.05 m.The experiments for both small-and large-scale towers have already been conducted by China Steel Cooperation [34] with measured data for desulfurization and pressure at designated sampling points.
in the rightmost face, as shown in Figure 3a.With a series of chemical reactions shown in Section 2.2, the desulfurization process reduces the concentration of SO2, while the sulfurous process increases the acidity of the slurry.The liquid slurry then is discharged through the outlet under the bottom tank.Furthermore, Figure 3b highlights the sampling locations to measure the SO2 removal efficiency.Assuming point P in Figure 3b is the origin in the x-y plane, the locations of the sampling points from GT1 to GT3 are (−0.2,0.93), (−0.2, 1.345), and (−0.263, 2.16), respectively, with meter as the unit.
Geometries of the Perforated Sieve Trays of the Small-and Full-Scale FGD
The perforated hole structure on a sieve tray is illustrated in Figure 5 from the top view (left) and side view (right).For the small-scale tower, the important parameters required to determine the inertial loss coefficient C in Equation ( 16) are listed in Table 1 for different cases and phases.The hole diameter Dh is 20 mm, the tray thickness th is 15 mm, and the pitch Ph is 28.8 mm.With the aligned layout shown in Figure 5, the hole diameter Dh and pitch Ph can determine the porosity fh as 34.3%.The hole Reynolds number for gas phase Reh,g = ρgVgDh/fhμg is 5130, where ρg and μg are the density and dynamic viscosity of the gas phase, and Vg (1.4 m/s) is the cross-sectional average velocity of the gas phase within the tower.L/G is defined as the mass flow rate ratio of the total liquid to the exhaust gas.There are two setups for the small-scale tower, and this value varies as 4.9 or 3.2 for the small-scale tower, namely, Case 1 and 2. For Case 1, the velocity of the liquid solution is 0.766 m/s from each spray nozzle with a diameter of 0.01 m, and the velocity of Case 2 is 0.511 m/s.Similar to gas phase, the hole Reynolds number for liquid phase Reh,l is 860 and 570, respectively.
Geometries of the Perforated Sieve Trays of the Small-and Full-Scale FGD
The perforated hole structure on a sieve tray is illustrated in Figure 5 from the top view (left) and side view (right).For the small-scale tower, the important parameters required to determine the inertial loss coefficient C in Equation ( 16) are listed in Table 1 for different cases and phases.The hole diameter D h is 20 mm, the tray thickness t h is 15 mm, and the pitch P h is 28.8 mm.With the aligned layout shown in Figure 5, the hole diameter D h and pitch P h can determine the porosity f h as 34.3%.The hole Reynolds number for gas phase Re h,g = ρ g V g D h /f h µ g is 5130, where ρ g and µ g are the density and dynamic viscosity of the gas phase, and V g (1.4 m/s) is the cross-sectional average velocity of the gas phase within the tower.L/G is defined as the mass flow rate ratio of the total liquid to the exhaust gas.There are two setups for the small-scale tower, and this value varies as 4.9 or 3.2 for the small-scale tower, namely, Case 1 and 2. For Case 1, the velocity of the liquid solution is 0.766 m/s from each spray nozzle with a diameter of 0.01 m, and the velocity of Case 2 is 0.511 m/s.Similar to gas phase, the hole Reynolds number for liquid phase Re h,l is 860 and 570, respectively.
different cases and phases.The hole diameter Dh is 20 mm, the tray thickness th is 15 mm, and the pitch Ph is 28.8 mm.With the aligned layout shown in Figure 5, the hole diameter Dh and pitch Ph can determine the porosity fh as 34.3%.The hole Reynolds number for gas phase Reh,g = ρgVgDh/fhμg is 5130, where ρg and μg are the density and dynamic viscosity of the gas phase, and Vg (1.4 m/s) is the cross-sectional average velocity of the gas phase within the tower.L/G is defined as the mass flow rate ratio of the total liquid to the exhaust gas.There are two setups for the small-scale tower, and this value varies as 4.9 or 3.2 for the small-scale tower, namely, Case 1 and 2. For Case 1, the velocity of the liquid solution is 0.766 m/s from each spray nozzle with a diameter of 0.01 m, and the velocity of Case 2 is 0.511 m/s.Similar to gas phase, the hole Reynolds number for liquid phase Reh,l is 860 and 570, respectively.Each sieve tray has 308 holes for the small-scale tower.By using the porous media model in Equation ( 16), these aforementioned parameters are used to determine the inertial loss coefficient to replace the detailed perforated structures.As for the tray of full-scale tower, its perforated hole is 8 mm.The porosity f h is 27.7%, t h /D h =0.75, the hole Reynolds numbers Re h for liquid and gas phase are 5500 and 940, respectively, which are all similar to those of Case 1 in the small-scale tower.As for the number of holes, there are 200,000 holes at each sieve tray for the large-scale tower.
Working Conditions of the Small-and Full-Scale FGD
For the small-scale tower, the inlet gas flow Reynolds number, Re inle , is around 8 × 10 4 according to the gas mixture density (1.067 kg/m 3 ), velocity (3.2 m/s), and the diameter of the pipeline (0.4 m) at the inlet as shown in Figure 3a.With different L/G values, the inlet Reynolds numbers for the liquid are about 1.6 × 10 4 for Case 1 and 1.07 × 10 4 for Case 2. These flow conditions of the gas and liquid inlets of the small-scale tower are highlighted in Tables 2 and 3.Moreover, the concentration of SO 2 gas at inlet is 152 and 145 ppm, respectively.As for the large-scale tower, the Reynolds number at inlets is 6.35 × 10 5 , 1.35 × 10 5 , and 1.44 × 10 5 for gas at inlet, liquid at the lower, and higher spray array, respectively.The corresponding value of L/G is 5, and the SO 2 gas at inlet is 359 ppm.
Boundary Conditions, Grid Layouts, and Validations for the Small-Scale Tower
Figure 6a illustrates the grid layouts by cut-cell technique [5,22] for the small-scale tower at the symmetric plane at z = 0. Its computational domain is discretized by hexahedral elements with finer resolution around the spray nozzles and sieve trays in Figure 6b.The velocities at inlets for both gas and liquid are prescribed according to Tables 2 and 3, while the pressure is extrapolated.The turbulent quantities are specified as eddy-to-laminar viscosity ratio equal to 1000.At the gas inlet, the volume fraction of gas phase α g is 1.
For Case 1, the concentration of SO 2 gas Y SO2 at this inlet is 3.42 × 10 −4 , converting from the ppm value (152 ppm) and inlet densities.At the liquid inlets, the liquid volume fraction α l and mass fraction Y M for the Mg(OH) 2 are all set up as 1.For the simulation by real perforated sieve tray, the no-slip boundary condition is applied to the surface of the sieve tray and wall of the tower.The turbulent quantity k is zero, while ε is calculated by wall function.In this study, the y+ value for the first grid away from the no-slip wall is around 60.As for the porous media model, the tray region becomes an interior flow domain without solid boundary and retains the same grid resolution as that outside the tray.
The nonlinear sets of equations from Equations (1)~( 5) are solved by Phase Coupled SIMPLE (PC-SIMPLE) [22,35], which is based on an extension of the SIMPLE algorithm to multiphase flows.The Second Order Upwind scheme (SOU) [22,30] is used for the convective nonlinear terms, and the central difference is applied to diffusion terms.The time step size for small-scale tower is 0.01 s.Although the time-dependent computation is considered, the solution finally reaches a steady state for a small-scale tower.
After a series of grid dependency test, the mesh with number of grid points as 1.2 × The pressure at gas outlet is maintained as 1 atm, while remaining variables are extrapolated.The liquid outlet at the bottom surface of Figure 3a is set up as 1.1 atm, in order to maintain the height of the free surface of the bottom tank by 1 m.
For the simulation by real perforated sieve tray, the no-slip boundary condition is applied to the surface of the sieve tray and wall of the tower.The turbulent quantity k is zero, while ε is calculated by wall function.In this study, the y+ value for the first grid away from the no-slip wall is around 60.As for the porous media model, the tray region becomes an interior flow domain without solid boundary and retains the same grid resolution as that outside the tray.
The nonlinear sets of equations from Equations ( 1)-( 5) are solved by Phase Coupled SIMPLE (PC-SIMPLE) [22,35], which is based on an extension of the SIMPLE algorithm to multiphase flows.The Second Order Upwind scheme (SOU) [22,30] is used for the convective nonlinear terms, and the central difference is applied to diffusion terms.The time step size for small-scale tower is 0.01 s.Although the time-dependent computation is considered, the solution finally reaches a steady state for a small-scale tower.
After a series of grid dependency test, the mesh with number of grid points as 1.2 × 10 6 is chosen for the further simulations.Tables 4 and 5 list the simulated SO 2 removal efficiencies at three sampling locations for Case 1 (L/G = 4.9) and Case 2 (L/G = 3.2) by real perforated structures.The measured values [36] have also been compared, and the results are consistent with most of discrepancies less than 10%.After validation in a small-scale tower, this subsection discusses the feasibility of using porous media model for saving computational cost.Case 1 and Case A have the same flow conditions, except the latter applies the porous media model, and the same situation applies to Case 2 and Case B. Different setups for number of sieve trays and inlet gas flow rate will further be discussed by using the porous media model in Section 4. Table 6 presents the case name, flow condition, parameters in porous media model, and SO 2 removal efficiency at outlet for all the different cases.For the small-scale tower of L/G = 4.9, the hole Reynolds number for gas phase Re h, is 5130, porosity f h is 34.3%, and t h /D h is 0.75 for Case 1, as shown in Table 1.These values are used to tabulate in ref. [28], and obtain values of ξ 1 = 0.123, ξ 3 = 0.658, ξ 4 = 0.525, and λ = 0.037 in Equation (16).The inertial loss coefficient in Equation (2) for gas flow in y-direction C g,y is 11.33 after inserting these values into Equation (16).As for liquid phase in Table 1, the hole Reynolds number for liquid phase Re h,l is 860 for Case 1, corresponding to ξ 1 = 0.25, ξ 3 = 0.513, ξ 4 = 0.525, and λ = 0.074.From Equation ( 16), the inertial loss coefficient for liquid phase C l,y is calculated as 10.31.
As the main direction of flow within the perforated hole aligns with the y-direction, the inertial loss coefficients of these two phases in the xand z-directions are amplified by 100 times from their corresponding values in the y-direction [4,22].From our preliminary results, based on the aforementioned setups (C g,y = 11.33 and C l,y = 10.31) for porous media model, the pressure drop within the right tower of the small-scale tower as L/G = 4.9 is 86.7 Pa, while that computed by real perforated structures (Case 1) is 104.6 Pa.This deficiency by using porous media model is caused by Equation ( 16) being calibrated based on experiments with single-phase flow [4,25,28] and the two-phase flows within the tower have opposite direction.
As a result, it is reasonable to increase the inertial loss coefficients for the two-phase counterflow.Considering that gas flow occupies most of the domain within the tower, only the value of gas flow C g,y increases by 33% from 11.3 to 15 after a series of tests.The numerical results of Case A by C g,y = 15 and C l,y = 10.31 yields a pressure drop 103.5 Pa, which is exactly the same as that by Case 1 with perforated structures.As for Case B with a lower L/G, the information in Table 1 yields original values of C g,y = 11.3 and C l,y = 10.48 from Equation (16).Similar to that for Case A, C g,y = 11.3 is increased by 33% to C g,y = 15 while C l,y is maintained as 10.48.The corresponding pressure drop is 96.8 Pa, which is very consistent to 95.5 Pa of Case 2 by real perforated structures.Therefore, these treatments that increase C g,y by 33% and retain the original C l,y are applied to all the simulations by porous media model, as shown in Table 6.
Figure 7 displays the liquid volume fraction (α l ), streamlines of gas phase, and the mass fraction of SO 2 (Y SO2 ) within the right tower of small-scale FGD facility for Case 1 and Case A at L/G = 4.9.Both cases of perforated structures and porous media model have similar flow patterns.Figure 8 presents the liquid flow velocity (u l,y ) and gas flow velocity (u g,y ) in the y-direction along the centerline of the small-scale tower for Case 1 and Case A, and the velocity is normalized by average gas velocity V g (1.4 m/s).The dashed lines display the positions of sieve trays.It is clear that the downward liquid flow in Figure 8a accelerates between each sieve tray and decelerates as the liquid flow approaches the sieve tray.
For Case A, the porous media model can also obtain a strong deceleration as that in Case 1 with real perforated structures.Since the sieve tray is not regarded as a no-slip boundary condition in the porous media model, the liquid flow cannot decelerate to zero velocity when it impacts the porous-based sieve tray, as shown in Figure 8a.Moreover, the sudden reduction of the cross section within the real perforated sieve tray leads to a very strong acceleration.However, the porous media model regards the sieve tray as an interior flow domain, and hence, the strong acceleration cannot be displayed by Case A in Figure 8a.As a result, the liquid velocity between two sieve trays is slower by using the porous media model in Figure 8a.Considering the extremely large liquid-to-gas density ratio, the gas flow inside the liquid column travels downward with the liquid flow instead of rising, which is illustrated in Figure 8b for gas flow at the centerline of the tower for both cases.Since the liquid flow of Case 1 is faster, as mentioned previously in Figure 8a, it can accelerate the gas flow into a faster velocity.Correspondingly, the gas flow velocity between two sieve trays is made slower by using the porous media model in Figure 8b.similar flow patterns.Figure 8 presents the liquid flow velocity (ul,y) and gas flow velocity (ug,y) in the y-direction along the centerline of the small-scale tower for Case 1 and Case A, and the velocity is normalized by average gas velocity Vg (1.4 m/s).The dashed lines display the positions of sieve trays.It is clear that the downward liquid flow in Figure 8a accelerates between each sieve tray and decelerates as the liquid flow approaches the sieve tray.
Case 1
Case A Case 1 Case A Case 1 Case A For Case A, the porous media model can also obtain a strong deceleration as that in Case 1 with real perforated structures.Since the sieve tray is not regarded as a no-slip boundary condition in the porous media model, the liquid flow cannot decelerate to zero velocity when it impacts the porous-based sieve tray, as shown in Figure 8a.Moreover, the sudden reduction of the cross section within the real perforated sieve tray leads to a very strong acceleration.However, the porous media model regards the sieve tray as an interior flow domain, and hence, the strong acceleration cannot be displayed by Case A in Figure 8a.As a result, the liquid velocity between two sieve trays is slower by using the porous media model in Figure 8a.Considering the extremely large liquid-to-gas density ratio, the gas flow inside the liquid column travels downward with the liquid flow instead of rising, which is illustrated in Figure 8b for gas flow at the centerline of the tower for both cases.Since the liquid flow of Case 1 is faster, as mentioned previously in Figure 8a, it can accelerate the gas flow into a faster velocity.Correspondingly, the gas flow velocity between two sieve trays is made slower by using the porous media model in Figure 8b.Since the liquid flow decelerates to zero as it approaches the sieve trays for Case 1 as shown in Figure 8a, it tends to accumulate at each top surface of sieve tray, as shown in Figure 7a.As for the porous media model of Case A, although the liquid flow cannot decelerate to zero velocity onto the sieve tray, it remains sufficient to hold the liquid flow and produce accumulation near the locations of sieve trays.
In contrast with the downward gas flow within the liquid column, the gas flow outside it moves upward, which can be seen in the streamlines in Figure 7b.As a result, the liquid column serves as a boundary to separate the upward and downward gas flow, and hence, a pair of counter vortexes between two neighboring sieve trays can be seen in Figure 7b for both cases.The radius of liquid column can be extracted from Figure 7a, which defines the size occupied by liquid flow.Figure 9 illustrates the radius as function of ydirection for Case 1 and Case A. The trend is comparable with average deficiency as 15% between Case 1 with perforated structure and Case A by porous media model.
Since the two-phase flow structures are comparable for both cases, as shown in Figure 7a,b, Figures 8 and 9, the mass fraction of SO2 (YSO2) in Figure 7c also has similar trends.Tables 4 and 5 further compare the SO2 removal efficiencies at different sampling points, which also displays similar trends and values between perforated structures (Case 1 and Case 2) and porous media models (Case A and B).Most important of all, the number of grid points after the sensitivity test for Case A and Case B is only 2 × 10 5 , which is onesixth that of Case 1 and Case 2. As a result, the computational time is reduced significantly Since the liquid flow decelerates to zero as it approaches the sieve trays for Case 1 as shown in Figure 8a, it tends to accumulate at each top surface of sieve tray, as shown in Figure 7a.As for the porous media model of Case A, although the liquid flow cannot decelerate to zero velocity onto the sieve tray, it remains sufficient to hold the liquid flow and produce accumulation near the locations of sieve trays.
In contrast with the downward gas flow within the liquid column, the gas flow outside it moves upward, which can be seen in the streamlines in Figure 7b.As a result, the liquid column serves as a boundary to separate the upward and downward gas flow, and hence, a pair of counter vortexes between two neighboring sieve trays can be seen in Figure 7b for both cases.The radius of liquid column can be extracted from Figure 7a, which defines the size occupied by liquid flow.Figure 9 illustrates the radius as function of y-direction for Case 1 and Case A. The trend is comparable with average deficiency as 15% between Case 1 with perforated structure and Case A by porous media model.
Tables 4 and 5 further compare the SO2 removal efficiencies at different sampling points, which also displays similar trends and values between perforated structures (Case 1 and Case 2) and porous media models (Case A and B).Most important of all, the number of grid points after the sensitivity test for Case A and Case B is only 2 × 10 5 , which is onesixth that of Case 1 and Case 2. As a result, the computational time is reduced significantly from 8 days to 15 h (32 cores with 3.82 GHz processer), which makes it easier to conduct different designs as Case C-G for the small-scale FGD tower in Table 6.Since the two-phase flow structures are comparable for both cases, as shown in Figure 7a,b, Figures 8 and 9, the mass fraction of SO 2 (Y SO2 ) in Figure 7c also has similar trends.Tables 4 and 5 further compare the SO 2 removal efficiencies at different sampling points, which also displays similar trends and values between perforated structures (Case 1 and Case 2) and porous media models (Case A and B).Most important of all, the number of grid points after the sensitivity test for Case A and Case B is only 2 × 10 5 , which is one-sixth that of Case 1 and Case 2. As a result, the computational time is reduced significantly from 8 days to 15 h (32 cores with 3.82 GHz processer), which makes it easier to conduct different designs as Case C-G for the small-scale FGD tower in Table 6.
Numerical Results by Porous Media Model in a Large-Scale Tower
A mesh with number of grid points 1.2 × 10 6 is required for a small-scale tower that has 1200 holes at those four sieve trays.As for the large-scale tower, the number of holes becomes 600,000, and it could require a mesh of six hundred million grid points if their resolutions remain the same.The previous subsection validated the feasibility of the porous media model in a small-scale tower, which has also been utilized for the large-scale tower in this study to save computational costs.The number of grid points for the large-scale tower is 8 × 10 5 with the porous media model near the sieve trays.
The size and flow conditions of the large-scale tower were introduced in Figure 4, Sections 3.1 and 3.3.The setup of the boundary conditions is the same as that for the small-scale tower in Section 3.4.The characteristics of perforated sieve trays used in the large-scale tower are f h = 27.7%,t h /D h = 0.75, Re h,g = 5500, and Re h,l = 940, corresponding to C g,y = 18.41 and C l,y is 17.3 from Equation (16).By comparing with the measured pressure drop as 901 Pa in the experiment, the inertial loss coefficient of gas C g,y is adjusted 3 times from 18.41 to 73.3, while C l,y remains 17.3 to compensate the effects of two-phase counter flow as mentioned previously in Section 3.5.After these treatments, the pressure drop calculated by the porous media model is 892 Pa, which is very close to the measured data.The simulation result displays unsteady flow structures, and is illustrated in Figure 10a,b with time step size as 0.05 s.The two-phase mixing is very strong within the tray regions in Figure 10a, resulting in a significant reduction of SO 2 mass fraction from the inlet, as shown in Figure 10b.
as mentioned previously in Section 3.5.After these treatments, the pressure drop calculated by the porous media model is 892 Pa, which is very close to the measured data.The simulation result displays unsteady flow structures, and is illustrated in Figure 10a,b with time step size as 0.05 s.The two-phase mixing is very strong within the tray regions in Figure 10a, resulting in a significant reduction of SO2 mass fraction from the inlet, as shown in Figure 10b.Figure 11 shows the instantaneous variation of mass fraction of SO2 (YSO2) at outlet for the full-scale tower.The value of YSO2 varies between 4 and 11 ppm, corresponding to SO2 removal efficiency of 97-98.9%.The average efficiency is 98.3%.The experiment of this large-scale tower by China Steel Cooperation reports the efficiency as 96%.Therefore, this subsection further validates the porous media model in the full-scale tower.
Comparisons between Four-Tray and Empty Tower (Case A and Case C)
Case C has the same flow conditions (L/G = 4.9) as Case A, except all the sieve trays are removed.Figure 12a,b shows the liquid flow structures by αl and gas flow structure by streamlines for Cases A and C. As the implementation of sieve trays can provide deceleration of the liquid phase as it approaches the sieve tray, the liquid phase of Case A is apparently much denser at the tray locations than for Case C in Figure 12a.The gas flow within the liquid column of Case C remains downward because of the large liquid-todensity ratio.Instead of several pairs of counter vortexes, as shown in Figure 12b for Case A, the downward gas flow only forms one larger pair of counter vortexes with the uprising gas flow outside the liquid column.Therefore, the complex flow structure of Case A in Figure 12b can enhance the two-phase mixing and the ensuing chemical reactions in Equations ( 6)~(8) more efficiently.As a result, the SO2 mass fraction within the liquid column of Case A is lower than that of Case C in Figure 12c.
Designs of the Small-Scale Tower from Numerical Experiments 4.1. Comparisons between Four-Tray and Empty Tower (Case A and Case C)
Case C has the same flow conditions (L/G = 4.9) as Case A, except all the sieve trays are removed.Figure 12a,b shows the liquid flow structures by α l and gas flow structure by streamlines for Cases A and C. As the implementation of sieve trays can provide deceleration of the liquid phase as it approaches the sieve tray, the liquid phase of Case A is apparently much denser at the tray locations than for Case C in Figure 12a.The gas flow within the liquid column of Case C remains downward because of the large liquidto-density ratio.Instead of several pairs of counter vortexes, as shown in Figure 12b for Case A, the downward gas flow only forms one larger pair of counter vortexes with the uprising gas flow outside the liquid column.Therefore, the complex flow structure of Case A in Figure 12b can enhance the two-phase mixing and the ensuing chemical reactions in Equations ( 6)-( 8) more efficiently.As a result, the SO 2 mass fraction within the liquid column of Case A is lower than that of Case C in Figure 12c.
density ratio.Instead of several pairs of counter vortexes, as shown in Figure 12b for Case A, the downward gas flow only forms one larger pair of counter vortexes with the uprising gas flow outside the liquid column.Therefore, the complex flow structure of Case A in Figure 12b can enhance the two-phase mixing and the ensuing chemical reactions in Equations ( 6)~(8) more efficiently.As a result, the SO2 mass fraction within the liquid column of Case A is lower than that of Case C in Figure 12c.However, from Table 6, the SO 2 removal efficiency at outlet of Case C (56.9%) is very close to that of Case A (57.1%), which seems to be inconsistent with previous results shown in Figure 12c.The contradiction can be explained in Figure 13. Figure 13 shows the top view of SO 2 mass fraction at different cross sections as y = 0.615 and 2.46 m from Figure 12c.The SO 2 mass fraction is indeed lower for Case A near the center where the liquid column passes through, but there exists a high SO 2 region to the right of the lateral at each cross section.This trend can also be seen in Figure 12c for Case A near the right wall, which is related to the size of liquid column and explained as follows.
Appl.Sci.2022, 12, x FOR PEER REVIEW 17 of 24 However, from Table 6, the SO2 removal efficiency at outlet of Case C (56.9%) is very close to that of Case A (57.1%), which seems to be inconsistent with previous results shown in Figure 12c.The contradiction can be explained in Figure 13. Figure 13 shows the top view of SO2 mass fraction at different cross sections as y = 0.615 and 2.46 m from Figure 12c.The SO2 mass fraction is indeed lower for Case A near the center where the liquid column passes through, but there exists a high SO2 region to the right of the lateral at each cross section.This trend can also be seen in Figure 12c for Case A near the right wall, which is related to the size of liquid column and explained as follows.Figure 8 above displays the deceleration of liquid flow near the top surface of each sieve tray and acceleration as liquid flow penetrates it.Furthermore, the dilute liquid flow can accumulate into a denser liquid column by using sieve trays, as shown in Figure 7a.As a result of the continuity, the deceleration enlarges the region of the liquid column, while the acceleration reduces it.This trend can be seen every time the liquid column passes the tray regions from their top to bottom surfaces in Figure 9. Figure 8 further indicates that the velocity of liquid flow has not reached its terminal velocity between each sieve tray, which implies that the acceleration remains even after the tray region.As a result, the size of the liquid column keeps decreasing in the region between two trays.These aforementioned mechanisms result in a continuous reduction of the radius of liquid column, as shown in Figure 9.
Without a sieve tray, the liquid flow fails to accumulate to form a liquid column with Figure 8 above displays the deceleration of liquid flow near the top surface of each sieve tray and acceleration as liquid flow penetrates it.Furthermore, the dilute liquid flow can accumulate into a denser liquid column by using sieve trays, as shown in Figure 7a.As a result of the continuity, the deceleration enlarges the region of the liquid column, while the acceleration reduces it.This trend can be seen every time the liquid column passes the tray regions from their top to bottom surfaces in Figure 9. Figure 8 further indicates that the velocity of liquid flow has not reached its terminal velocity between each sieve tray, which implies that the acceleration remains even after the tray region.As a result, the size of the liquid column keeps decreasing in the region between two trays.These aforementioned mechanisms result in a continuous reduction of the radius of liquid column, as shown in Figure 9.
Without a sieve tray, the liquid flow fails to accumulate to form a liquid column with a higher liquid volume fraction.The liquid column resembles several different discrete liquid flows.This scatter characteristic makes its variation in size less sensitive to the y-direction.The radius of liquid column is almost decided by the radial location of the outermost nozzle spray shown in Figure 3b.As a result, the radius of the liquid column of Case A is reduced more rapidly than that of Case C in Figure 12a, which is further illustrated in Figure 14 in the y-direction.The size of the liquid column is apparently larger for Case C without sieve tray.To sum up, although the two-phase mixing within the liquid column for Case A is better than that of Case C in Figure 12c, the narrower liquid column in Figure 14 produces a larger area with high SO2 mass fraction when the four sieve trays are implemented.These two competing effects end up with very comparable SO2 removal efficiencies at outlet, as listed in Table 6 (57.1% and 56.9%).
Comparisons between Empty and One-Tray Tower (Case C and Case D)
As shown in Table 6, Case D has the same flow conditions (L/G = 4.9) as Case A and C, except only the lowest sieve tray is implemented, as shown in Figure 15a.Figure 15a The region outside the liquid column is the uprising gas flow.It is basically located around the wall of tower with poor two-phase mixing and corresponds to a region of high SO 2 mass fraction.Therefore, the narrower liquid column for Case A incurs a larger area of uprising gas flow with high SO 2 mass fraction than that in Case C, as illustrated in Figures 12c and 13.Moreover, one can see that the distributions of high SO 2 regions are not symmetric.As the inlet of gas flow is at the right side of the tower as shown in Figure 3a, a portion of the gas flow has already encountered the liquid column at the region of y < 0 in Figure 3a before it enters the left side of the tower as y > 0. Consequently, the SO 2 mass fraction is higher on the right side than on the left side of the tower, as shown in Figure 13.
To sum up, although the two-phase mixing within the liquid column for Case A is better than that of Case C in Figure 12c, the narrower liquid column in Figure 14 produces a larger area with high SO 2 mass fraction when the four sieve trays are implemented.These two competing effects end up with very comparable SO 2 removal efficiencies at outlet, as listed in Table 6 (57.1% and 56.9%).
Comparisons between Empty and One-Tray Tower (Case C and Case D)
As shown in Table 6, Case D has the same flow conditions (L/G = 4.9) as Case A and C, except only the lowest sieve tray is implemented, as shown in Figure 15a.Figure 15a,b shows the liquid flow structures for Cases C and D. Before the liquid flow reaches the lowest sieve tray, Cases C and D have consistent size of liquid column, as shown in Figures 15a and 16.Moreover, the liquid flow reaches the terminal velocity 4.2 m/s at y = 0.75 m for both cases, and this stronger inertia causes a more obvious deflection when it impacts the lowest sieve tray for Case D, as highlighted in Figure 16.Although the liquid column becomes smaller after the lowest sieve tray, as discussed previously, the deflection makes the reduction from a wider liquid column.This results in a larger liquid column for Case D than Case C in Figure 16.Correspondingly, the high SO2 region near the wall of tower is reduced significantly from Case C to Case D, as shown in Figure 15c at y = 0.615 m.The stronger two-phase mixing near the sieve tray and smaller region of high SO2 near the wall lead to a higher SO2 removal efficiency for Case D (61.6%) than Case C (56.9%) in Table 6.In comparison with the four-tray Case A in Figure 13, the one-tray Case D in Figure 15c reduces the high SO2 region significantly, while it lacks two-phase mixing from the other three trays.The benefit of the first overwhelms the drawback from the latter, which makes Case D with only one tray perform better than Case A (57.1%).Correspondingly, the high SO2 region near the wall of tower is reduced significantly from Case C to Case D, as shown in Figure 15c at y = 0.615 m.The stronger two-phase mixing near the sieve tray and smaller region of high SO2 near the wall lead to a higher SO2 removal efficiency for Case D (61.6%) than Case C (56.9%) in Table 6.In comparison with the four-tray Case A in Figure 13, the one-tray Case D in Figure 15c reduces the high SO2 region significantly, while it lacks two-phase mixing from the other three trays.The benefit of the first overwhelms the drawback from the latter, which makes Case D with only one tray perform better than Case A (57.1%).Correspondingly, the high SO 2 region near the wall of tower is reduced significantly from Case C to Case D, as shown in Figure 15c at y = 0.615 m.The stronger two-phase mixing near the sieve tray and smaller region of high SO 2 near the wall lead to a higher SO 2 removal efficiency for Case D (61.6%) than Case C (56.9%) in Table 6.In comparison with the four-tray Case A in Figure 13, the one-tray Case D in Figure 15c reduces the high SO 2 region significantly, while it lacks two-phase mixing from the other three trays.The benefit of the first overwhelms the drawback from the latter, which makes Case D with only one tray perform better than Case A (57.1%).
Comparisons between Different Inlet Gas Flow Rates (Case E-Case G)
From Case A to Case D, the inlet gas volume flow rates Q are all 24 m 3 /min, as shown in Table 6.Further increase in gas volume flow rates to 36 m 3 /min are applied at empty, one-tray, and four-tray tower as Case E, Case F, and Case G.The two-phase inertial loss coefficients are set up as C g,y = 15.7 (33% increase from Equation ( 16)) and C l,y = 10.31 under the same treatments as described in Section 3.5.
The stronger inertia of gas flow deflects the liquid flow into the right side of the tower for Case E (empty tower), as shown in Figure 17a, and its streamlines of gas phase in Figure 17b indicate that the two-phase mixing is very weak in the left of the tower because of the deflection.Therefore, its SO 2 removal efficiency is only 48% in Table 6.As for Case F in Figure 17a, the sieve tray can enhance the uniformity of the gas flow [4] to avoid the deflection in the empty tower and exhibit a better two-phase mixing, as shown in Figure 17b.As a result, the SO 2 removal efficiency increases from 48% to 58.9%, as shown in Table 6.
Comparisons between Different Inlet Gas Flow Rates (Case E-Case G)
From Case A to Case D, the inlet gas volume flow rates Q are all 24 m 3 /min, as shown in Table 6.Further increase in gas volume flow rates to 36 m 3 /min are applied at empty, one-tray, and four-tray tower as Case E, Case F, and Case G.The two-phase inertial loss coefficients are set up as Cg,y = 15.7 (33% increase from Equation ( 16)) and Cl,y = 10.31 under the same treatments as described in Section 3.5.
The stronger inertia of gas flow deflects the liquid flow into the right side of the tower for Case E (empty tower), as shown in Figure 17a, and its streamlines of gas phase in Figure 17b indicate that the two-phase mixing is very weak in the left of the tower because of the deflection.Therefore, its SO2 removal efficiency is only 48% in Table 6.As for Case F in Figure 17a, the sieve tray can enhance the uniformity of the gas flow [4] to avoid the deflection in the empty tower and exhibit a better two-phase mixing, as shown in Figure 17b.As a result, the SO2 removal efficiency increases from 48% to 58.9%, as shown in Table 6.With a lower Q of 24 m 3 /min, the implementation of an additional three sieve trays reduces the SO2 removal efficiency from 61.6% (Case D) to 57.1% (Case A), as discussed in Section 4.2 and shown in Table 6.With all four sieve trays implemented, the gas and liquid flow structures of Case G in Figure 17 are very similar to those of Case A in Figure 12 under a lower value of Q.However, the full implementation of sieve tray under a higher Q of 36 m 3 /min enhances the efficiency from 58.8% (Case F) to 63% (Case G) and even increases the efficiency by 15% from the empty tower (Case E).
Case
As the inertia of the uprising gas flow is stronger as Q increases, the acceleration of downward liquid flow as that discussed in Figure 8 for Case A becomes weaker.For Case A with Q = 24 m 3 /min, the downward liquid flow velocity can reach maximum velocity of 2.5 m/s between two trays, while it is only 1.9 m/s for Case G.This weaker acceleration leads to a gentler reduction of the size of liquid column.As a result, the radius of liquid column for four-tray Case G is only 26% smaller than that of one-tray Case F, as illustrated in Figure 18 (Q = 36 m 3 /min), while the additional three sieve trays incur a greater reduction of 48% (from Case A to Case D), as shown in Figures 14 and 16 With a lower Q of 24 m 3 /min, the implementation of an additional three sieve trays reduces the SO 2 removal efficiency from 61.6% (Case D) to 57.1% (Case A), as discussed in Section 4.2 and shown in Table 6.With all four sieve trays implemented, the gas and liquid flow structures of Case G in Figure 17 are very similar to those of Case A in Figure 12 under a lower value of Q.However, the full implementation of sieve tray under a higher Q of 36 m 3 /min enhances the efficiency from 58.8% (Case F) to 63% (Case G) and even increases the efficiency by 15% from the empty tower (Case E).
As the inertia of the uprising gas flow is stronger as Q increases, the acceleration of downward liquid flow as that discussed in Figure 8 for Case A becomes weaker.For Case A with Q = 24 m 3 /min, the downward liquid flow velocity can reach maximum velocity of 2.5 m/s between two trays, while it is only 1.9 m/s for Case G.This weaker acceleration leads to a gentler reduction of the size of liquid column.As a result, the radius of liquid column for four-tray Case G is only 26% smaller than that of one-tray Case F, as illustrated in Figure 18 (Q = 36 m 3 /min), while the additional three sieve trays incur a greater reduction of 48% (from Case A to Case D), as shown in Figures 14 and 16 Correspondingly, in comparisons with Case F, the full implementation of sieve tray for Case G significantly enhances the two-phase mixing between neighboring trays in Figure 17b, and only reduces the size of liquid column by 26% in Figure 18.The former suppresses the latter and favors the SO2 removal efficiency, as shown in Table 6 for Q = 36 m 3 /min.
Conclusions
This study applied an Eulerian-Eulerian two-phase model with Navier-Stokes equations, porous media model, standard k-ε turbulence closure, and species equations to investigate the fluid mechanics and desulfurization process within the FGD tower of sieve trays.Important findings and contributions can be highlighted as follows: 1.The complex structures of sieve trays are replaced by the porous media model, which significantly saves the computational time with consistent results with those simulated by detailed perforated structures and measured data in a small-scale FGD tower.As for the full-scale tower, the computational cost with detailed structure is too expansive considering the enormous scale ratio between the perforated hole at the sieve tray and radius of tower.The porous media model makes the simulations of full-scale tower more practical and was validated in the experiments, which further proves the feasibility of using a porous media model.2. The liquid column from the nozzle experiences deceleration near the top surface of sieve tray and acceleration within it.As the liquid flow has not reached its terminal velocity, the acceleration remains even the liquid passes the tray region.The deceleration leads to the accumulation of liquid volume fraction near the sieve tray, while the acceleration reduces the size of liquid column because of continuity.These mechanisms result in the reduction of liquid column between two neighboring sieve trays and affect the two-phase mixing within the FGD tower.3. The empty, one-tray, and four-tray towers were simulated at different flow conditions.The size of liquid column with better two-phase mixing in the center and the area of uprising gas near the wall play the most important roles to determine the desulfurization performance.At different flow conditions, such as the variation of inlet gas flow rate, these two competing effects end up with different results, which also affects the selections of tray setups.4. The four-tray tower has the best two-phase mixing.However, its liquid column is the smallest, which also ensues the largest area of uprising gas near the wall with higher SO2 concentration.As a result, the four-tray tower at a lower gas flow rate fails to Correspondingly, in comparisons with Case F, the full implementation of sieve tray for Case G significantly enhances the two-phase mixing between neighboring trays in Figure 17b, and only reduces the size of liquid column by 26% in Figure 18.The former suppresses the latter and favors the SO 2 removal efficiency, as shown in Table 6 for Q = 36 m 3 /min.
Conclusions
This study applied an Eulerian-Eulerian two-phase model with Navier-Stokes equations, porous media model, standard k-ε turbulence closure, and species equations to investigate the fluid mechanics and desulfurization process within the FGD tower of sieve trays.Important findings and contributions can be highlighted as follows: 1.
The complex structures of sieve trays are replaced by the porous media model, which significantly saves the computational time with consistent results with those simulated by detailed perforated structures and measured data in a small-scale FGD tower.As for the full-scale tower, the computational cost with detailed structure is too expansive considering the enormous scale ratio between the perforated hole at the sieve tray and radius of tower.The porous media model makes the simulations of full-scale tower more practical and was validated in the experiments, which further proves the feasibility of using a porous media model.
2.
The liquid column from the nozzle experiences deceleration near the top surface of sieve tray and acceleration within it.As the liquid flow has not reached its terminal velocity, the acceleration remains even the liquid passes the tray region.The deceleration leads to the accumulation of liquid volume fraction near the sieve tray, while the acceleration reduces the size of liquid column because of continuity.These mechanisms result in the reduction of liquid column between two neighboring sieve trays and affect the two-phase mixing within the FGD tower.
3.
The empty, one-tray, and four-tray towers were simulated at different flow conditions.The size of liquid column with better two-phase mixing in the center and the area of uprising gas near the wall play the most important roles to determine the desulfurization performance.At different flow conditions, such as the variation of inlet gas flow rate, these two competing effects end up with different results, which also affects the selections of tray setups.
4.
The four-tray tower has the best two-phase mixing.However, its liquid column is the smallest, which also ensues the largest area of uprising gas near the wall with higher SO 2 concentration.As a result, the four-tray tower at a lower gas flow rate fails to improve the SO 2 removal efficiency from the other two tray setups at a lower gas flow rate.
5.
For a higher inlet gas volume flow rate, the stronger inertial of uprising gas flow leads to a weaker acceleration of the liquid column, and hence, the reduction in size of the liquid column is gentler.Correspondingly, implementing four sieve trays is more efficient under a higher gas volume flow rate.It enhances the performance by 15% from empty tower and 5% from one-tray tower.
Figure 1 .
Figure 1.This schematic shows the components in gas and liquid phases during the desulfurization process.
Figure 2
Figure 2 illustrates the chemical reactions during the desulfurization process.In the upper left corner of Figure 2, it indicates that the SO 2 gas dissolves into liquid phase and hydrates to form SO 2 •H 2 O.One proton is lost and HSO 3 − is generated because of the dissociation, and HSO 3 − has subsequent dissociation to lose another proton and form SO 3 −2 : SO 2(aq) + H 2 O H + + HSO − 3
Figure 2 .
Figure 2.This schematic illustrates the chemical reactions during the desulfurization process.
Figure 2 .
Figure 2.This schematic illustrates the chemical reactions during the desulfurization process.
(OH) 2 solution has complete dissociation, the initial value C OH− l from the spray nozzles is double that of the C Mg2+ l
Figure 3 .
Figure 3.This figure displays (a) details of the small-scale FGD tower and (b) the layout of the nozzle arrays from the top view with the sampling locations for the SO2 concentration.The unit is meter.
Figure 4
Figure 4 illustrates the full-scale FGD tower.The height and diameter of the tower are 23 and 7 m.The tower has three sieve trays, and the gas is transported by a pipeline with diameter of 3.6 m and it leaves the domain from the outlet with a diameter of 3.6 m.There are two sets of nozzle arrays.One is above the highest tray by 1 m with 192 nozzles, and the other is below the lowest tray by 1.2 m with 29 nozzles.Both sets of nozzles have spray angle of 65°and diameter of 0.05 m.The experiments for both small-and large-scale towers have already been conducted by China Steel Cooperation[34] with measured data for desulfurization and pressure at designated sampling points.
Figure 3 .
Figure 3.This figure displays (a) details of the small-scale FGD tower and (b) the layout of the nozzle arrays from the top view with the sampling locations for the SO 2 concentration.The unit is meter.Appl.Sci.2022, 12, x FOR PEER REVIEW 9 of 24
Figure 4 .
Figure 4.This figure highlights the details of the large-scale FGD tower.The unit is meter.
Figure 4 .
Figure 4.This figure highlights the details of the large-scale FGD tower.The unit is meter.
Figure 5 .
Figure 5.This figure illustrates the top view of the perforated structure of the sieve tray.
Figure 5 .
Figure 5.This figure illustrates the top view of the perforated structure of the sieve tray.
Figure 6 .
Figure 6.The figure displays the grid distributions of the small-scale FGD tower at the symmetric plane (z = 0) in (a) and the detailed layouts for the right tower near the sieve trays and nozzles in (b).
Figure 6 .
Figure 6.The figure displays the grid distributions of the small-scale FGD tower at the symmetric plane (z = 0) in (a) and the detailed layouts for the right tower near the sieve trays and nozzles in (b).
Figure 7 .
Figure 7.The figure displays (a) the liquid volume fraction, (b) streamlines of gas flow, and (c) the mass fraction of SO2 within the right tower of the small-scale FGD facility for Case 1 (real perforated structure) and Case A (porous media model).
Figure 7 .
Figure 7.The figure displays (a) the liquid volume fraction, (b) streamlines of gas flow, and (c) the mass fraction of SO 2 within the right tower of the small-scale FGD facility for Case 1 (real perforated structure) and Case A (porous media model).
Figure 8 .
Figure 8.This figure displays the liquid and gas flow velocity along the centerline of the small-scale tower for Case 1 and A.
Figure 8 .
Figure 8.This figure displays the liquid and gas flow velocity along the centerline of the small-scale tower for Case 1 and A.
Figure 9 .
Figure 9.This schematic shows the radius of liquid column of the small-scale tower for Cases 1 and A.
Figure 9 .
Figure 9.This schematic shows the radius of liquid column of the small-scale tower for Cases 1 and A.
Figure 10 .
Figure 10.The figure displays (a) the liquid volume fraction and (b) the mass fraction of SO2 (YSO2) for the large-scale tower at different times, by the porous media model.
Figure 10 .
Figure 10.The figure displays (a) the liquid volume fraction and (b) the mass fraction of SO 2 (Y SO2 ) for the large-scale tower at different times, by the porous media model.
Figure 11 24 Figure 11 .
Figure 11 shows the instantaneous variation of mass fraction of SO 2 (Y SO2 ) at outlet for the full-scale tower.The value of Y SO2 varies between 4 and 11 ppm, corresponding to SO 2 removal efficiency of 97-98.9%.The average efficiency is 98.3%.The experiment of this large-scale tower by China Steel Cooperation reports the efficiency as 96%.Therefore, this subsection further validates the porous media model in the full-scale tower.Appl.Sci.2022, 12, x FOR PEER REVIEW 16 of 24
Figure 11 .
Figure 11.The schematic shows the instantaneous variation of mass fraction of SO 2 (Y SO2 ) at outlet for the full-scale tower.
Figure 12 .
Figure 12.The figure displays (a) the liquid volume fraction, (b) streamlines of gas flow, and (c) the mass fraction of SO2 within the right tower of a small-scale FGD facility for Case A and Case C (empty tower).
Figure 12 .
Figure 12.The figure displays (a) the liquid volume fraction, (b) streamlines of gas flow, and (c) the mass fraction of SO 2 within the right tower of a small-scale FGD facility for Case A and Case C (empty tower).
Figure 13 .
Figure 13.This schematic shows the mass fraction of SO2 (YSO2) at different cross sections from top view for Case A and Case C.
Figure 13 .
Figure 13.This schematic shows the mass fraction of SO 2 (Y SO2 ) at different cross sections from top view for Case A and Case C.
Figure 14 .
Figure 14.This schematic shows the radius of liquid column of the small-scale tower for Cases A and C.
,b shows the liquid flow structures for Cases C and D. Before the liquid flow reaches the lowest sieve tray, Cases C and D have consistent size of liquid column, as shown in Figures 15a and 16.Moreover, the liquid flow reaches the terminal velocity 4.2 m/s at y = 0.75 m for both cases, and this stronger inertia causes a more obvious deflection when it impacts the lowest sieve tray for Case D, as highlighted in Figure16.Although the liquid column becomes smaller after the lowest sieve tray, as discussed previously, the deflection makes the reduction from a wider liquid column.This results in a larger liquid column for Case D than Case C in Figure16.
Figure 14 .
Figure 14.This schematic shows the radius of liquid column of the small-scale tower for Cases A and C.
Figure 15 .
Figure 15.The figure displays (a) the liquid volume fraction, (b) streamlines of gas flow, and (c) the mass fraction of SO2 from the top view at y = 0.615 m for Case C (empty tower) and Case D (onetray tower).
Figure 16 .
Figure 16.This schematic shows the radius of liquid column of the small-scale tower for Case C and D.
Figure 15 .
Figure 15.The figure displays (a) the liquid volume fraction, (b) streamlines of gas flow, and (c) the mass fraction of SO 2 from the top view at y = 0.615 m for Case C (empty tower) and Case D (onetray tower).
Figure 15 .
Figure 15.The figure displays (a) the liquid volume fraction, (b) streamlines of gas flow, and (c) the mass fraction of SO2 from the top view at y = 0.615 m for Case C (empty tower) and Case D (onetray tower).
Figure 16 .
Figure 16.This schematic shows the radius of liquid column of the small-scale tower for Case C and D.
Figure 16 .
Figure 16.This schematic shows the radius of liquid column of the small-scale tower for Case C and D.
Figure 17 .
Figure 17.The figure displays (a) the liquid volume fraction and (b) streamlines of gas flow for Case E-Case G.
Figure 17 .
Figure 17.The figure displays (a) the liquid volume fraction and (b) streamlines of gas flow for Case E-Case G.
Figure 18 .
Figure 18.This schematic shows the radius of liquid column of the small-scale tower for Cases F and G.
Figure 18 .
Figure 18.This schematic shows the radius of liquid column of the small-scale tower for Cases F and G.
Table 1 .
This table displays the important parameters of the sieve tray.
Table 1 .
This table displays the important parameters of the sieve tray.
h /D h Re h,gRe h,l
Table 2 .
This table highlights the important parameters of inlet flow of Case 1 (L/G=4.9)for the small-scale tower.
Table 3 .
This table lists the important parameters of inlet flow of Case 2 (L/G=3.2). for the small-scale tower.
Table 4 .
[36] table displays the SO 2 removal efficiencies of small-scale tower by numerical and experimental results[36]at different sampling points as L/G = 4.9.
Table 5 .
[36] table displays the SO 2 removal efficiencies of small-scale tower by numerical and experimental results[36]at different sampling points as L/G = 3.2.
Table 6 .
This table highlights the different cases of small-scale tower that computed by porous media model with SO 2 removal efficiencies at outlet.
6.Sections 4.2 and 4.3 indicate that the sieve tray can enhance two-phase mixing within the liquid column, but it also increases the region of uprising gas where the SO 2 mass fraction is higher.Depending on different values of the gas volume flow rate Q, these two competing effects end up with different trends for SO 2 removal efficiency. | 18,750.2 | 2022-02-22T00:00:00.000 | [
"Chemistry",
"Physics"
] |
Optoacoustic model-based inversion using anisotropic adaptive total-variation regularization
In optoacoustic tomography, image reconstruction is often performed with incomplete or noisy data, leading to reconstruction errors. Significant improvement in reconstruction accuracy may be achieved in such cases by using nonlinear regularization schemes, such as total-variation minimization and L1-based sparsity-preserving schemes. In this paper, we introduce a new framework for optoacoustic image reconstruction based on adaptive anisotropic total-variation regularization, which is more capable of preserving complex boundaries than conventional total-variation regularization. The new scheme is demonstrated in numerical simulations on blood-vessel images as well as on experimental data and is shown to be more capable than the total-variation-L1 scheme in enhancing image contrast.
Introduction
Optoacoustic tomography (OAT) is a hybrid imaging modality capable of visualizing optically absorbing structures with ultrasound resolution at tissue depths in which light is fully diffused [1][2][3][4][5]. The excitation in OAT is most often performed via high-energy short pulses whose absorption in the tissue leads to the generation of acoustic sources via the process of thermal expansion. The acoustic signals from the sources are measured over a surface that partially or fully surrounds the imaged object and used to form an image of the acoustic sources, which generally represents the local energy absorption in the tissue [6]. Since hemoglobin is one of the strongest absorbing tissue constituents, optoacoustic images often depict blood vessels and blood-rich organs, such as the kidneys [7] and heart [8].
Numerous algorithms exist for reconstructing optoacoustic images from measured tomographic data [6]. In several imaging geometries, analytical formulae exist that may be applied directly on the measured data in either time [9,10] or frequency domain [11] to yield an exact reconstruction. The popularity of analytical formulae may be attributed to the simplicity of their implementation and low computational burden [12] However, analytical algorithms are not exact for arbitrary detection surfaces or detector geometries and lack the possibility of regularizing the inversion in the case of noisy or incomplete data. In such cases, it is often preferable to use model-based algorithms, in which the relation between the image and measured data is represented by a matrix whose inversion is required to reconstruct the image.
In the last decade, numerous regularization approaches have been demonstrated for model-based image reconstruction. The most basic approach is based on energy minimization and includes techniques such as Tikhonov regularization [13] and truncated singular-value decomposition [14]. In these techniques, a cost function on the image or components thereof is used to avoid divergence of the solution in the case of missing data, generally without making any assumptions on the nature of the solution. More advanced approaches to regularization exploit the specific properties of the reconstructed image. Since natural images may be sparsely represented when transformed to an alternative basis, e.g. the wavelet basis, using nonlinear cost functions that promote sparsity in such bases may be used for denoising and image reconstruction from missing data [15][16][17][18][19][20][21]. In images of blood vessels, in which the boundaries of the imaged structures may be of higher importance than the texture in the image, total-variation (TV) minimization has been shown to enhance image contrast and reduce artifacts [22][23][24] In TV regularization, the cost function regularizer is the L 1 norm of the image gradient, which is generally lower for images with sharp, yet very localized, variations than for images in which small variations occur across the entire image. Therefore TV regularization enhances boundaries and reduces texture, where over-regularization may lead to almost piecewise constant images, which are often referred to as cartoon-like. While TV regularization is capable of accentuating the boundaries of imaged objects, it does not treat all boundaries the same. In particular, boundaries with short lengths will lead to a lower TV cost function than boundaries with long lengths. Thus, complex, non-convex boundaries may be rounded by TV regularization to the closest convex form. Recently, Wang et al. have shown that if the directionality of the TV functional is adapted to the image features, TV regularization may be applied for optoacoustic reconstruction of objects with non-convex boundaries without distorting the boundaries [25].
In this paper, we demonstrate a new regularization framework for model-based optoacoustic image reconstruction that overcomes the limitations of TV regularization and is compatible with objects with complex non-convex boundaries. In our scheme, an adaptive anisotropic total variation (A 2 TV) cost function is used, in which the cost function is determined by the specific geometry of the imaged objects [26]. In particular, the A 2 TV cost function wishes to minimize the total variation in directions that are orthogonal to the boundary of the object. In contrast to [25], where the boundaries were calculated using geometrical considerations limited to 2D images, the A 2 TV framework developed in this work is based on eigenvalue decomposition of the image structure tensor, which may be applied in higher dimensions. The proposed formalism in the current study is based on a recent work by part of the authors which is concerned with nonlinear spectral analysis of the A 2 TV functional [26]. The work of Ref. [26] examines shapes which are perfectly preserved under A 2 TV regularization. Earlier works concerning TV have shown that only convex rounded shapes of low curvature are preserved [27]. For A 2 TV, however, it is shown in [26] that a parameter controlling the local extent of directionality is directly related to the degree of convexity (in the sense of [28]) and to the curvature magnitude of structures which are preserved. Thus, with appropriate parameters, long vessels of complex-nonconvex structure can be better regularized, keeping the original structure intact.
The performance of A 2 TV regularization was tested numerically in this work in numerical simulations for complex images of blood vessels and experimentally on 2D phantoms. The simulations were performed for the cases of noisy data and missing data and compared to unregularized reconstructions as well as to TV-L 1 reconstructions [15]. In both the numerical and experimental reconstructions, A 2 TV significantly increased the image contrast and was more capable than TV-L 1 in preserving non-convex structures when strong regularization was performed. In the experimental reconstructions, A 2 TV achieved a higher level of contrast enhancement of weak structures than the one achieved by TV-L 1 .
The rest of the paper is organized as follows: in Section 2 we give the theoretical background for OAT image reconstruction. Section 3 introduces the framework of A 2 TV and the A 2 TV algorithm for OAT image reconstruction developed in this work. The simulation results are given in Section 4, while the experimental ones are given in Section 5. We conclude the paper with a Discussion in Section 6.
The forward problem
The acoustic waves in OAT are commonly described by a pressure field p(r, t) that fulfills the following wave equation [29]: where c is the speed of sound in the medium, t is time, r = (x, y, z) denotes position in 3D space, p(r, t) is the generated pressure, Γ is the Grüneisen parameter, and H r (r)H t (t) is the energy per unit volume and unit time. The spatial distribution function of energy deposited in the imaged object, H r (r), is referred to in the rest of the paper as the optoacoustic image.
The analysis of (1) for an optoacoustic point source at r′, i.e. H r (r) = δ(r − r′), may be performed in either time or frequency domain. In the time domain, a short-pulse excitation H t (t) = δ(t) is used. In this case, the solution to Eq. (1) is given by [30] = p t c t r r r r r For a general image H r (r), the solution for p(r, t) may be obtained by convolving H r (r) with the expression in Eq. (2), which yields: Since the relation between the measured pressure signals, or projections, and the image is linear, it may be represented by a matrix relation in its discrete form: where p and u are vector representations of the acoustic signals and originating image respectively, and M is the model matrix that represents the operations in Eq. (3). In our work, the images is given on a two dimensional grid, and the measured pressure signals are also two dimensional, where one dimension represents the projection number, and the other time. An illustration of the image grid and projection and their respective mapping to the vectors u and p, is shown in Fig. 1. Briefly, the vector u is divided into sub-vectors, each representing the image values for a given column, whereas the vector p is divided to subvectors, each of which represents the time-domain pressure signal for a given location of the acoustic detector.
The ith column of the matrix M represents the set of acoustic signals generated by a pixel corresponding to the location of the ith entry in the vector u. Accordingly, to calculate the matrix M one needs to define time domain signal for a given detector location expected for a discrete pixel. Since the operations in Eq. (3) relate to continuous, rather than discrete images, one first needs to define the continuous representation of a single pixel, and then calculate its respective time-domain signals.
For example, in [31] it was assumed that the image value was constant over each of the square pixels, leading to an image H r (r) that is piecewise constant. While simple to implement, a simple piece-wise uniform model for H r (r) includes discontinuities that lead to significant numerical errors owing to the derivative operation in Eq. (3). In the current work, we use the model of [32], in which the image H r (r) represented by a linear interpolation between its grid points.
The inverse problem
While several approaches to OAT image reconstruction exist, we focus herein on image reconstruction within the discrete model-based framework described in the previous sub-section, which involves inverting the matrix relation in Eq. (4) to recover u from p. The most basic method to invert Eq. (4) is based on solving the following optimization problem: where u* is the solution and ∥ · ∥ 2 is the L 2 norm. A unique solution to Eq. (5) exists, which is given by the Moore-Penrose inverse: Alternatively, Eq. (5) may be solved via iterative optimization algorithms. In particular, since the matrix M is sparse, efficient inversion may be achieved by the LSQR algorithm [33].
In many cases, the measured projection data p is insufficient to achieve a high-quality reconstruction of u that accurately depicts its morphology. For example, when the density or coverage of the projections is too small, the matrix M may become ill-conditioned, leading to significant, possibly divergent, image artifact. In other cases, M may be well conditioned, but the measurement data may be too noisy to accurately recover u. In both these cases, regularization may be used to improve image quality by incorporating previous knowledge on the properties image in the inversion process.
One of the simplest forms of regularization is Tikhonov regularization, in which an additional cost function is added [13]: where L is weighting matrix. In the simplest form of Tikhonov regularization, L is equal to the identity matrix, i.e. L = I, thus putting a penalty on the energy of the image. The value of the regularization parameter λ > 0 controls the tradeoff between fidelity and smoothness, where over-regularization may lead to the smearing of edges and texture in the image. An alternative to the energy-minimizing cost function of Tikhonov are sparsity-maximizing cost functions. Since natural images may be sparsely represented in an alternative basis, e.g. the wavelet basis, a cost function that promotes sparsity may reduce reconstruction errors. Denoting the transformation matrix by Φ, one wishes that Φu be sparse, i.e. that most of its entries be approximately zero. In practice, sparsity is often enforced by using the L 1 norm because of its compatibility of optimization algorithms [16][17][18][19][20][21]. Accordingly, the inversion is performed by solving the following optimization problem: where μ > 0 is the regularization parameter, controlling the tradeoff between sparsity and signal fidelity. When over regularization is performed, compression artifacts may appear in the reconstructed image. Sparsity may be enforced not only on alternative representations of the image, but also on image variations. The discrete TV cost function approximates the l 1 norm on the image gradient, and is given by where u x y n , is the nth entry of the vector u, and the subtraction of "1" to x or y in the subscript corresponds to an entry of u that is respectively shifted in relation to u x y n , by one pixel in the x or y direction of the 2D image. The inversion using TV is thus given by [22][23][24] = + u p Mu u * arg min 2 2 TV (10) where α > 0 is the regularization parameter. In the case of TV minimization, over-regularization may lead to cartoon-like images and rounding of complex boundaries into convex shapes. In some cases, it is beneficial to promote both sparsity of the image in an alternative basis and TV minimization. In such cases, the optimization problem is as follows [15]: We will refer to the regularization described in Eq. (11), as TV-L 1 regularization.
The functional
We would like to use a regularizer that is adapted to the image in such a way that it regularizes more along edges (level-lines of the image) and less across edges (in the direction of the gradient). This idea has been introduced for nonlinear scale-space flows by Weickert [34] in the anisotropic diffusion formulation. However there is no known functional associated with anisotropic diffusion, and it is therefore not trivial to include an anisotropic-diffusion operation in our inverseproblem formulation. A more recent study of Grasmair et al. [35] uses a similar adaptive scheme within a TV-type formulation. In the study of [26], a comprehensive theoretical and numerical analysis was performed for adaptive-anisotropic TV (A 2 TV). It was shown that stable structures can be non-convex and in addition can have high curvature on the boundaries. Illustrations of the stable sets characterize the TV and A 2 TV regularizers are shown in Fig. 2. The degree of anisotropy directly controls the degree of allowed nonconvexity and the upper bound on the curvature. We adopt the formulation of [26], in which the mathematical underpinnings of A 2 TV are described in detail.
Let the A 2 TV functional be defined by
Fig. 2.
An illustration of sets which are stable for TV and A 2 TV -notice A 2 TV admits non-convex and highly curved functions, including ones which resemble arteries.
is a tensor or a spatially adaptive matrix and ∇ A = A(x)∇ is an "adaptive gradient". This functional is convex and can be optimized by standard convex solvers given the tensor A(x). We now turn to the issue of how this tensor is constructed to allow a good regularization of vessel-like structures.
Constructing the tensor A(x)
We assume to have a rough approximation of the image to be reconstructed. This can be done, for instance, by having an initial leastsquare non-regularized approximation or a standard TV reconstruction, as in Eq. (10). We thus have an initial estimation u 0 . The tensor A(x) determines the principle and the secondary directions of the regularization at each point and their magnitude. The construction of A(x) is performed using u 0 according to the following principles: In regions in which u 0 is relatively flat, i.e. ∇u 0 almost vanishes, A(x) should resemble the identity matrix and have no preferred direction, thus leading to the conventional TV regularizer. In regions with dominant edges, A(x) should capture the principle axes of the edge. See Fig. 3
for an illustration of A(x).
Mathematically, the tensor A(x) is defined by adapting the eigenvalues of a smoothed structure tensor of a smoothed image u 0;σ (with a Gaussian kernel of standard deviation σ), defined by, where κ ρ is a Gaussian kernel with a standard deviation of ρ, * denotes an element-wise convolution and ⊗ denotes an outer product. The and .
The structure tensor matrix has eigenvectors corresponding to the direction of the gradient and tangent at each point x; and eigenvalues corresponding to the magnitude of each direction. In order to preserve structure, we should change the relation between those eigenvalues so that for flat-like areas in the image we will smooth the image in an isotropic way, while for edge-like areas, we will perform more smoothing in the tangent direction rather the gradient one. For 2D, we begin by looking in the eigen-decomposition of the structure-tensor, Where V is a matrix whose columns, v v , 1 2 2 , are the eigenvectors of , and D is a diagonal matrix with eigenvalues in the diagonal, Assuming μ 1 ≥ μ 2 . The spatially adaptive matrix A(x), used in Eq. (12), is constructed from a modification of the eigenvalue matrix, denoted by D, as follows [34]: where D is given by In Eq. (19), μ 1,avg is the average value of μ 1 across the image and c(· ; ·) is a function of two parameters defined as follows: (20) where the chosen values for the parameter are the ones recommended in Ref. [34]: c m = 3.31488 and m = 4. The parameter k ≤ 1 determines which regions in the image will be regularized anisotropically and is chosen based on the desired level of anisotropy in the reconstructed image, as shown in Sections 4 and 5. In pixels in which s ≪ k, i.e. μ 1 / μ 1,avg ≪ k, we will obtain c ≈ 1, and D will be reduced to the unitary matrix. Accordingly, for regions in which the image gradient is sufficiently small from the average image gradient, as regulated by k, the A 2 TV functional (Eq. (12)) is reduced to the standard, isotropic TV functional (Eq. (9)). In the rest of the image, where c is sufficiently smaller than 1, regularization is performed more strongly in the direction of the eigenvector v 2 , i.e. the direction in which the image gradient is smaller, thus enhancing the anisotropy in those regions.
It is worth noting that in the 3D case, assuming μ 1 ≥ μ 2 ≥ μ 3 , the only modification to the analysis above is that D accepts the following form: The structure of D in Eq. (21) will be highly anisotropic for tube-like structures and will enforce variation-reducing regularization along the structure length (eigenvector v 3 ), while maintaining low regularization in the cross-section plane (eigenvectors v 1 and v 2 ).
Reconstruction based on A 2 TV
The reconstruction based on the A 2 TV minimizes the following functional: where M is the model matrix and p is the acoustic pressure wave. The solution of which is done by the modified Chambolle-Pock projection algorithm [36] described in Appendix A.
In the process of minimization, the tensor A is initialized as the identity matrix for all x 2 , which reduces the A 2 TV energy to the TV one as it performs the diffusion isotropically. The tensor A is then updated according to the initial solution u 0 . This is repeated until numerical convergence is reached.
We note that while the energy of the A 2 TV is convex for a fixed tensor A(x), it is not convex when A(x) is adaptive and depends on the imaged object. Thereby, we do not have a mathematical proof of convergence. Nonetheless, it has been shown both in our work [26] and in Refs. [34,35] that heuristically, both the image u and the tensor A(x) converge.
Numerical simulations
In this section, we demonstrate the performance of A 2 TV-based inversion for the circular detection geometry illustrated in Fig. 4. The simulations were performed on a 2D vascular image of a mouse retina, obtained via confocal microscopy. The vasculature image, shown in Fig. 5a, was represented over a square grid with a size of 256 × 256 pixels with pixel size of 0.1 × 0.1 mm. The projections were simulated over a 270-degree semi-circle with a radius of 4 cm that surrounded the object, in accordance with conventional OAT systems [4]. A magnification of four square regions of the image is shown in Fig. 5b.
The reconstructions were performed using the conventional L 2based regularization-free approach (Eq. (5)) performed via LSQR, TV-L 1 regularization (Eq. (11)), and the proposed A 2 TV regularization (Eq. (22)) method. Since the scaling of the model matrix M (Eq. (4)) depends on the exact implementation of its construction [32], we normalized M by M M 1 160 1 to assure that the regularization parameters are independent of the scaling of M. To assess the quality of the reconstructions, we used the mean absolute distance (MAD) given by the following equation: where N is the number of pixels in the image. Two cases were tested: In the first case, zero-mean Gaussian noise with a standard deviation of 0.6 times the maximum value of p was added to the projection. The number of projections was chosen to be 256, corresponding to the geometry found in state-of-the-art optoacoustic systems [37] and sufficient for the accurate reconstruction of the tested image in the noiseless case. In the second case, the number of projections was reduced to 32, which is half the number of projections used in low-end optoacoustic systems characterized by reduced lateral resolution [37]. Accordingly, 32 projections are insufficient for producing detailed optoacoustic images using conventional reconstruction techniques. In all the examples, the number of iterations was chosen to be sufficiently high to achieve convergence.
Figs. 6 and 7 respectively show the reconstructions obtained using TV-L 1 and A 2 TV, performed with 3000 iterations, for the cases of additive Gaussian noise. In both figures, 9 reconstructions are shown, corresponding to a scan in the regularization parameters. In the case of TV-L 1 , μ and α represent the strength of the L 1 and TV regularization terms Eq. (11), whereas in the case of A 2 TV, λ represents the strength of the fidelity term in Eq. (22) with respect to the A 2 TV term and k determines the strength of the anisotropy, where lower values of k correspond to higher anisotropy. In the A 2 TV reconstructions, the standard deviations of the smoothing kernels (Eq. (14)) were σ = 1.5 pixels and ρ = 3 pixels. For all the reconstructions, the MAD values appear on the top-right corner of the image. In Fig. 8, we show a comparison between the regularization-free LSQR reconstruction (Fig. 8a) Fig. 5a for the case of additive Gaussian noise using different parameters for the TV-L 1 case. The reconstructions were performed with 3000 iterations. Fig. 5a for the case of additive Gaussian noise using different parameters for the A 2 TV case. The reconstructions were performed with 3000 iterations. ( Fig. 8b) and A 2 TV (Fig. 8c) of Figs. 6d and 7e, respectively, which correspond to the regularization parameters that achieved the lowest MAD values. The middle panel of the figure (Fig. 8d-f) shows a magnification of 4 patches taken from the reconstructions, whereas the bottom panel (Fig. 8g) presents a 1D slice taken over the yellow line in Fig. 8b and c. While both TV-L 1 and A 2 TV significantly improved the reconstruction quality, in A 2 TV more of the noise-induced texture between the blood vessels could be removed without damaging the structure of the blood vessels, thus leading to a lower MAD.
The reconstructions for the case of 32 projections are presented in a similar manner to the case of noisy data. Fig. 9, obtained with 1000 iterations, and Fig. 10, obtained with 1500 iterations, respectively show the TV-L 1 and A 2 TV reconstructions for a scan regularization parameters, whereas Fig. 11 shows a comparison between the LSQR and TV-L 1 and A 2 TV reconstructions that achieved the lowest MAD (Figs. 9d and 10e, respectively). All the A 2 TV reconstructions were obtained with σ = 1.5 pixels and ρ = 1 pixel. Fig. 11 shows that both TV-L 1 and A 2 TV eliminated the streak artifacts that appeared with in the LSQR reconstruction, where the lowest MAD was achieved by the TV-L 1 reconstruction. Since both regularization methods eliminated the streak artifacts, the lower MAD achieved by TV-L 1 is a result of its ability to better preserve texture within the blood vessels, whereas in the A 2 TV much of the blood-vessel texture was lost. Indeed, when examining the 1D slice in Fig. 11g, it is easy to see that the TV-L 1 reconstruction captures the variations within the blood vessels better, whereas in the A 2 TV reconstruction, these variations are smoothed.
In both the cases studied, A 2 TV exhibited a higher capability than TV-L 1 to perform regularization without harming non-convex structures. Even in the case in which TV-L 1 achieved a lower MAD, the higher ability of A 2 TV to preserve the fine details of the blood vessel morphology can be observed when comparing Fig. 11e and f. Additionally, in all the examples, one can observe that when TV-L 1 was performed with a high level of TV regularization (bottom row in Figs. 6 and 9) significant smearing of the blood vessels was observed. In contrast, in the A 2 TV reconstructions, the smearing owing to over-regularization (low values of λ) could be diminished by increasing the anisotropy in the regularization, i.e. reducing the value of k. For the lowest values of k, higher levels of regularization (low values of λ) created undesirable anisotropic vessel-like texture in the reconstructed images.
Experimental results
To further validate the suitability of A 2 TV regularization for OAT image reconstruction, we tested its performance on experimental data. The optoacoustic setup comprised an optical parametric oscillator (OPO), which produced nanosecond optical pulses with an energy 30 mJ at a repetition rate of 100 Hz and at a wavelength of λ = 680 nm. (SpitLight DPSS 100 OPO, InnoLas Laser GmbH, Krailling, Germany). The OPO pulses were delivered to the imaged object using a fiber bundle (CeramOptec GmbH, Bonn, Germany). Ultrasound detection was performed by a 256-element annular array (Imasonic SAS, Voray sur l'Ognon, France) with a radius of 4 cm, and an angular coverage of 270 degrees, comparable to the geometry shown in Fig. 4. The ultrasound detectors were cylindrically focused to a plane, approximating a 2D imaging scenario.
The imaged object was a transparent agar phantom which contained four intersecting hairs. A photo of the phantom is shown in Fig. 12a, where Fig. 12b shows 4 magnified parts of the phantom. The phantom preparation involved mixing 1.3% (by weight) agar powder (Sigma-Aldrich, St. Louis, MO) in boiling water and pouring the solution in a cylindrical mold until solidification. To assure that all four hair strands lie approximately in the same plane, we first prepared a clear cylindrical agar phantom, on which the hairs were placed; additional agar solution was then poured on the structure to seal the hairs.
The optoacoustic image was reconstructed from the measured data using TV-L 1 regularization with 3000 iterations and A 2 TV regularization with 6000 iterations, σ = 1.5 pixels, and ρ = 1 pixel. Figs. 13 and 14 respectively show the images obtained via TV-L 1 and A 2 TV reconstructions for a set of regularization parameters. As in the previous section, over-regularization in the TV-L 1 led to significant loss of structure, whereas for the A 2 TV the ability to capture the image morphology under strong regularization (low λ) was improved when the anisotropy was increased via low values of k. Fig. 15 compares between the unregularized LSQR reconstruction (Fig. 15a) and the TV-L 1 (Fig. 15b) and A 2 TV (Fig. 15c) reconstructions, respectively taken from Figs. 13e and 14e . The second row in the figure (Fig. 15d-f) shows a magnification of 4 patches from the three reconstructions of the top row (Fig. 15a-c), whereas the bottom panel (Fig. 15g) shows a 1D slice from the vertical yellow line in Fig. 15b and c. To allow an easy comparison, the 1D slices were normalized by their maximum values. We note that the negative values in the reconstruction are a common result of the limited detection bandwidth of the ultrasound detector [6]. The figure clearly shows that the A 2 TV reconstruction obtained the highest image quality, in particular for the weak hair structure at the image bottom. In particular, the 1D slice shows that the bottom hairs that appear around the position of 1 cm achieve a peak-to-peak signal over 4-times higher in the A 2 TV reconstruction than in the TV-L 1 reconstruction.
Discussion
In this work, a novel regularization framework was developed for OAT image reconstruction. The new framework is based on an A 2 TV cost function, which represents a generalization of the conventional TV functional, that is compatible with objects that possess complex, nonconvex boundaries. In contrast to TV, the A 2 TV cost function is an adaptive functional whose form depends on the characteristics of the image. When using A 2 TV, one first roughly defines the boundaries of the objects in the image. Then, one uses these boundaries to determine the directions in which the gradients are applied on the image. Similar to TV, A 2 TV is most appropriate as a regularizer in optoacoustic image reconstruction when the images are comprised of objects with welldefined boundaries. However, A 2 TV is useful also when these boundaries are incompatible with TV regularization due to their complexity. A common category of optoacoustic images that fits the above description is blood-vessel images. Since blood is a major source of contrast in optoacoustic imaging, OAT systems often produce images that are dominated by a complex structure of interwoven blood vessels.
In particular, high-resolution images of the micro-vasculature are characterized by a complex network of arterioles, venues, and capillaries with extremely complex, nonconvex boundaries. In our current implementation, A 2 TV required setting 4 parameters: σ, ρ, λ, and k. The first two parameters, σ and ρ, determined the image smoothing used in calculating the image gradients. While smoothing reduces the noise, thus limiting the effect of reconstruction errors on the detection of the principle axes, only moderate smoothing may be used without the risk of merging the boundaries of different objects in the image. Therefore, in all our examples, the smoothing was performed with Gaussian kernels whose standard deviations, σ and ρ, were 3 pixels or less. While the choice of σ and ρ depended on the level of noise and artifacts in the regularization-free reconstruction, the parameters λ and k were determined by the amount of regularization desired in the reconstructed image, where k determined the amount of anisotropy and λ determined the strength of the regularizer. In our examples, performing over-regularization (λ = 0.0001) led to image smearing in the isotropic case (k = 1) and vessel-like artifacts in the case of high anisotropy (k = 0.01). While such artifacts are undesirable, it is worth noting that their presence did not obscure the underlying image morphology, whereas over-regularization in TV-L 1 led to loss of image details.
We compared the performance of the A 2 TV algorithm TV-L 1 algorithm for both numerically simulated data and experimental data. In the numerical simulations, an image of blood vessels was reconstructed for the cases additive noise and sparse sampling of the projection data. In the experimental reconstructions, the imaged object was four intersecting hair strands whose structure emulated the morphology of blood vessels. In both the numerical and experimental examples, A 2 TV demonstrated a higher ability to preserve the blood-vessel morphology for high regularization parameters. Nonetheless, in the numerical example in which the reconstructions were performed with a low number of projections, TV-L 1 regularization achieved a lower MAD owing to the loss of texture in the A 2 TV reconstruction. In the experimental example, A 2 TV led to a considerable improvement in the contrast in the weak structures of the image in comparison to the TV-L 1 reconstruction.
The reconstruction performance demonstrated in this work suggests that A 2 TV may be a useful tool for improving the ability of optoacoustic systems to perform vasculature imaging, which is a major application in the field. Since the texture within the blood vessels is affected by the random distribution of the red-blood cells within the blood vessels, its elimination by A 2 TV may be considered as an acceptable price for better visualization of the blood-vessel morphology. In deep-tissue OAT systems, sub-millimeter vasculature imaging has been suggested as a potential diagnostic tool and has been demonstrated in the human extremities [38,39] and breast [40]. We note that while some OAT systems can also produce images of the tissue bulk, characterized by low frequencies and representative of the density of the microvasculature and fluence map, such systems require transducers capable of detecting ultrasound frequencies considerably below 1 MHz [6]. In high-resolution OAT systems, operating at frequencies above 1 MHz and capable of reaching resolutions better than 100 μm [41,42], only signals from blood vessels may be detected. Finally, when performing optoacoustic imaging at resolutions better than 10 μm, e.g. using raster-scan optoacoustic mesoscopy (RSOM) [43], the image is dominated by the microvasculare and generally lacks any bulk component associated with blood.
We note that the formalism of A 2 TV, in which the structure tensor matrix analysis is performed via eigenvalue decomposition Eq. (13) enables its adaptation to higher image dimensions. TV regularization has been recently performed in 4D optoacoustic reconstruction that included 3 spatial dimensions and time [44]. Since the variation of the pixels in time is generally different than the one space, it may be Fig. 12a for the case of experimental data using different parameters for the A 2 TV case. The reconstructions were performed with 6000 iterations.
S. Biton, et al.
Photoacoustics 16 (2019) 100142 expected that A 2 TV can further improve image fidelity in such cases.
Conflicts of interest
The authors declare that there are no conflicts of interest related to this article.
Acknowledgement
We are thankful to Anna M. Randi for providing information on the image used in Section 4. This work was supported by the Technion Ollendorff Minerva Center. Where M is the model matrix, p is the acoustic pressure wave and u* is the reconstructed image. The algorithm used for inversion is Chambolle-Pock, which solves the following primal form [36]: Fig. 12a for the case of experimental data using (a) LSQR, (b) TV-L 1 , and (c) A 2 TV. The TV-L 1 and A 2 TV reconstructions were produced using 3000 and 6000 iterations, respectively. (g) A 1D slice of the TV-L 1 and A 2 TV reconstructions, corresponding to the vertical lines on the top panel. | 8,173.2 | 2019-08-07T00:00:00.000 | [
"Mathematics"
] |
Emerging Trends in Employee Retention Strategies in a Globalizing Economy : Nigeria in Focus
This study explores the emerging trends in employee retention strategies in a globalizing economy, with a focus on Nigeria. The paper argues that globalization has enhanced the mobility of labor, and has also accelerated the rate of employee turnover in organizations in Nigeria. The paper identifies some of the reasons for turnover to include inequity in the compensation packages of organizations, employees’ dissatisfaction and autocratic managerial pattern in most organizations in Nigeria. It further identifies the effects of turnover to include disruption in production, cost of training new employees, the recruitment and selection cost and knowledge lost. As a panacea to minimize the rate of employee turnover and catch up with the current demands of global economic needs and organizational performance, the study proposes that organizations in Nigeria should adopt critical sustainable retention trends such as establishing a strategic plan, involving employees in decision-making process, initiating personalized compensation plan, installing mechanisms for career planning, training and development and building flexible work programs especially for critical knowledge employees. These will help to retain core employees that will competitively drive the production wheel in the organizations in Nigeria in this era of globalization.
Introduction
The emerging trend in today's fast-changing competitive business environment occasioned by globalization has presented evident challenges before the Human Resources professionals.With increasing globalization, there have been enormous and far reaching changes in global organizations.These changes are the result of fierce international competitive pressure faced by enterprises operating in the global place encircled with knowledgedriven productive economy (Wokoma and Iheriohanma, 2010).
The new demands of international competitive and dramatic advances in information and communications technologies (ICT), and new patterns of consumer demands for goods and services have propelled organizations across the world to change substantially and adopt new methods of production and organization of work.This situation has tremendously enhanced the mobility of individuals, thereby accelerating the rate of employee turnover in organizations.As a result, the recruitment of competent personnel equipped with the requisite knowledge has increasingly been difficult in Nigeria.
Employee commitment, productivity and retention issues are emerging as the most critical challenge on the management of workforce in the immediate future.This challenge is driven by the concerns of employee loyalty, corporate restructuring efforts and tight competition for key talents (Kresiman, 2002).For many firms, employee departures can have significant effects on the execution of business plans and may eventually cause a parallel decline in productivity.This phenomenon is especially true in the light of current economic uncertainty and following corporate downsizing, as occasioned by outsourcing and other intricate production dictates.The impact of losing critical employees increases exponentially (Noer, 1993;Ambrose, 1996;Caplan and Teese, 1997).This is so because every economy relies on the capacity and knowledge -competence of its human resource for economic development.Hence, human resource is the greatest asset of any organization.
Globally, the retention of skilled employees has been a serious concern to management.The desired critical measures for retention of employees have therefore become strategic to sustainable competition among organizations in a globalizing economy such as Nigeria.This development has dramatically changed human resource practice in the area of attracting skilled employees into organizations, and most importantly is the strategy for retaining them (Samuel, 2008;Nwokocha, 2012).
Employee retention connotes the means, plan or set of decision-making behavior put in place by organizations to retain their competent workforce for performance (Gberevbie, 2008).There have been many human resource strategies provided to retain employees for the advantage of the organizations.These strategies are aimed at avoiding employee turnover.Mobley (1982) defines turnover as the cessation of membership in an organization by individuals who have received monetary compensation from the organization.
Organizations rely on the expertise, knowledge, skills, and capital resource and capacity development of their employees in order to compete favorably and indeed gain competitive advantage in the international market.However, recent studies have shown that retention of highly skilled employees has become a difficult task for managers, as this category of employees are being attracted by more than one organization at a time with various kinds of incentives (Micheal, 2009).This behooves on management to create an enabling and sustainable critical culture and strategies to work out retention systems and structures for their existing core employees in these contemporary organizations.This is pertinent because according to Czkan (2005), the motivational strategies used to attain retention in the past are or may no longer be appropriate to motivate critically talented and mobile employees to remain, thereby increasing the rate of turnover.
It is against this backdrop that this paper intends:
(a) To examine the traditional employee retention strategies in organizations, especially in Nigeria, (b) To illuminate on the factors responsible for an observed rising rate of turnover in organizations in Nigeria, (c) To highlight the effects of turnover on organizations, (d) To explore the emerging trends in employee retention strategies that will be sustainable, especially in organizations in Nigeria.
This article benefits copiously from library research, informal discussions as well as personal observations of the authors.It is essentially an explorative discourse.
An Overview of the Traditional Employee Retention Strategies in Organizations in Nigeria
The intent of this sub section is to draw an insight on some of the traditional employee strategies currently being employed by most organizations in Nigeria.
Job Satisfaction
Job satisfaction is a general attitude toward an individual's current job.This encompasses the feelings, beliefs and thoughts about the job.Riggio (2003) describes job satisfaction as consisting of the feelings and attitudes one has about one's job.This includes all aspects of a particular job, good and bad, positive and negative, which are likely to contribute to the development of feeling of satisfaction or dissatisfaction or turnover intentions.This conforms to the views of Kim, Leong, and Lee (2005) and Scherman, Alper, and Wolfson (2006).They agreed that job satisfaction entails what employees' feel and perceive about their jobs and what their experiences on work are.Yang (2009) described job satisfaction as the agreeable emotional condition resulting from the assessment of one's job as attaining or facilitating the accomplishment of one's job values.
Job satisfaction can be influenced by a variety of factors, such as pay practice, quality of workers' relationship with their supervisor, and quality of the physical environment in which they work (Hamdia and Phadett, 2011).Job satisfaction and turnover are basically related to the extent that job satisfaction has direct effect on employee retention and turnover.Al-Hussami (2008) affirmed that if employees are more satisfied with their job, it will enhance their ability of creativity and productivity.This will in turn impart on their intention to remain in the organization.This simply suggests that employees who are satisfied with their jobs are likely to remain with the organization longer than those who are dissatisfied with their jobs.It also implies that employee retention can be achieved and turnover minimized if management is able to identify and apply appropriate variables that will create job satisfaction amongst employees.
Training
Training is referred to as a planned effort to facilitate the learning of job-related knowledge, skills and behavior by employee (Noe, Holleneck, Gerhart, and Wright, 2006).Wan (2007) posits that the only strategy for organizations to radically improve workforce productivity and enhance their retention is to seek to optimize their workforce through comprehensive training and development.To achieve this purpose, organizations will have to invest on their employees to acquire the requisite knowledge, skills and competencies that will enable them function effectively in a rapidly changing and complex work environment.Batt (2002) argues that high-involvement practices such as autonomy, team collaboration, and training are related to reduce employee turnover and increase productivity.
Employees consider training, education and development as crucial to their overall career growth and goal attainment and will be motivated to remain and build a career path in an organization that offers them such opportunity (Samuel, 2008).A study by Babakus, Yavas, Karatepe and Avci (2003), reports that an organization that provides training sends a strong signal to its employees regarding management commitment to their retention and customer service.The study by Steel, Griffeth, and Hom (2002) reveals that empirical data show that lack of training and promotional opportunities were the most frequently cited reasons for high performers to leave the company.Also, the study by Bradley, Petrescu and Simmons (2004) reports that an increase in high-performance work practices is as a result of training which is converted to a decrease in employee turnover in organization.This implies that when an organization provides training to its employees, it will, to a large extent, reduce turnover and enhance employee retention.
Reward Strategy
According to Agarwal (1998), reward is defined as something that an organization gives to the employees in response to their contributions and performance and also something which is desired by the employees.A reward can be extrinsic or intrinsic.The extrinsic variables include company policies, co-workers relationship, supervisory styles, salary, work conditions and security.The intrinsic variables include achievement, recognition, work itself, responsibility, advancement and growth (Bassett-Jones and LIoyed, 2005).Reward can be in form of cash, bonuses, and recognition amidst others.
The purpose of reward strategy is to develop policies and practices which will attract, retain and motivate high quality people (Armstrong, 2003).The result by Taplin, Winterton, and Winterton (2003), confirmed that rewards, as provided by organizations, have positive relationship with job satisfaction and employee retention.This simply suggests that a high level of pay or benefits relative to that of competitors can ensure that an organization attracts and retains high quality employees.
Supervisory Support
The immediate supervisor is very important in organizational change.When a supervisor provides mentoring, the relationship affects the protégés skill development and intentions to remain with the employer (Atif, Kashif, Ijaz, Muhammad and Asad, 2011).When an employee's skill improves, it will positively affect productivity in organization.Conversely, non-supervisory mentor may increase mentee's confidence by providing access to outside organization (Scanduraa and Williams, 2004).A study by Karasek and Theorell (1990) reveals that poor supervision not only caused the dissatisfaction of employees with their work, but also instigated turnover.Keashly and Jagatic (2002) opine that poor supervision leads to dissatisfaction of employees hence the propensity for turnover.Harmon, Scott, Behson, Farias, Petzel, Neuman and Keashly (2007) in their work, argue that the control work practices which are supervision -oriented and supportive significantly correlated with increased job satisfaction and lower turnover rates among the workers.
Literature that supports social and organizational culture indicates that whenever a subordinate is properly supported by supervisor, this will generate positive outcomes both for the organization and the employee (Shanock and Eisenberger, 2006).Simth (2005), in his contribution posits that this is also beneficial for supervisor, because the more competent and more supportive the supervisor is, the more likely the employees and supervisors retain their jobs.He further states that supportive supervision enhances impact on both organizational commitment and job retention.This will in turn impart on productivity in the organization.
Reasons for Employee Turnover in Organizations
The phenomenon of turnover is of interest to organizations and theorists because it is significant, potentially costly and relatively clear cut (Mobley, 1977;Price, 1977;Lazear, 2000).Employee turnover is defined as the rotation of workers around the labour market; between the status of employment and unemployment (Abassi and Hollman, 2000).
Turnover in organizations has not so far proved amenable to prediction.Despite an enormous literature on turnover in organizations, the concept has no universally accepted reason or framework for why employees choose to leave their organizations.However, employee turnover has been classified into two categories: voluntary and involuntary turnover.Voluntary turnover takes place when competent and capable employees particularly leave an organization to work elsewhere.This turnover is costly to the organization, because losing a valued employee reduces organizational productivity, increases expenses associated with recruitment, hiring and training a replacement and also provides an opportunity to competitors to utilize the skills, abilities and critical knowledge of an experienced and competent employee (www.psychologyybhu.blogspot.com/employee-retention).Involuntary turnover occurs when an employee is fired or laid off.A certain amount of involuntary turnover is likely to be considered inevitable, and possibly even beneficial.For instance, firing workers who are not performing at desirable levels can be viewed as positive (Mobley, 1982).This type of turnover enhances the effectiveness of the organization.
Organizational researchers have advanced many factors as being responsible for employee turnover.Sherratt (2000) and Van Vianen, Feji, Krauz, and Taris (2004) have distinguished two motives for turnover; the push and pull motives.The pull motives include inequity in compensation of an organization, the availability of opportunities to improve one's career opportunities on the external labor market and resignation by employees from organization to go into private business.The push motives are related to dissatisfaction with employee's current work situation, autocratic managerial patterns and job stress.Sometimes, it could be the combination of the two motives that propel an employee to seek for an alternative employment.
Griffieth, Hom and Gaertner (2000) posited that pay and pay-related variables have a modest effect on turnover.Their analysis also included studies that examined the relationship between pay, a person's performance and turnover.They concluded that when high performers are insufficiently rewarded, they quit.It suggests that if a job provides adequate financial incentives, the more likely employees remain with organizations and vice versa.In the views of Abassi et.al (2000), poor hiring practices, managerial style, lack of recognition, lack of competitive system in the organizations and toxic workplace environment also account for employee turnover in organizations.It is evident from this review, that many factors are responsible for turnover in organizations.It further suggests an economic indicator for stiff competition, which organizations need to change in their corporate strategies to retain their talented employees.In other words, employers need to understand their rates of labor turnover and how they affect the organizations' performance.An appreciation of the levels of turnover across occupations, locations and particular groups of employees can help inform a comprehensive resourcing strategy (www.cipd.co.uk/hr-topics/retention-turnover).
Effects of Turnover on Organizations
Employee turnover is costly and seemingly intractable human resource challenges confronting several organizations globally.The major factor of employee turnover that impinges on organizations is the costs.These costs include search of external labor market for possible substitute, selection between competing substitutes, induction of the chosen substitute, and formal and informal training of the substitute until he attains performance levels equivalent to the individual who quits (John, 2000).There are also indirect costs that are also involved when an employee leaves the organization.These, according to Sutherland (2004), include the knowledge, skills and contacts that the departing employee takes out of the organization.Gaia and Christopher (2007), posit that turnover affects both employees and organizations.Workers experience disruption, the need to learn new job-specific skills and find different career prospects.From organizational perspective, organization suffers the loss of job-specific skills, disruption in production and incurs costs of hiring and training new workers.All these affect the profitability of the organization.
Researchers have argued that high turnover rates might have negative effects on the profitability of organizations if not managed properly (Hogan, 1992;Wasmuth and Davis, 1993;and Barrows, 1990).Turnover also makes it difficult for organizations to maintain a steady and successful operation.Research estimates that hiring and training a replacement worker for a lost employee costs approximately 50 percent of the worker's annual salary (Johnson, Griffeth and Griffin, 2000;and Susan, 2011).Each time an employee leaves the organization, productivity drops due to the learning curve involved in understanding the job and the organization.Also, the loss of critical and irreplaceable intellectual capital adds to these costs, since not only do organizations lose the human capital and relational capital of the departing employee, but other competitors are potentially gaining these critical assets (Meaghan and Nick, 2002).It is therefore suggested that since turnover is an index of organizational effectiveness, it then requires the attention and comprehensive understanding of information on turnover.This will be relevant for planning, prediction and control of resources for organizational managers to check mate the effects associated with turnover in the organization, especially in Nigeria.
Emerging Trends in Employee Retention Strategies
The fierce competition globally for qualified workforce has made it pertinent for organizations to radically alter and initiate new workplace trends that will provide for sustainable and attractive retention strategies for their critically talented employees.This is so, because as business growth continues to move to the forefront, people issues are becoming even more critical as organizations seek for skilled people to handle the growth.Nigeria, being part of the global environment is not excluded in this quest.Hence, this section seeks to explore the emerging trends in employee retention strategies that will be sustainable, with specific focus on Nigeria.This will be addressed on the following sub headings: Establishment of strategic retention plan
Establishment of Strategic Retention Plan
For organizations to compete favorably in this business world that is characterized by increased global competition and tensed business area, it is imperative that management of organizations in Nigeria should design strategic retention programs that will align and integrate their choice employees into the organization.This should be done by aligning the organization's human capital processes with its overall business strategy.This entails elevating the retention strategies to a more strategic level which in turn yields indisputable business benefits and employee's satisfaction to remain with the organization.In doing this, organizations must regularly analyze the effectiveness of these strategies, making sure that all employees data are captured and aligned.This will help increase the efficiency of the program and also serve as an early warning sign for problem areas (www.ey.com).
Participatory Decision-Making Process
The challenging trends in the competitive global economic market and workplace require organizations to involve the participation of workers in the decision-making process of the organizations in order to retain their critical employees and to secure their loyalty, commitment, dedication and ensure their security.This involves the integration of these choice employees in organizational participation, management and administration that will usher in industrial and organizational efficiency and harmony.Iheriohanma (2008) posits that: Workers in Nigeria desire security of their jobs in their workplace.They desire affection and interaction with colleagues, they want to be recognized, assured of their work life.They want to achieve and prove their competence.These and more can be realized if they are informed, accommodated and integrated in the formulation of policies that guide their work processes.
The need to involve critical employees into decision-making process cannot be over emphasized.Nigerian workers are exposed to workplace issues like their counterpart all over the world.This is occasioned by the advent of Information and Communications Technology (ICT) which has made the world a one global village, which has lead to the cross fertilization of ideas amongst the workers worldwide.Participatory management is a power-sharing mechanism under which both managers and workers, in an accommodating, cooperative and complementary manner, do their jobs better.It gives workers some personal voice in the decisions that govern their workplace (Iheriohanma, 2007).The implication here is that, when workers are integrated into the decision-making process of the organization, they will feel valued, accommodated and this will stifle or blur their intentions to leave or quit the organization.
Creating the atmosphere for participatory management also entails initiating a more humane work environment that will appreciate the contributions of the workers in the organization.This requires the displacement of authoritarian management style that will involve the re-orientation of managerial attitude to reflect a transactional culture in workplace.This also encourages the cultivation of unsuspicious partnership and collaborative teamwork with employees that will help to stimulate a pleasant social work environment that will respond expediently to employee needs and complaints.This relaxed social atmosphere includes a friendly and happy environment reminiscent of a family.This underscores the views of Cappeli (2000) and Michell, Holton and Lee (2001), that social friendship at work acts as drivers to enduring employee retention in the organization.
Published by Canadian Center of Science and Education 203
Personalized Compensation Plan
According to Samuel (2008), money acts as a "scorecard" which enables employees to assess the value the organization places on them in comparison to others.In this context, organizations are required to devise sustainable compensation strategies that will cover the broad spectrum of total compensation, not just basic pay and salary, but including performance -based and special recognition programs to its critical employees.The pay should be equitably comparative to the ones prevailing outside and within the industry for similar jobs.In devising this personalized compensation plan for critical employees in the organization, the plan should cover many diverse compensation techniques like competitive salary, project bonus, superannuation and fringe benefits.
In this way, it is believed that employees would be adequately motivated and would resist the temptation of leaving the organization (Davar, 2003).This will in turn, propel the workers into better performance that will enhance productivity in the organization.
Career Planning, Training and Development
Career development is a system which is organized and formalized, and it is a planned effort towards achieving a balance between the individual career needs and the organization's workforce requirement.Opportunities for career development are considered as one of the most important factors affecting employee retention.
It is suggested that a company that wants to strengthen its bond with its employees must invest in the training and development of these employees (Hall and Moss, 1998;Woodruffe, 1999;Steel, Griffeth, and Hom, 2002;Hsu, Jiang, Klein, Tang, 2003).This entails creating opportunities for promotion within the company and also providing opportunities for training and critical skills development that allows employees to improve their employability competitively on the internal and/or external labor market (Butler and Waldrop, 2001).Wan (2007) argues that the only strategy for organizations to radically improve workforce productivity and enhance their retention is to seek to optimize their workforce through comprehensive training and development.To achieve this purpose, organizations will have to invest on their employees to acquire the requisite critical knowledge, skills and competencies that will enable them function effectively in a rapidly changing and complex work environment.This simply suggests that human resource is central to the accomplishment of organizational goals and objectives.Knowledge workers are treated as assets and partners rather than costs.It is therefore, admissible that organizations in Nigeria should adapt to the changes provided by Information and Communications Technology (ICT) in order to compete in this knowledge -based and knowledge -driven economy.This is because, critical ICT-based knowledge acquisition is a veritable tool for socio-economic issues that challenge production and influence behavior in the workplace (Iheriohanma, 2008).Employees consider training, education and development as crucial to their overall career growth, development and goal attainment and will be motivated to remain and build a career path in an organization that offers them such opportunities (Samuel, 2008).Most organizations lose their talented workers through inefficient career planning.By failing to focus on talent management programs specifically on core employees, organizations stand the risk of losing valuable skills to competitors and waste their investment.
Instituting mechanisms for career planning, training and development of human capital in the organization is therefore suggested.There is also the need for a vibrant and resource -oriented Human Resource manager with a well equipped personnel department.This will lay the foundation for new skills that will ensure that competitive advantage is gained through people and this will result to increased corporate productivity in this globalised economy.
Creation of Work Flexibility and Outsourcing Strategy
Creation of work flexibility entails work-life balance in the organization.Work-life balance is an efficient tool by which every employee is given an opportunity to choose time out during work hours.It is a policy that defines how the organization intends to allow what they do at work to align with the responsibilities and interests they have outside (Armstrong, 2003).Work-life balance is necessary because the current employees attach much importance to quality of life due to the ever increasing work pressure (Cappelli, 2001;Michell, Holton, and Lee, 2001).In the same vein, outsourcing provides organizations the challenging opportunity to fan out jobs to specialist firms and contractors at little or no cost and burden thereby creating enough space and time for their employees to concentrate on the ones they have competence and comparative advantage on.By applying work-life course of actions and outsourcing, an organization can enhance its ability to respond to demands of customers for better access to services and provide the tactics for the organizations to deal with the revolutionized way in order to satisfy both employees and employers (Manfredi and Holliday, 2004).The application of work flexibility and outsourcing in organizations, especially complex organizations and multinationals, will impact on employee retention and minimize rate of turnover, especially in this knowledge -driven economy.This is because there is greater organizational commitment, harmony and improved productivity if employees are accorded access to work-life balance.
The adoption of the afore-mentioned sustainable strategies will provide a roadmap for balancing the needs of the organizations with those of its knowledge -employees as well as address the human capital issues that will challenge the competitiveness in this globalized society.Organizations in Nigeria will benefit copiously if they adopt these critical strategies to encourage commitment and security of their employees.The availability, use and pervasiveness of ICT in work life make the adoption of the above strategies imperative to avert employee turnover, encourage retention of, especially high ability and knowledge -workers and to improve productivity.
Conclusion
This study revealed the need for sustainable retention strategies in organizations in Nigeria.It took into consideration the competitive business environment that is occasioned by globalization.This is inferred from the effects associated with employee turnover in organizations, which express the inadequacies in the traditional retention strategies in organizations in Nigeria.The study therefore, proposes that organizations in Nigeria should adopt certain critical sustainable trends in employee retention such as, establishment of strategic retention plan, involvement of employees in decision-making process, personalized compensation plan, career planning, training and development and creation of work flexibility and outsourcing.This is pertinent if these organizations want to catch up with the current demands of global economic needs which require the use of talented workforce to drive the fundamental changes and production processes that are taking place in work organizations globally.
Employee participation in decision-making Personalized compensation plan Career planning, training and development programs Creation of work flexibility and outsourcing strategy | 5,996.2 | 2012-07-29T00:00:00.000 | [
"Business",
"Economics"
] |
A New Inversion-Free Iterative Scheme to Compute Maximal and Minimal Solutions of a Nonlinear Matrix Equation
: The goal of this article is to investigate a new solver in the form of an iterative method to solve X + A ∗ X − 1 A = I as an important nonlinear matrix equation (NME), where A , X , I are appropriate matrices. The minimal and maximal solutions of this NME are discussed as Hermitian positive definite (HPD) matrices. The convergence of the scheme is given. Several numerical tests are also provided to support the theoretical discussions.
Problem Statement
In this article, we take into account the nonlinear matrix equation (NME) [1]: where X ∈ C n×n is its Hermitian positive definite (HPD) solution, I is an identity matrix of the appropriate size, A is an n × n invertible real or complex matrix and A * is the conjugate transpose of the matrix A. The existence of the solution for this NME was discussed in [2] and it was found that if one HPD solution was available, then all its Hermitian solutions would be positive definite. This problem for any HPD solution X possesses the minimal solution X S and the maximal solution X L such that X S ≤ X ≤ X L , for more details see [3]. The maximal solution for (1) can be derived via where Y S is the minimal solution of the dual NME as follows: The NME in (1) is an important member of the general NME of the following form: where α, β, γ ≥ 1, A, B are arbitrary n × n matrices [4,5].
In several disciplines such as control theory, stochastic filtering, queuing theory, etc., this type of nonlinear equation typically appears, see [6][7][8] for further discussions. On the other hand, it is always a daunting task to find a positive definite solution (PDS) to an NME [9,10]. This is because most of the existing methods are computationally expensive due to presence of the inverse part in each loop, see the discussions given in [11,12].
Literature
One of the pioneer solvers for computing the maximal solutions of (1) is the iterative expression (FPI) below which is based on a fixed-point strategy [13]: Here, for each computing cycle, one matrix inverse must be computed. The authors in [14] presented another iteration scheme (SM) to compute the minimal solution as follows: This scheme is categorized as a method without the computation of an inverse for each computing step, since A −1 is calculated once only by noticing that The main challenge in employing iterative schemes such as (6) is their slow rate of convergence specially at the initial stage of the process. To overcome this, author in [15] proposed to apply the multiple second-order Newton's scheme for finding the HPD solution at the initial phase of convergence. This was pursued as follows: for any 1 ≤ t ≤ 2.
Another well-known iteration scheme is the method proposed in [16,17], here denoted by EAM, which is as follows: Authors in [18] proposed another iteration scheme as a generalization of the Chebyshev's method (SOM) to solve (1) as follows:
Our Result
Our major result is a new iteration scheme similar to the form of SM for extreme solutions of (1). The method needs only the calculation of one inversion at the initial stage of the process and because of this, is categorized under the inversion-free iterative methods. Under mild conditions, the convergence of the scheme as well as some theoretical investigations to show how the method produces HPD solutions are given. In conclusion, we accelerate the long-standing algorithms for an important problem of numerical linear algebra. The experiments considered in this work reconfirm the competitiveness of the proposed iteration scheme compared to known algorithms. These will make this work interesting both practically and theoretically.
Organization of the Paper
After having a short introductory discussion regarding the issues with iterative methods for NMEs in this section, the remaining parts of this article are structured as follows. In Section 2, the solution of this NME as the zero of a nonlinear operator equation is introduced. In Section 2, several numerical experiments are investigated with implementation details given in Section 3 to confirm its applicability and to compare with several wellknown and state-of-the-art methods from the literature. It is shown by way of illustration that the proposed solution method is useful and provides promising results. Lastly, some concluding remarks are made in Section 4.
An Equivalent NME
Another way to calculate the HPD solutions of (1) is to compute the zero of the NME, P (X ) = 0, where [14]: The relation (10) can be re-written as follows [18]: Clearly we can now impose a transformation as follows to further simplify the nonlinear operator equation: To obtain an iteration scheme to find the minimal HPD solutions, now it is necessary to solve the following problem: where
Our Method
We employ (14) and (15) and view this NME as an inverse-finding problem via the Schulz-type iterative schemes (see, e.g., [19]), and accordingly we propose the following method: The matrix iteration (16) requires to compute A −1 once specially at the initial stage of the implementation and it provides the iteration scheme considered an inversion-free method to solve (1).
Moreover, it is straightforward to state that the zeros of the map G(Z) = L −1 − B are equal to the zeros of the map P (X ) = X −1 − H, wherein H = A − * (I − X )A −1 . To discuss further, we find the inverse of H, which corresponds to the minimal HPD solution of (1).
The derivation of (16) can mainly be viewed as a matrix inverse finder. In fact, the last step of (16) is a method that can be used for finding generalized inverses. It is a member of the family of hyperpower iteration schemes employing several matrix products to compute the inverse of a given matrix. The iterative method can be rearranged to be written in its Horner form in order to reduce the rounding and cancellation errors when facing big matrices.
Theoretical Investigations
Lemma 1. Employing the Hermitian matrix X 0 = AA * , the iteration scheme (16) provides a sequence of Hermitian matrices.
Proof. It is obvious that the seed AA * is Hermitian, and H k = A − * (I − X k )A −1 . Therefore, Using an inductive argument, we have: By considering (X l ) * = X l , (l ≥ k), we then show that: Note that H l = (H l ) * has been employed in (18). The conclusion is valid for any l + 1.
Theorem 1.
Assume that X k and A are invertible matrices. Then the sequence {X k } produced by (16) tends to the minimal solution of the NME (1) by having X 0 = AA * . Proof. We start by denoting that H k = A − * (I − X k )A −1 . Thus, we can write: Then, by using (19) and taking a norm on both sides, one obtains: Therefore, the inequality (20) can lead to convergence as long as the initial approximation reads I − HX 0 < 1. This yields to the following condition: Then, by employing the HPD initial approximation X 0 = AA * , one obtains that which holds as long as X is the minimal HPD solution of (1). Furthermore, due to we obtain that X −1 0 > X −1 S , thus X 0 < X S . Hence, by mathematical induction, it is seen that {X k } converges to X S .
It is required to discuss the rate of convergence. Although the proposed method (16) converges to the HPD solution under mild conditions, its q order of convergence is just linear. As will be seen later, however, due to its structure and a higher rate for computing matrix inverses, it converges faster because it comes to the convergence phase more quickly. This linear rate of convergence can be improved by employing an accelerator such as (7), in the initial phase of convergence.
Experiments
The computational efficiency and convergence behavior of PM (16) for the NME (1) are hereby illustrated on some numerical problems. All the compared methods were written in the programming package Mathematica 12.0 [20].
All computations were performed in standard floating point arithmetic without applying any compilations. In addition, computations were done on a laptop with 16.0 GB of RAM and Intel ® Core™ i7-9750H. The matrix inversions whenever required were done using Mathematica 12.0 built-in commands. We found the number of iterations required to observe convergence. In the code we wrote to implement different schemes, we stopped all the applied schemes when two successive iterations in the infinity norm was less than a tolerance, as follows: Here PM and SOM were employed along with (7) with k = 2 and t = 1.5 iterations and then (16) was used. Without imposing (7) the iteration schemes PM and SOM were faster than the other competitive ones, but the superiority was not tangible. Because of this, a multiple Newton's method was imposed at the beginning of the iterations to accelerate the convergence as much as possible. The impact of the multiple Newton's method is that it is useful when there is a clear smoothness around the zero of the nonlinear operator equation. It helps arrive at the convergence region much faster than simple root finders. Note that PM converges to X S , therefore we employed other methods to compute the maximal solution of the dual problem AX −1 A * + X − I = 0 in our written implementations to have fair comparisons.
Numerical evidence of Example 1 reveals that PM converges much faster than existing solvers for the same purpose. Our method requires fewer numbers of iterations to reach the stopping criterion. In fact, under mild conditions, the error analysis, convergence and stability of the scheme were observed based on the expected norm of the error. Example 2. Using = 10 −8 , we compare different iteration schemes to find the minimal solution of (1) when A is a complex matrix as follows: and the solution is: The numerical evidence is given in Figure 2. It too reveals that the proposed method is efficient in solving NME (1). Note that the CPU time for all methods is low, though PM converges in less than a second for the given examples and performs faster than the other solvers.
Concluding Remarks
We have studied the minimal HPD solution of (1). The proposed solver needs the calculation of one matrix inversion at the initial stage of the process and then it is an inversion-free scheme. In this paper, a novel iterative scheme was developed. Several computational tests were given and found in agreement with the theoretical findings. Lastly, based on the computational evidence, we can conclude that the novel scheme is useful and effective in computing the numerical solutions of a broad range of NMEs.
Conflicts of Interest:
The author declares that he has no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. | 2,634.8 | 2021-11-23T00:00:00.000 | [
"Mathematics"
] |
Structural and Optical Properties of PMMA-MgO Nanocomposite Film
This study investigates the structural, morphological, and optical properties of poly(methyl methacrylate) (PMMA) films incorporated with magnesium oxide (MgO) nanoparticles. The PMMA-MgO polymer nanocomposite (PNC) films were fabricated via solution casting method using varying weight percentages (1-4 wt%) of MgO nanoparticles. X-ray diffraction analysis confirmed the integration of MgO nanofillers in the PMMA matrix. Fourier transform infrared spectroscopy revealed interactions between PMMA and MgO nanoparticles. Atomic force microscopy demonstrated increased surface roughness in PNC films with higher MgO loading. Optical characterization using UV-visible spectroscopy showed enhanced absorption in the UV region and a noticeable peak at 280 nm due to MgO nanoparticles. The refractive index of PMMA-MgO PNCs increased with rising MgO content while the optical bandgap marginally decreased. The study highlights the potential of PMMA-MgO PNC films for advanced optoelectronic applications requiring high optical transparency and tuneable refractive index.
Introduction
The capacity of transparent polymer nanocomposites to display a wide range of optical features, such as tailored emission/absorption characteristics, varied refractive indices (both high and low), and strong nonlinear optical properties, has attracted a lot of attention [1].This heightened interest is rooted in the promising potential for optoelectronic applications [2].Typically, these nanocomposites are crafted by integrating metal oxide nanoparticles (NPs) into a transparent polymer matrix, such as polymethyl methacrylate (PMMA).The polymeric component contributes essential attributes like processability, transparency and flexibility, while the inclusion of NPs imparts desired thermal, electrical and optical properties [3]- [6].Demonstrating their technological and economic advantages, polymers have made significant strides in diverse fields, including automotive, chemical sensors and electronics [7]- [9].The fusion of metal oxide NPs and polymers has become a focal point for scientists, offering a platform to seamlessly combine the unique properties of both components [2].This convergence opens up new avenues for innovative applications and further reinforces the versatile role of transparent polymer nanocomposites in advancing modern materials and technologies [10].1300 (2024) 012020 IOP Publishing doi:10.1088/1757-899X/1300/1/012020 2 Various studies have investigated the preparation and characterisation of Polymer Nanocomposites (PNCs), employing PMMA as the polymer matrix and integrating metal oxide fillers such as TiO2, SnO, ZnO, CuO, and SiO2 [1], [11], [12].Within the realm of transparent polymer nanocomposites, a research gap exists in exploring the preparation and characterization of PNC films using PMMA and MgO NPs.The inclusion of metal oxide nanoparticles, such as Magnesium Oxide (MgO), enhances the already impressive optical properties of these materials.Among other metal oxide nanomaterials, MgO stands out for its cost-effectiveness and eco-friendly nature, with lower toxicity than alternative metal oxides [13]- [15].
Blending PMMA with MgO nanoparticles introduces distinctive contributions to the optical characteristics of PNC films.This integration not only capitalizes on the inherent advantages of the polymeric component (processability, transparency and flexibility) but also utilises MgO's potential for tailored thermal, electrical, and optical properties.Polymers, known for their economic and technological benefits, especially in automotive, chemical sensors, and electronics, gain further advantages through the strategic integration of MgO nanoparticles.
The study recognizes these composites as particularly appealing for enhancing multifunctional optoelectronic devices and smart, flexible lightweight microelectronics.The synergistic relationship between polymer versatility and MgO's tailored properties positions these materials as promising solutions for those seeking cost-effective and eco-friendly advancements in advanced materials and nanocomposites.Given the mentioned details, this research aims to investigate the comprehensive structural behaviour through XRD data analysis, morphology via AFM, optical properties using UV-Vis spectroscopic measurements, and chemical interactions through FTIR.The focus is on PNC films prepared through a sonicated solution casting method with a PMMA matrix loaded with MgO nanoparticles.
Experimental Details
Polymer films were produced using the solvent casting method, involving the preparation of a poly(methyl methacrylate) (PMMA) stock solution by dissolving 1 gram of PMMA in 15 ml of chloroform.The PMMA solution was stirred until complete dissolution using a magnetic stirrer at room temperature.Simultaneously, magnesium oxide (MgO) nanoparticles were dispersed in chloroform at different weight percentages.The PMMA solution and MgO nanoparticle dispersions were combined and stirred for an hour to achieve proper blending and homogeneity, crucial for uniform distribution within the PMMA matrix.A separate pure PMMA film was prepared for comparative analysis, excluding MgO nanoparticles.The resulting doped nanocomposite and pure PMMA solutions were poured into Petri dishes and allowed to dry, forming MgO-doped PMMA polymer films.These films were systematically labelled based on the incorporated MgO weight percentages.The prepared PMMA-MgO PNC films exhibited a uniform thickness ranging from 0.12 to 0.17 mm and were devoid of air bubbles.
The X-ray diffraction (XRD) analysis of the prepared samples was conducted using a Rigaku Miniflex 600 X-ray powder diffractometer.Fourier-transform infrared spectroscopy (FTIR) spectra were obtained using an Alpha Bruker ATR-FTIR instrument.Surface morphology images were captured utilizing Atomic Force Microscopy (AFM).The optical properties of the films were assessed through UV-Visible microscopy.
XRD
Figure 1(a) showcases the X-ray diffraction pattern of MgO nanoparticles, with peaks aligning well with the typical MgO diffraction pattern [JCPDS No. 87-0652].The grain size was calculated by employing Scherer's formula and it is found to be 22 nm. Figure 1(b) shows the XRD pattern of the PMMA-MgO sample.An amorphous peak at 13° confirms the presence of PMMA polymer in all the samples.Notably, MgO-doped PMMA polymer composite films exhibit robust peaks characteristic of MgO nanoparticles, confirming their integration into the composite (16).The intensity of peaks varies nonlinearly with filler concentration.Furthermore, XRD analysis reveals that the positions of the original peaks of pristine MgO nanofillers remain unchanged upon dispersion in the host PMMA matrix.This indicates that MgO nanofillers maintain their original structure within the PMMA-MgO polymer nanocomposite (12).It's worth noting that no impurity peaks are detected, affirming the purity of the sample (4).
FTIR
FTIR spectroscopy provides valuable insights into the interface affinity between PMMA-MgO.Figure 1(c) presents the FTIR spectra of pure PMMA, MgO nanoparticles and various compositions of PMMA-MgO nanocomposites.Notably, the intensity of transmission spectra demonstrates a linear reduction with an increase in filler concentration, suggesting the presence of MgO in the polymer composite [16].All samples exhibit a consistent pattern of peaks without any deviation.Additionally, PMM2 and PMM4 samples reveal peaks below 400 cm -1 , attributed to the metal oxide Mg-O stretching.The band at 1719 cm -1 is associated with C=O stretching vibrations, while the band within the range of 1140 to 1270 cm -1 corresponds to C-O stretching vibrations.The band at 1067 cm -1 is observed due to -C-O-C stretching vibration.In the spectral range of 1492-1275 cm -1 , additional bands are linked to CH3 bending vibrational modes.Additionally, asymmetric stretching of CH3 vibrations was observed in the range of 2998-2950 cm -1 .The significant decrease in the intensities of the entire FTIR spectra may be attributed to the intermolecular bonding between the PMMA matrix and MgO NPs [17].Thus, FTIR studies provide supporting evidence for the results obtained from XRD. presence of nanoparticles on the surface of the composite film [18].This observation supports the effective dispersion of MgO nanoparticles within the PMMA matrix.
UV-VISIBLE SPECTROSCOPY
The optical absorption spectra analysis provides insights into the band structure and energy band gap of polymeric materials.Figure 3(a) illustrates variations in absorption across incident wavelengths (190-800 nm) for PMMA-MgO samples.In the UV spectrum of nanocomposite samples, absorption notably increases within the 190-250 nm range under UV radiation exposure.The pure PMMA film exhibits minimal absorbance across the visible region (λ = 400-800 nm) but shows increased absorbance in the UV region, confirming its ability to absorb UV radiation [19].UV absorbance features are attributed to electronic transitions within the C=O group of the ester-linked to the repeat unit in the PMMA chain [11].Figure 3(a) indicates that absorption in PMMA-MgO composite films consistently rises with a higher weight percentage of MgO particles integrated into the PMMA matrix, accompanied by a gradual red shift in the sharp absorbance edge.Additional absorbance peaks at 280 nm in doped PMMA samples, increasing with MgO concentration, confirm the presence of MgO nanofiller, supporting XRD results.The plateau in the visible region underscores the transparency of PMMA-MgO composites specifically for photons in this range [19].
The optical transmittance (T%) and reflectance (R%) of PMMA-MgO PNCs are illustrated in Figure 3(b).There was a decline in transmittance in the UV as well as the visible region as the MgO content increased.Within the wavelength range of 240-270nm, there is a sudden decrease in the value of extinction coefficient(k) as shown in Figure 3(d).Beyond 300 nm, k stabilizes at a constant value, suggesting low optical loss in this region [20].Moreover, with an increase in MgO content in the PNCs, k also rises, indicating heightened light dissipation due to scattering and absorption by MgO nanoparticles [21].This significant interaction between the MgO nanoparticles and the polymer blend induces changes in crystallinity, thereby influencing the band structure and absorption percentage [20].
Figure 3(e)presents variation of n, demonstrating a decrease in the refractive index of the films as the wavelength of the incident photon increases, eventually becoming constant at higher wavelengths.The findings further reveal that the refractive index of PMMA-MgO polymer composites rises with increasing MgO content in the visible region.Furthermore, the transparency of the composite samples is underscored by the small values of the extinction coefficient (k) compared to the refractive index (n) [21].The Tauc plot, featured in Figure 3(f), was used to examine the optical properties and distinguish the optical bandgap of the material.The observed linear segment confirms a direct bandgap in PMMA-MgO PNC film.The film's bandgap was established by extrapolating the linear section of the Tauc plots.The band gap of the pristine films slightly decreased from 5.089 eV to 5.069 eV as the filler concentration increased.PNCs characterized by transparency and a high refractive index hold significant potential across various applications in optical design and advanced optoelectronic devices.These applications encompass technologies like LEDs, image sensors, and waveguide systems [22].
Conclusion
This study comprehensively investigates the structural, morphological, and optical characteristics of PMMA-MgO nanocomposite films fabricated via a solution-casting approach.XRD analysis confirms the successful integration of MgO nanoparticles in the PMMA polymer matrix.FTIR spectroscopy reveals intermolecular bonding between PMMA and MgO.AFM studies demonstrate increased surface roughness with higher MgO loading.UV-visible spectroscopy highlights enhanced UV absorption and the presence of a distinctive MgO peak at 280 nm.The refractive index rises with increasing MgO content, while the optical bandgap shows a marginal reduction.The key findings underscore the potential of PMMA-MgO nanocomposites for diverse optoelectronic applications requiring customizable optical properties.The eco-friendly nature and cost-effectiveness of MgO make these nanocomposites economically viable.Overall, this study provides significant insights into the structureproperty relationships in PMMA-MgO nanocomposites.Further tuning the properties through optimal selection of materials and methods can aid the development of these multifunctional nanocomposites for advanced technologies and devices. | 2,444.4 | 2024-04-01T00:00:00.000 | [
"Materials Science",
"Physics"
] |
NOD2 deficiency increases retrograde transport of secretory IgA complexes in Crohn’s disease
Intestinal microfold cells are the primary pathway for translocation of secretory IgA (SIgA)-pathogen complexes to gut-associated lymphoid tissue. Uptake of SIgA/commensals complexes is important for priming adaptive immunity in the mucosa. This study aims to explore the effect of SIgA retrograde transport of immune complexes in Crohn’s disease (CD). Here we report a significant increase of SIgA transport in CD patients with NOD2-mutation compared to CD patients without NOD2 mutation and/or healthy individuals. NOD2 has an effect in the IgA transport through human and mouse M cells by downregulating Dectin-1 and Siglec-5 expression, two receptors involved in retrograde transport. These findings define a mechanism of NOD2-mediated regulation of mucosal responses to intestinal microbiota, which is involved in CD intestinal inflammation and dysbiosis.
In this manuscript, Rochereau and colleagues investigate the role of NOD2 in influencing the retrotranslocation of SIgA and SIgA-immune complexes into mouse (and a limited number of human) Peyer's patch tissues. While the study is intriguing, the investigators are quick to draw conclusions without sufficient experimental numbers and/or without important controls to justify claims about specificity of the SIgA transport in wild type and NOD2-/-mice. There long list of concerns compromises the ability of the authors to make conclusions possible connections to C rohn's disease.
The following constitutes a list of major/moderate concerns: 1. For studies related to Figure 1, the authors should stain for anti-SIgA (or anti-SC ), not anti-IgA alone, to make claims about retrotranslocation. From the present figure, one cannot make claims about whether the IgA was enriched from inside (interstitial fluids) or outside (retrotranslocation). It is not implausible that interstitial IgA accumulates in the PP of C D patients.
2. For studies related to Figure 2, the authors should specific the number of PP used per experiment. The legend suggests 6 mice per experiment. Does that translate to one PP per mouse? That is an extremely small sample size.
3. For studies related to Figure 2, only SIgA was used for transport/uptake experiments. The authors should use an inert antigen (e.g., BSA) and a control immunoglobulin (e.g., IgG) to test whether the effects observed are specific (or not) to SIgA. The same applies to the oral immunization studies, which could have been done with IgG-p24 complexes or deglycosylated SIgA (as done previously by the a 4. The results in Figure 3 A should be replotted and analyzed statistically to compare wild type mice to the NOD2 KO mice for common treatment condition. For example, wild type and NOD2 mice for the DSS + laminarin should be plotted side by side and then statistics applied between the two mouse strains, not among treatments for single mouse strain as is currently presented. Only then will they authors be able to make claims about NOD. Also, there is no indication of sample sizes for these experiments.
5. On line 157-158, the authors state that Salmonella challenge in the presence of Sal4SIgA worsens DAI. This is counter intuitive since that antibody and similar work by the authors has shown that anti-LPS SIgA are protective and promote agglutination in the lumen. Are the authors making the claim that the NOD2 mice take up SIgA-aggregates? This should be shown by microscopy.
6. The transcytosis studies with C aco-2 with M-like cells conversion are lacking important controls, including the appearance /demonstration of M-like cells and that transcytosis occurs exclusively via these cells following conversion of C aco-2 monolayers. These controls should be included as supplemental figures. 7. In Figure 4, the knockdown studies should be accompanied by an apical-to-basolateral transport control with a protein known to use those relevant pathways (e.g., a toxin). C o-localization studies by confocal microscopy are also required. Simply demonstrating the effect of a knock down on SIgA transport by ELISA (and with a few select pulldown assays) is not sufficient to make claims about cellular pathways of transport.
Manuscript number: NCOMMS-19-15971
Saint-Etienne, 20 th December 2019 Fig 3), then it would be difficult to determine the exact influence of NOD2 on this phenotype. This point is now detailed in the first major point of the reviewer 1. Figure 5 present convincing evidence of a role for Nod2 in IgA-complex transport. This data would add to the field and should be considered for publication.
The in vitro data showing increased SIgA transport in NOD2 knockdown cells and cells stimulated with MDP, provide strength to the initial findings of increased IgA+ cells in CD patients. Further, increased Dectin-1 and Siglec-5 expression in the knockdown cells and M-cells reinforce the involvement of NOD2 in IgA retrograde transport. Overall, the findings in CD patients with NOD2 mutations and data in
We greatly appreciate the positive appraisal of our work by Reviewer 1.
IgA retrograde transport is only mediated via M cells (Rochereau et al. Plos Biology 2013). We deleted NOD2 in a specific in vitro M-like cells model containing only enterocytes and M cells.
Using this specific model of FAE, we were able to monitor the role of NOD2 during IgA reverse transcytosis only in M cells.
Besides as the editor says "Exploration of cell type-specific roles of Nod2 is not a critical requirement from the editorial perspective.", we didn't went into cell-type specific roles.
Since the mice used were littermates, it would be important to confirm that the gut microbiota is not significantly different in the mice used in Fig 2 to show increased IgA transport into the PP. Alternatively, comparing the reactivity of the IgA between WT and Nod2KO mice to the gut microbiota (similar to what was done in McCarthy et al. 2011 JCI) may further indicate when the role of NOD2 is most critical.
We totally agree with this comment and have now performed these experiments as suggested by the reviewer. First, we verified the ability of IgA to bind to the same microbiota in littermate or NOD2 2KO mice, as previously described by McCarthy et al (Fig 2c). To quantify our observations, we also measured the MFI of IgA coated bacteria by flow cytometry (Fig 2b).
Major points: 1. The initial findings and data in Figure 5 present convincing evidence of a role for NOD2 in IgAcomplex retrograde transport, however, the in vivo data is underwhelming and does not provide direct evidence of a role for Nod2 in the sensing or immune response to these IgA-complexes. As this is the title and main conclusion of the paper, it is a concern. Could the data from Fig 2 (we think that you talked about the figure 3 as you mentioned previously) be represented to show WT and Nod2KO mice on the same plot to allow for statistical comparison? This is the only way to clearly show a role for Nod2 in the mice.
We agree with the reviewer 1 and we replotted the salmonella-IgA without laminarin conditions for littermate WT and NOD2 KO mice only (supplemental Fig. 1d). Statistical analysis of this new comparison revealed that there was a bias in our first analysis. All inflammatory parameters measured in the serum (IL-6, CRP and LPS) were taken at D5 post-infection but for neutrophil infiltrations, the colon histology was done at the time of mouse sacrifice (D9 for WT mice as they showed fewer clinical signs and D5 for NOD2 KO mice). Weight loss from D5 to D9 of WT mice indicates that they became increasingly sick. The Nancy score was not comparable between the WT and NOD2 KO mice. We reproduced the experiment on littermate WT and NOD2 KO mice focusing on the laminarin-free condition. In addition to colon histological analysis, we also measured IL-6, LPS and CRP at D5 in the blood. With these new data presented in supplemental figure 1d and implemented in figure 3a, 3b, and supplemental figure 1a and 1b, we now confirm the role of NOD2 in our observations.
Reviewer #2 (Remarks to the Author):
In this manuscript, Rochereau and colleagues investigate the role of NOD2 in influencing the retrotranslocation of SIgA and SIgA-immune complexes into mouse (and a limited number of human) Peyer's patch tissues. While the study is intriguing, the investigators are quick to draw conclusions without sufficient experimental numbers and/or without important controls to justify claims about specificity of the SIgA transport in wild type and NOD2-/-mice. There long list of concerns compromises the ability of the authors to make conclusions possible connections to Crohn's disease.
The following constitutes a list of major/moderate concerns: 1. For studies related to Figure 1, the authors should stain for anti-SIgA (or anti-SC), not anti-IgA alone, to make claims about retrotranslocation. From the present figure, one cannot make claims about whether the IgA was enriched from inside (interstitial fluids) or outside (retrotranslocation). It is not implausible that interstitial IgA accumulates in the PP of CD patients.
We thank the reviewer 2 for this point. Colocalization between SIgA and DC-SIGN (Fig. 1b) was already added and clearly show that counted SIgA-positive cells could come from a retrograde transport of IgA through M cells and a consecutive uptake by dendritic cells. A new colocalization (Fig. 1b) using anti-IgA and anti-secretory component (SC) staining has been now added and confirmed that IgA was not enriched from inside (interstitial fluids) but from the lumen (retrotranslocation). Figure 2, the authors should specific the number of PP used per experiment. The legend suggests 6 mice per experiment. Does that translate to one PP per mouse? That is an extremely small sample size.
For studies related to
This experiment was repeated on 6 mice per group but each point represents the average of 3 PP per mouse. This precision has been added in the corresponding figure legend. Figure 2, only SIgA was used for transport/uptake experiments. The authors should use an inert antigen (e.g., BSA) and a control immunoglobulin (e.g., IgG) to test whether the effects observed are specific (or not) to SIgA.
For studies related to
We have now performed these control experiments (Fig 2a).
The same applies to the oral immunization studies, which could have been done with IgG-p24 complexes or deglycosylated SIgA (as done previously by the authors).
The role of IgA as a vector carrying a protein such as p24 has already been published (Rochereau et al, EJI 2014 and JACI 2015). We also showed that IgG was not able to cross epithelium via M cells in vitro (Rochereau et al, Plos Biology 2013) and in vivo (in this article). In Figure 2d, we already showed control groups such as IgA alone and p24 alone. Moreover, we know that administration of IgG-p24 by oral or nasal route does not induce p24-specific antibodies in WT mice (data not shown). | 2,485 | 2020-01-01T00:00:00.000 | [
"Medicine",
"Biology"
] |
Genetically complex epilepsies, copy number variants and syndrome constellations
Epilepsy is one of the most common neurological disorders, with a prevalence of 1% and lifetime incidence of 3%. There are numerous epilepsy syndromes, most of which are considered to be genetic epilepsies. Despite the discovery of more than 20 genes for epilepsy to date, much of the genetic contribution to epilepsy is not yet known. Copy number variants have been established as an important source of mutation in other complex brain disorders, including intellectual disability, autism and schizophrenia. Recent advances in technology now facilitate genome-wide searches for copy number variants and are beginning to be applied to epilepsy. Here, we discuss what is currently known about the contribution of copy number variants to epilepsy, and how that knowledge is redefining classification of clinical and genetic syndromes.
In older terminology, genetic epilepsies were referred to as 'idiopathic epilepsies' [4]. Syndromes, and sometimes subsyndromes, are delineated when the seizures are defined by easily recognizable electroclinical features and similar enough to be regarded as a homogeneous group, distinct from other groups in the same classification level (Table 1). For example, genetic generalized epilepsies are frequently divided into their subsyndromes of childhood absence epilepsy, juvenile absence epilepsy, juvenile myo clonic epilepsy and generalized tonic clonic seizures.
There is a subset of epilepsy syndromes that are clearly monogenic, and traditional linkage studies in large families have been useful for identifying causative genes [5,6]. However, the vast majority of the genetic epilepsies are multifactorial, with an underlying genetic contribution that is polygenic, where few or usually none of the sus cep tibility genes have been identified. This multifactorial concept dates back to the early works of William Lennox [7] and was well established in the modern era with additional twin data [8]. It is important to note that epilepsy with complex genetics and complex epilepsy are distinct concepts. To the geneticist, complex epilepsy is epilepsy with complex genetics; that is, multifactorial epilepsy that is polygenic and influenced by environ mental effects, both internal and external. Complex epilepsy to the epileptologist, on the other hand, refers to the complexity of the seizure pattern. Without an appre cia tion of the difference, interactions between basic and clinical scientists can be, and have been from personal experience, confused by 'complex epilepsy' meaning differ ent things to different people. In the context of this article, complex epilepsy will mean that which is multi factorial in origin, rather than necessarily having complex seizure patterns.
Monogenic epilepsies
To date, more than 20 genes have been identified for the group of genetic epilepsies that are primarily monogenic [5,6,9,10], prompting a recent update of clinically based classification [1]. While individual syndromes that com prise each of these groups are generally diagnosed through clinical assessment, molecular testing now facili tates more accurate definition of clinically similar disorders that are now known to be caused by mutation of different genes. While gene identity provides an alternative or additional criterion for syndrome classifica tion, it also has clinical efficacy providing a rapid definitive diagnosis to obviate an otherwise circuitous set of invasive or costly investigative procedures. Further more, in some cases, specific therapeutic intervention can be enabled to achieve improved outcomes or more accurate prognosis. Genetic testing for the epilepsies has high clinical utility in cases that may involve SLC2A1 (glucose transporter type 1 deficiency), SCN1A (Dravet syndrome), PCDH19 (familial epilepsy and mental re tard ation limited to females, 'Dravetlike' PCDH19 syn drome), ARX (Xlinked infantile spasms and myoclonic seizures, dystonia, and Xlinked lissencephaly with ambigu ous genitalia) or STK9 (Xlinked infantile spasms) mutations. Testing has high analytical sensitivity (ability to detect the presence of a causative mutation) and high analytical specificity (ability to exclude mutation in a candidate gene) for all of the monogenic epilepsies, but not necessarily high clinical utility apart from some of the syndromes associated with the above genes [9]. It has little or no clinical utility at this time when knowledge of the gene is not needed for accurate syndrome classi fication, when knowledge of the gene does not direct or affect treatment, or in cases of genetically complex epilepsies triggered by the combined effects of multiple genes spread across the genome, most likely each having only a small effect on phenotype.
Complex epilepsies
Speculation of the genetic architecture for the genetically complex epilepsies centers on the common disease common variant hypothesis [11] and the common disease rare variant hypothesis [12]. The general failure of linkage and association studies applied to the complex epilepsies [1316] argues against the common diseasecommon variant hypothesis, although the major criticism of such studies is that they are underpowered to detect the magnitude of odds ratios that are likely associated with susceptibility variants in the genetically complex epilepsies [17] and indeed other neuropsychiatric brain disorders.
The common diseaserare variant hypothesis, which suggests a variable subset of multiple rare genetic vari ants, has greater appeal for complex epilepsy [18,19], especially given the failure of association studies, which work on the premise of the common diseasecommon variant hypothesis [16], to deliver consistent findings. A mixture of the two models is also entirely plausible [19] with functional differences in the electrophysiological properties of ion channels demonstrated for both rare and polymorphic genetic variation detected at the GABRD (encoding γaminobutyric acid A receptor, δ), CACNA1H (encoding calcium channel, voltagedependent, T type, α 1H subunit) and CLCN2 (encoding chloride channel 2) genes [2023], for example. Computer simulation supports the notion that genetic variations associated with only very small functional changes in ion channel properties are sufficient to make meaningful contributions to increasing susceptibility to epilepsy [24].
Multiple sclerosis is another disorder with complex inheritance where extensive study suggests 'risk variants likely to include hundreds of modest effects and possibly thousands of very small effects' [25]. Similar conclusions with systematic effects of multiple rare variants across the genome have been suggested for schizophrenia and bipolar disorder [26]. We predict the same for epilepsy with complex inheritance, with seizure susceptibility thresh olds determined by combinations of many rare to moderately common sequence variants, copy number variants (CNVs) and perhaps noncoding DNA sequen ces with functional effects. Weak effects will only be detectable by genomewide association studies using massive sample sizes. Kryukov et al. [27] preempted out comes from deep resequencing by massively parallel sequencing (previously referred to as nextgeneration sequencing [28]) by promoting an association study approach based on the premise of multiple rare variants present in susceptibility genes in higher numbers for a given disease group (for example, epilepsy) than in their corresponding controls. The statistical tools to support that approach are now surfacing [29].
The heritability of genetic generalized epilepsy suggests a major genetic component [8] but virtually none has yet been identified. This constitutes the 'dark matter' [30]. The task is to find this missing heritability and charac terize it in terms of number of loci, effect sizes, allelic frequencies of variants and the nature of the variants [31]. Areas being investigated include cisacting genome wide regulatory variants [32], genomewide copy number variants [33,34] as discussed below, and, in the future, nextgeneration sequencing [28].
Copy number variation in epilepsy
CNVs are deletions, duplications or insertions of DNA in the genome that range in size from approximately 1 kb to Genomewide methods to detect CNVs include array comparative genomic hybridization (arrayCGH) and SNP genotyping arrays. These technologies can be targeted to specific chromosomal regions [43,4549]. However, their real power lies with capability for genomewide interrogation, where there is no need for a priori knowledge of where a lesion may lie [33,34,46,50]. Using that approach, Depienne et al. [46] discovered a Dravet like syndrome caused by severe PCDH19 mutations on chromosome X, and McMahon et al. [50] 'rediscovered' the 15q13.3 CNV and found a novel 10q21.2 micro duplication. Mefford et al. [33] and Heinzen et al. [34] used genomewide approaches to establish the extent of rare CNVs in the genetic epilepsies (see below). For CNVs with boundaries extending beyond the target gene, array CGH is a powerful tool for accurately determining size and gene content. Large epilepsyassociated CNVs detectable by MLPA, but extending well beyond the one gene of special interest (for example, beyond SCN1A), can also be reliably detected by array technologies [40,43,45].
The role of CNVs in epilepsy has now been addressed by several groups using both targeted and genomewide approaches. Helbig and colleagues [51] first directed our attention to the role of the 15q13.3 microdeletion in the etiology of epilepsy. This microdeletion was first described in a series of patients with ID, most of whom also suffered from seizures [52], but is much more common in epilepsy cohorts [51,53,54]. This is one of the most prevalent genetic risk factors identified for the genetic generalized epilepsy syndromes. A range of rare mutations within SLC2A1 encoding the GLUT1 glucose transporter are at least as important within the childhood absence epilepsy subsyndrome of genetic generalized epilepsy [55,56]. Although estimated confidence intervals are broad, the estimated odds risk ratio of 68 (95% confidence interval 29 to 181) for the 15q13.3 deletion [54] greatly exceeds that of most common susceptibility variants detectable by genomewide association studies in disorders other than epilepsy. Despite its relative 'severity' in relation to risk, its frequency in epilepsy cohorts is relatively high at around 1.3%. Conversely, this variant is difficult to find in the general control population, despite the screening of large numbers of controls, even though family studies following detection of an index case disclose frequent transmissions from nonpenetrant carrier parents [54,57]. Moreover, the position of the original mutation in the pedigree is often not too far back into its living ancestry, suggesting a relatively high recurrent mutation rate. Of the seven genes within the lesion, haploinsufficiency of CHRNA7 (nicotinic acetylcholine receptor, α7) is considered to be the most likely pathogenic element, although it is not the only neuronally expressed gene affected by the deletion. Interestingly, early genomewide linkage studies impli cated the CHRNA7 region in juvenile myoclonic epilepsy [58], but this could not be replicated [59], and screening of CHRNA7 did not detect convincing mutations [60]. Could it be that the families studied by Elmslie et al. [58] contained enough families segregating the 15q13.3 microdeletion to give a linkage signal?
Subsequent studies investigated the role of other large CNVs that had previously been associated with increased risk of ID, autism and schizophrenia [53]. Somewhat surprisingly, significant numbers of the same recurrent CNVs involved in the disorders listed above were implicated as a component of the polygenic pathogenic genetic architecture in the clinically and genetically com plex (idiopathic) epilepsies. Two microdeletions commonly associated with epilepsy are at 15q11.2 and 16p13.11 [33,34,53]. Together with the 15q13.3 microdeletion, their combined frequency in test populations of genetic generalized epilepsy is approximately 3% [33]. Other large recurrent CNVs associated with ID, autism or schizophrenia that have also been detected in epilepsy are at 1q21.1, 16p12, 22q11 and two regions within 16p11.2 [33,53]. These CNVs represent clearly defined genetic determinants that overlap with a number of hitherto regarded distinct disorders comprising part or all of their genetic architectures. The three most common recurrent CNVs, which together account for up to 3% of epilepsies, are shown in Figure 1. Notably, the 15q13.3 microdeletion has been consistently present in 0.5% to 1% of all genetic generalized epilepsy cohorts but has not been seen in >3,000 patients who presented with focal epilepsy syndromes [34], and therefore it may be a risk factor specifically for generalized epilepsy syndromes. Deletions at 16p13.11 and 15q11.2 have been found in both generalized and focal epilepsies [33,34,53].
The large, recurrent CNVs described above occur because of specific genomic architecture at each respec tive chromosome region. CNV is mediated by naturally occur ring sets of low copy repeats or segmental duplications [6163] that facilitate nonallelic homolo gous recombina tion [64,65], resulting in deletion or duplication of the intervening unique sequence. There fore, each region with such architecture is prone to rearrange ment at meiosis, causing recurrence of large CNVs with nearly identical breakpoints in unrelated individuals. Because CNVs at these rearrangementprone regions of the genome occur with an appreciable frequency, it has been possible to detect a statistically significant difference between cases and controls.
Apart from the recurrent CNVs discussed above, the rare nonrecurrent CNVs are also likely to play a significant role in the genetic etiology of epilepsy. Two recent studies applied genomewide technologies to detect CNVs in affected individuals. Heinzen and colleagues [34] evaluated 3,812 individuals and found an enrichment of large (>1 Mb) deletions in affected individ uals, the majority of which were seen in one individual each. Mefford et al. [33] evaluated 517 individuals with various types of epilepsy and found that nearly 10% carried one or more rare CNVs that had not been previously found at an appreciable frequency in controls. Again, the majority of events were seen only once, and represent a subset of the rare nonrecurrent CNVs involving genes that have been implicated in ID, autism or schizophrenia.
Syndrome constellations associated with CNVs
Taken literally, a constellation is a number of stars grouped within an outline. Here, we regard the CNV as the 'outline' encompassing a group of its associated syndromes comprising the syndrome constellation. Different combinations of syndromes define the constel lations that are packaged within different CNVs. The CNVs can be recurrent in the population, and any recurrent CNV located in a given region is virtually identical from patient to patient. The syndrome constel lations include one or more types of ID, dysmorphism, autism, schizophrenia and, more recently, genetic generalized epilepsy. The various syndromes within the constellations are themselves genetically and pheno typically heterogeneous, and in some cases have defined subsyndromes. For example, genetic generalized epilepsy consists of the subsyndromes childhood absence epilepsy, juvenile absence epilepsy, juvenile myoclonic epilepsy and generalized tonic clonic seizures. Recurrent deletions at 15q13.3 (1.5 Mb, seven genes), at 16p13.11 (1.2 Mb, eight genes) and at 15q11.2 (1.3 Mb, four genes) are emerging as the most common genetic determinants for various distinct disorders with complex inheritance. These generally include intellectual disability with or without dysmorphism, autism, schizophrenia or genetic generalized or focal epilepsy. Epilepsy was the latest addition to the constellations of syndromes associated with each of these CNVs, and is now well established [33,34,51,53,54]. A similar picture is emerging for the rarer recurrent CNVs at 1q21.1, 16p12 and two regions within 16p11.2 [33,53]. Given the comorbidity of ID and epilepsy, autism and ID, and autism and epilepsy, for example, perhaps it should not be surprising that some CNVs cause over lapping neuropsychiatric features in affected individuals. However, it seems remarkable that the same CNV susceptibility lesion can be a genetic determinant for apparently disparate conditions (for example, only epilepsy in one patient, only schizophrenia in another). One possible explanation might be that odds risk ratios associated with disorders included within a given constel lation of syndromes is relatively high in the context of disorders with complex inheritance. For example, genetic generalized epilepsy has an odds risk ratio of 68 (95% confidence interval 29 to 181) for the 15q13.3 deletion [54]; this is far higher than for susceptibility variants generally detected in complex genetic disorders. Certainly another possible explanation is the presence of as yet undetected additional genetic or epigenetic variants that influence the phenotypic outcome. All of the 'common' recurrent CNVs in epilepsy (15q13.3, 16p13.11 and 15q11.2) have probably been identified already, given the extent of the arrayCGH genomewide searches already completed [33,34]. Some of the less common recurrent microdeletions at 1q21.1, 16p12 and two regions within 16p11.2 may be associated with their own multisyndrome constellations.
Rare or unique nonrecurrent CNVs are collectively more common than the combined recurrent ones. These lesions provide a wealth of leads to candidate epilepsy genes within or closely adjacent to them. The number, frequency and distribution of each genebearing CNV are consistent with the common diseaserare variant model for the genetic architecture for complex epilepsy. Overall genetic profiles of susceptibility genes for each individual are likely to be unique and fit the polygenic heterogeneity concept [18]. Genes within these epilepsy associated CNVs and genes identified through massively parallel sequencing [66] each represent independent oppor tunities to break out of the ion channel paradigm that might potentially constrain our thinking when the genetic architecture of epilepsy might extend beyond ion channels. Results of studies performed so far suggest that haploinsufficiency (deletions) or overexpression (duplica tions) of some of the genes in nonrecurrent CNVs may elicit the same syndromes as those in their associated constellations.
There are two common threads in these discussions. First, the constellations of syndromes associated with each recurrent CNV can include a range of diverse pheno types, including, in most cases, some combination of ID, autism, schizophrenia and epilepsy. Each CNV probably elicits its own specific distribution of pheno types and frequency of each phenotype, defining the associated constellation. Second, the mechanism for genesis of this extreme clinical heterogeneity observed within virtually identical lesions is not yet known. Several mechanistic possibilities have been outlined [34,6769] but none has been proven as a general mechanism, or even a mechanism specific to any given CNV. The clinical heterogeneity is likely to depend upon the nature of the other risk factors or genetic modifiers in the rest of the genome that alone or in combination may specify the phenotype.
Conclusions and future perspectives
The concept of extensive clinical heterogeneity in epilepsy associated with a welldefined genetic lesion is not new. Well known examples are genetic generalized epilepsy with febrile seizures plus [19], caused by mutations in sodium channel genes, and recently, genetic generalized epilepsy caused by the 15q13.3 CNV [70]. These observations have challenged complete reliance on the phenotypefirst approach to diagnosis. Investigations will always begin with general clinical evaluation to broadly classify cases into disease categories. Taking genetic generalized epilepsy as an example, is it then necessary to further refine down to subsyndromes using clinical criteria alone, and to even contemplate endo phenotyping for deeper clinical refinement? The answer is clearly no in the context of syndromic constellations associated with some CNVs and phenotypic spectrums associated with some familial missense mutations. The aim of that exercise of making phenotypes as clinically homogeneous as possible would be to promote genetic homogenization of study populations so that associations are easier to detect. But for CNVs and missense mutations in some genes, collections of the same CNV or same mutation are already genetically homogeneous, at least for that component of the complex polygenic architecture.
The approach needs to be turned upside down, by adoption of a genotypefirst approach where novel genomic disorders such as genetic generalized epilepsy are classified and defined by detection of a common deletion or duplication. The collection of large numbers of patients with the same CNV genotype but wide variety of phenotypes including epilepsy will facilitate genotype phenotype studies that might provide insight into the mechanisms that influence phenotype diversity in these and other disorders. Conversely, the collection of large numbers of genetic generalized epilepsy patients (not even subtyped into subsyndromes) with significantly more multiple rare DNA sequence changes within the same putative epilepsy susceptibility gene, as compared with unaffected controls, might be an outcome of their pursuit through massively parallel sequencing. That would enable us to work backwards, to endophenotype just those cases with mutations in a defined susceptibility gene to see if they have subtle phenotypic features in common. Thus might emerge a subsyndrome classifi cation that is different to that currently in use, based on more relevant components of the phenotype that better reflect the underlying molecular genetics.
Finally, we agree that careful clinical phenotyping is a vital component of our research, as the constellations associated with each of the CNVs need to be accurately characterized. Consider cohorts comprising 15q13.3 deletions, for example. Some of the cases are regarded as epilepsy only. Others are regarded as having dual pheno types, of epilepsy and ID, for example. Are these really dual phenotypes? Consider the hypothetical possibility that the haploid content of the 15q13.3 region lowers the seizure threshold and adversely affects intelligence in everyone who carries it. Some carriers will not have epilepsy because their susceptibility profile contains too few susceptibility variants at other loci throughout the genome, in addition to 15q13.3, to take them across the seizure threshold. Some carriers will not have ID because their baseline intelligence quotient will be high enough to begin with that even with some depression of intelligence quotient through the effects of the 15q13.3 deletion they remain within the normal range. Others, toward the lower end of the normal range to begin with, unfortunately drop down into the ID range. We challenge the clinical researchers to prove us wrong or, like us, seriously question the notion of dual phenotypes presenting in only a subset of the 15q13.3 deletion carriers.
Competing interests
The authors declare that they have no competing interests. | 4,867.8 | 2010-10-05T00:00:00.000 | [
"Psychology",
"Medicine",
"Biology"
] |
Systematic meta-analysis of research on AI tools to deal with misinformation on social media during natural and anthropogenic hazards and disasters
The spread of misinformation on social media has led to the development of arti fi cial intelligence (AI) tools to deal with this phenomenon. These tools are particularly needed when misinformation relates to natural or anthropogenic disasters such as the COVID-19 pandemic. The major research question of our work was as follows: what kind of gatekeepers (i.e. news moderators) do we wish social media algorithms and users to be when misinformation on hazards and disasters is being dealt with? To address this question, we carried out a meta-analysis of studies published in Scopus and Web of Science. We extracted 668 papers that contained keyterms related to the topic of “ AI tools to deal with misinformation on social media during hazards and disasters. ” The methodology included several steps. First, we selected 13 review papers to identify relevant variables and re fi ne the scope of our meta-analysis. Then we screened the rest of the papers and identi fi ed 266 publications as being signi fi cant for our research goals. For each eligible paper, we analyzed its objective, sponsor ’ s location, year of publication, research area, type of hazard, and related topics. As methods of analysis, we applied: descriptive statistics, network representation of keyword co-occurrences, and fl ow representation of research rationale. Our results show that few studies come from the social sciences (5.8%) and humanities (3.5%), and that most of those papers are dedicated to the COVID-19 risk (92%). Most of the studies deal with the question of detecting misinformation (68%). Few countries are major funders of the development of the topic. These results allow some inferences. Social sciences and humanities seem under-represented for a topic that is strongly connected to human reasoning. A re fl ection on the optimum balance between algorithm recommendations and user choices seems to be missing. Research results on the pandemic could be exploited to enhance research advances on other risks.
Introduction
F ake news is old news: different forms of misinformation have occurred repeatedly throughout history (Novaes and de Ridder, 2021). Nevertheless, the impact of different communication technologies on content, production, distribution, and consumption of misinformation-and hence, its dissemination-has changed over history (Posetti and Matthews, 2018). In the age of social media, misinformation spreads at a fast pace: the pervasive nature of misinformation in the digital age is being reinforced by both technical and socio-psychological factors (Dallo et al., 2022).
The speed of diffusion influences the amplitude of negative impacts which, because of their cascading effects, can be exponential in the context of disasters (McGee et al., 2016). The COVID-19 "infodemic," as the World Health Organization (2022, p. 1) defines it, had various negative impacts; these include psychological consequences (such as anxiety, depression, or posttraumatic stress disorder), reduced trust in public authorities and health institutions, adoption of inadequate protective measures by the population, and increased purchases of medical supplies and other products which stresses on the market (Pian et al., 2021).
Misinformation can also occur in relation to natural disasters; for instance, during and after the Hurricane Irma disaster which hit the Caribbean in September 2017, several rumors spread, among others, fake news concerning the number of deaths on the French territory of Saint Martin. According to the rumors, there were between over 100 and over 1000 dead, while the real death toll was 11. This fake news continued to circulate for more than a year, negatively impacting the territory's social cohesion and post-hurricane reconstruction (Moatty et al., 2019).
In the last two years, numerous studies have been carried out to develop artificial intelligence (AI) tools that can deal with misinformation in risk management contexts that require a very fast response. The very recent and fast development of research efforts on this topic necessitates a timely and efficient review of the current research trends. In this study, we conduct a meta-analysis of the literature to identify the main research gaps and to answer the following question: what kind of gatekeepers (i.e., news moderators) do we wish social media algorithms and users to be in terms of dealing with misinformation on hazards and disasters? This meta-analysis will contribute to developing a communication model based on social media moderation and recommendation practices that are aligned with human rights and journalism ethics.
Background
The spread of misinformation on social media. Misinformation has ancient origins. According to Kaminska (2017), one of the earliest records of misinformation goes back to 30 BCE, to the time of hostilities between Mark Antony and Octavian over the leadership of the Roman world. Across history, the impact of misinformation has changed with the evolution of technology: for instance, the invention of the printing press led to the first largescale news hoax (Thornton, 2000). In the digital age, the dissemination of misinformation is so greatly amplified that, since 2018, several governments have started to introduce regulatory measures at the national level to combat fake news (Posetti and Matthews, 2018).
Various definitions are proposed in the literature to define different kinds of information disorders. Lazer et al. (2018, p. 2) define fake news as "fabricated information that mimics news media content in form but not in organizational process or intent [and] overlaps with other information disorders, such as misinformation (false or misleading information) and disinformation (false information that is purposely spread to deceive people)." Ireton and Posetti (2018, p. 43) recommend using the terms "misinformation" and "disinformation" to indicate information disorders, and to avoid using the term "fake news" as it has been "politicized and deployed as a weapon against the news industry, as a way of undermining reporting that people in power do not like." In this study, we refer exclusively to misinformation, as this is frequently used as a general term in the scientific literature to include different information disorders, such as disinformation, rumors, misinformation, and hoaxes.
Misinformation on social media and risk management. Social media contribute to the social representation of hazards and disasters (Sarrica et al., 2018); in other words, they shape the population's perception and attitude regarding hazards and disasters. Ng et al. (2018) compare traditional media with social media and highlight that the latter has a stronger effect in terms of increasing their readers' risk perception. Tsoy et al. (2021) suggest that social media can shape hazard experience in two ways: either by amplifying risk perception or reducing it.
In this context, misinformation can strongly affect risk management. One example is the spread of rumors and hoaxes on social media that followed the 2017 Manchester Arena Bombing (Qiu, 2017). In particular, the news that unaccompanied children had been sheltered in hotels was a false rumor; this illustrates how such misinformation can misdirect the affected population and cause confusion and chaos (Hunt et al., 2020). Obviously, other examples can be taken from the COVID-19 pandemic. The significant impact of misinformation during the pandemic led the United Nations to urge countries to take action to combat the "infodemic," defined as "too much information including false or misleading information in digital and physical environments during a disease outbreak. It causes confusion and risk-taking behaviors that can harm health" (World Health Organization, 2022, p. 1).
This research addresses both hazards and disasters-concepts that are related, but distinct. According to the United Nations Office for Disaster Risk Reduction (United Nations Office for Disaster Risk Reduction (UNDRR), 2023) and the Federal Emergency Management Agency FEMA (2023) of the United States, a hazard represents a potential threat, while a disaster is the actual damage caused by a hazard. Misinformation can impact both hazards and disasters and can hinder efforts to prevent and reduce risks associated with these events.
The need for AI tools to deal with misinformation. In the last decade, a wide variety of data mining tools have been developed to gauge public opinion by exploring big digital communication datasets. It has become possible to automatically detect misinformation thanks to natural language processing, machine learning, and deep learning (Ayo et al., 2020;Hossein and Miller, 2018;Murfi et al., 2019). Machine learning and deep learning algorithms ( Fig. 1)-two subsets of the broader category of artificial intelligence-are two of the most common approaches to automating the process of classifying unreliable or reliable news (Varma et al., 2021).
Over a decade ago, Grzywińska and Borden (2012) stated that social media were replacing traditional media as the preeminent information source and the main player in public agenda-setting. This trend has strengthened to such an extent that newspaper headlines frequently quote social media items as sources of information (Paulussen and Harder, 2014). Along with the increasing importance of social media, AI tools are playing a key role in our society to deal with the rapid spread of misinformation, while guaranteeing the right of access to information mentioned in Article 19 of the Universal Declaration of Human Rights (United Nations, 1948).
Toward a new communication model. The traditional role of the press as "gatekeepers" has weakened, as journalists no longer hold exclusive rights to select, extract, and disseminate the news in the digital age (Canter, 2014, p. 102). Within the context of social media, content recommendation algorithms and individual media users have taken on a "gatekeeping function" (Napoli, 2015, p. 755).
Within the expanding and diverse area of studies dedicated to AI tools to deal with misinformation, scientists cannot discard the following question: what kind of gatekeepers do we wish moderation and recommendation algorithms-and also social media users-to be? This question addresses fundamental human rights and journalism ethics, such as freedom of expression, the right of the public to be informed, accuracy, and differentiation between fact and opinion, privacy protection, hate and discrimination prevention, and plagiarism prevention. For instance, Jørgense and Zuleta (2020, p. 52) point out that social media constitute a public space where freedom of expression is vulnerable, given that these platforms are "governed by private actors operating outside the direct reach of human rights law." According to the two authors, European Union policies encourage content moderation by private companies, but their "over-removal" practices are rarely aligned with the principles of "strictly necessary" and "proportionate" limitation that are mentioned in the international human rights law (Jørgensen and Zuleta, 2020, p. 59).
Methodology
Data for analysis. We extracted our corpus of abstracts from Web of Science and Scopus, two platforms that provide access to abstract, reference, and citation data from academic journals. The abstracts were all studies published up until 1 July 2022. We extracted the corpus of studies on the basis of the following key terms related to the research area of AI tools to deal with misinformation on social media during hazards and disasters.
Corpus selection based on keyterms included in abstracts (Web of Science & Scopus)
Abstract =(disaster) OR (emergenc*) OR (hazard) OR (disaster) OR (flood) OR (earthquake) OR (industrial accident) OR (terrorist attack*) OR (COVID) OR (pandemic) OR (wildfire) OR (Coronavirus) Fig. 1 Concept map outlining the various techniques used for detecting fake news detection, as proposed by Varma et al. (2021). The authors focused their study on Machine Learning (ML) and Deep Learning (DL), which are two of the most commonly used approaches for classifying misinformation. ML algorithms can be divided into two subfields: supervised learning and unsupervised learning. The former includes techniques such as Naïve Bayes, support vector machine, and logistic regression, while the latter includes K-Means and DB-Scan among others. Ensemble Learning is a type of Supervised Learning that is both accurate and innovative, according to the authors. It encompasses techniques such as AdaBoost, XGBoost, decision tree, and random forest. DL algorithms are also effective in detecting misinformation. The most popular DL techniques include convolution neural network (CNN) and recurrent neural networks such as long short-term memory. LSTM is a widely used technique. The Boolean operator "OR" means that the selected abstracts must include one of the keyterms. The Boolean operator "AND" means that the selected abstracts must combine two search queries. A search query always starts with "Abstract" = in order to search for keyterms included in the abstracts.
The search keyterms include different anthropogenic and natural hazards. As discussed in section "Introduction", misinformation can affect both anthropogenic and natural hazards and disasters. Furthermore, the Horizon 2020 CORE project (European Union's Horizon 2020 research and innovation program, 2023) highlights that the practitioners (e.g., civil protection), who have to cope with different types of hazards and sometimes have to face multiple overlapping risks, are requesting a dedicated strategy to deal with risk misinformation in different contexts.
The search keyterms include "social media" and the names of popular platforms: "Twitter," "WhatsApp," "Facebook," "Instagram." We used these keyterms in order to include studies with a focus on one of these platforms.
We then used the PRISMA 2020 flow diagram ( Fig. 2) (Page et al., 2021) to report which papers were selected and included in our study. After extracting 246 abstracts from Web of Science and 422 abstracts from Scopus, we removed 163 duplicate records and 37 review papers. We then manually screened the remaining 468 abstracts and excluded 205 of them, as they did not refer to one of the following topics as central subjects of the study presented in the corresponding paper: anthropogenic or natural disasters and hazards, misinformation, social media, and AI methods. Finally, we manually assessed for eligibility 289 articles, excluding 23 papers that did not refer to the above-mentioned topics as key topics for the study. As a result, 266 studies were included in the meta-analysis.
Literature review. Our initial corpus of studies on AI tools to deal with misinformation on social media during hazards and disasters included various review papers. These papers were examined to establish the current state of the art for our research topic. The corpus of studies, presented in the section "Data for analysis", includes 37 review papers, of which 24 were excluded for not being central to our literature review. The 24 review papers in question did not indeed consider the following themes to be central to their literature review: anthropogenic or natural hazards and disasters, misinformation, social media, and computer-aided methods. This left us with 13 review papers that were directly relevant to our research topic. Fig. 2 Our data selection process, guided by the PRISMA 2020 flow diagram, which outlines the steps involved in conducting a meta-analysis and the corresponding information flow. This diagram serves as a useful tool for documenting the number of documents that were selected, assessed, deemed eligible or ineligible, as well as the reasons for exclusion (Page et al., 2021).
We analyzed these 13 review papers to achieve two goals: 1. Verify if they covered all of our research themes, namely all the search key terms used to select our paper corpus or only a part of them; 2. Identify any research variables that were not addressed in these papers.
To begin, we identified all the research themes and variables proposed in the 13 review papers. These are listed in the two tables (Tables 1 and 2).
Different research themes are addressed in each of the 13 papers to select the corpus of documents for the review. Table 1 summarizes and compares these research themes. The authors of the reviews also propose different research variables. We present and compare the main research variables in Table 2.
We used the research themes and research variables identified in Tables 1 and 2 to elicit the following observations, which are an important step for our research: 1. All the review papers focus on the COVID-19 crisis or other disease outbreaks. None of the 13 review articles cover other types of natural or anthropogenic hazards. Hence, using a meta-analysis of a corpus covering both anthropogenic and natural hazards, we opened up the scope of our study compared to other reviews. Indeed, there does not seem to be a precedent for a meta-analysis covering such research themes. 2. With regard to the research variables, four review papers (Ansar and Goswami, 2021;Gabarron et al., 2021;Himelein-Wachowiak et al., 2021;Varma et al., 2021) focus on reviewing the main AI tools used to deal with misinformation. Varma et al. (2021) compare different AI methods and two different publication periods (before and after the pandemic). Ansar and Gaswami (2021) compare the AI methods, misinformation origins, and contents. Gabarron et al. (2021) compare misinformation contents and impacts. Himelein-Wachowiak et al. (2021) specifically focus on bots and compare their origins, topics, and dissemination patterns. What appears to be missing as a research variable, however, is a wider reflection on the different research objectives in the literature on the topic of "tools to deal with misinformation on social media related to hazards and disasters." 3. As well as the research variable objectives of the study, three other research variables seem to be missing: -The research areas covered by this topic; -The natural and anthropogenic hazards covered; -The location of the funding sponsors, These research variables define the scope of our meta-analysis.
Methods of analysis. The proposed meta-analysis aims to explore the 266 studies according to the research themes and research variables identified above: research area, type of hazard, research objective, and location of the funding sponsor. We also considered the "publication year" in order to comprehend how relevant the recent increase in publications is and if it is correlated with other trends. For each research question we applied different methods of analysis: descriptive statistics (for the year of publication, research area, type of hazard, sponsor's location), network representation of keyword co-occurrences (for the type of hazard and the related topics), and flow representation of research rationale (for the objective of the study).
Year of publication. Scopus and Web of Science automatically provide the information on the year of publication in a separate column of the abstract dataset (in CVS format) that can be exported from both websites. The number of publications per Table 1 Research themes described in the review papers. year can easily be extracted and visualized with a bar chart (available in Microsoft Excel for descriptive statistics).
Research areas. Web of Science automatically provides the research area of each paper as part of the abstract dataset, in the column entitled "WoS categories." Scopus does not include information on the research area in the exportable abstract dataset. The list of research areas is, however, available on the search result webpage of Scopus as part of the filter tool entitled "Subject Area." This search filter makes it possible to organize and extract the abstract from the dataset in different subsets corresponding to different research areas. The next step was to refine the research area classifications proposed by Web of Science and Scopus. The list of research areas is indeed rich, but it is not uniform in Scopus and Web of Science, and an important number of the studies are associated with more than one research area. We thus simplified the list of research areas by merging neighboring disciplines and synonymous terms. We obtained a single simplified list of 12 research areas.
We used the following scoring system to calculate the portion of studies that refers to each research area. We assigned 12 points to a research area when a study referred to it as its sole research area; 6 points when a study referred to it plus a second research area; 4 points when a study referred to it plus two other research areas; and 3 points when a study referred to it plus three other research areas.
We summed the points assigned to each research area. We then converted the total scores, corresponding to each research area, to a percentage. To illustrate the distribution of studies across research areas, we created a bar chart with Excel.
Type of hazard and related topics. We manually screened the abstracts and articles to identify which hazard each study refers to. We identified six types of hazards (multiple hazards, disease outbreaks, COVID-19, floods, earthquakes, and hurricanes) and we counted the number of articles referring to each type of hazard. We finally calculated the percentage of studies referring to each type of hazard.
To further develop the analysis, we tried to explore other topics covered in each study and their relation to the type of hazard that we had previously identified. We followed different steps to explore these topics. First, we extracted the author keywords and the index keywords associated with each article (which were provided by Scopus and Web of Science in two dedicated columns of the abstract dataset). We merged the synonyms presented in Table 3.
In the next step, we produced a network representation with VOSviewer (Centre for Science and Technology Studies, 2022) based on the list of keywords and their co-occurrence in each paper. In the resulting network, each node corresponds to a keyword. The size of the node depends on how many papers refer to the keyword: the bigger a node, the greater the number of papers that cite it. A link between two nodes (i.e., between two keywords) appears if two keywords co-occur in the same paper. A color code is used to identify different node clusters. The only nodes and clusters that appear in the network representation are nodes with at least 5 co-occurrences and clusters with at least 5 nodes. social network natural language processing sy natural language processing natural language processing systems natural language processing machine learning models machine learning machine learning processing machine learning Nlp natural language processing Pandemics pandemic Humans human Except for the first line, each line in the table specifies a label (in the "keyword" column) and an alternative label (in the "replaced by" column), meaning that the label was replaced by the alternative label. ARTICLE HUMANITIES AND SOCIAL SCIENCES COMMUNICATIONS | https://doi.org/10.1057/s41599-023-01838-0 The objective of the study. Manual screening made it possible to identify 5 general research objectives and 21 research subobjectives. Given that several studies in our sample refer to more than one general objective or sub-objective, we used the following scoring system to calculate the portion of articles that covers each general objective or sub-objective. We assigned 1 point to a general objective/sub-objective referred to in a study as its sole general objective/sub-objective. We assigned 0.5 points to a general objective/sub-objective referred to it in a study as a general objective/sub-objective plus a second general objective/ sub-objective. We summed up the points assigned to each general objective and sub-objective. We then converted the total scores assigned to each general objective and sub-objective to a percentage. We created a Sankey plot, a flow diagram, with Sankey-MATIC (Bogart, 2022) to illustrate the distribution of studies across the 5 general objectives and the 21 sub-objectives.
The geographical location of the sponsor. When an article refers to the organization that funded the study, Scopus and Web of Science provides this information in a dedicated column of the abstract dataset. Of the 266 papers that constitute our corpus, 90 refer to a funding organization. We labeled each of the 90 papers manually with the location of the funding organization. The location corresponds to a country or a region (in the case of studies funded by the European Union). We identified 33 different countries. We counted the number of studies associated with each sponsor's location and identified 7 ranges of values: (i) 25 papers; (ii)at least 14-16 papers; (iii) at least 12-13 papers; (iv) 11 papers; (v) 6 papers; (vi) at least 3-4 papers; and (vii) at least 1-2 papers. We defined a color code, with 7 different colors corresponding to the different ranges of values, so that we could use a map to illustrate which countries are associated with which value ranges.
We then wanted to verify if the research efforts vary in each country because of varying degrees of COVID-19 impact. Hence, we compared the number of publications per country with the local number of deaths due to COVID-19 (number of deaths per 1 million population reported by Worldometer (2023)).
Results and discussion
This study consists of a meta-analysis of 266 eligible studies on the topic of AI tools to deal with misinformation on social media during hazards and disasters. As described in the previous section, for each eligible paper we analyzed its objective, the sponsor's location, the year of publication, the research area, the type of hazard, and its related topics according to different methods. In this section, we present the results obtained for each of our five research variables.
Year of publication. Figure 3 illustrates how many studies in our sample have been published each year since 2010. We can observe that the number of publications per year starts to increase in 2020 with 32 articles, and an important peak follows in 2021 with 148 publications. Given that the papers were collected up until 1 July 2022, we cannot observe the results for the full current year. One study is dated 2023 because it was published before the journal revision process was finalized. These results confirm that since 2020 there has been a fast and significant development in the number of studies dedicated to AI tools to deal with misinformation on social media during hazards and disasters. We can infer that this trend is due to the COVID-19 pandemic. Indeed, this research topic is strongly connected to the information disorders that occurred during the pandemic.
Research area. Figure 4 shows the per research area distribution of the studies included in our sample. The chart highlights that the largest portion of papers (50.3%) concerns studies in the field of "Computer Science"; "Engineering" (12.8%) and "Medicine" (12.4%) appear as the second and third most relevant research areas in our corpus. We can also observe that "Social Sciences" (5.8%), "Humanities and Communication" (3.5%), "Business, Management and Decision Sciences" (3%), and "Psychology and Neuroscience" (1%) seem underrepresented, given that these last four research areas are strongly connected to human reasoning. This result could be explained by the use of different terminology in different scientific fields: it is possible that the research areas that are underrepresented rarely use terms such as "detect," "monitor," "prevent," "screen," "AI," "artificial intelligence," namely the search keyterms that we used to select our corpus of studies.
Type of hazard and related topics. Figure 5 illustrates the per hazard type distribution of the studies included in our sample. The chart clearly shows that a striking majority of studies concern COVID-19. This result seems to confirm our hypothesis that the context of the pandemic strongly contributed to the rapid increase in publications. On the other hand, studies that concern other types of hazards (i.e., floods, earthquakes, hurricanes, multiple hazards, and disease outbreaks other than COVID-19) seem underrepresented in our corpus. Figure 6 shows a network representation with 71 nodes and four clusters of nodes. The network links highlight if two keywords (the nodes of the network) are cited in the same paper. One of the biggest nodes corresponds to the keyword "COVID-19", which means that many papers in the corpus refer to this keyword. This finding confirms the result presented in Fig. 6 (i.e., the majority of the studies in our sample concern COVID-19). This is also confirmed by 16 smaller nodes related to the topic of COVID-19, while other types of hazards are not covered by the keywords that appear in the network. We can also observe that another relevant node is "social media." This result is because "social media" is among those key terms that we used to select our corpus; the vast majority of papers in our sample thus include this key term.
We can observe four clusters of nodes: each cluster brings together the keywords that frequently co-occur. Most of the keywords included in the red cluster (with 21 nodes) and the blue cluster (with 15 nodes) refer to the research scope. For instance, the blue cluster includes keywords such as "coronavirus," "infodemic," "public health," and "information dissemination,"
ARTICLE
HUMANITIES AND SOCIAL SCIENCES COMMUNICATIONS | https://doi.org/10.1057/s41599-023-01838-0 and the red cluster includes keywords such as "rumor detection," "vaccine hesitancy," and "Twitter." Both clusters include keywords referring to COVID-19 (6 keywords in the red cluster and 8 keywords in the blue cluster), while other hazards do not appear among the keywords. The red cluster includes four keywords that refer to AI methods ("machine learning," "supervised learning," "topic modeling," and "sentiment analysis"), but the number of keywords of this type is small in comparison to those in yellow cluster and the green cluster.
Indeed, the yellow cluster (with 15 nodes) and the green cluster (with 20 nodes) include a majority of keywords referring to methods of analysis and more specifically to AI techniques. These keywords (25) come from the jargon of "Computer Science." This result is consistent with the data in Fig. 4 which highlight that "Computer Science" is the most prolific research area on the topic of AI tools to deal with misinformation on social media during hazards and disasters.
We can also observe that the yellow cluster does not include any keyword referring to hazards, while the green cluster includes only two words related to COVID-19.
The cluster structure highlights that part of our sample studies, through the keywords selected by the authors and the editors, is identified as a contribution to the research on COVID-19 information. Another part of the study is identified for its contribution to the development of new or improved AI methods.
The objective of the study. Figure 7 is a Sankey plot that illustrates, on the left, the research general objectives and, on the right, the corresponding sub-objectives, which are covered by the studies included in our corpus. We can see that a huge variety of general objectives are covered: from the detection of misinformation, impact assessment, and content analysis to the identification of the causes of misinformation and combating misinformation. The sub-objectives are also very diverse: from multilingual detection or bot-debunking to dissemination pattern monitoring or the analysis of the heuristic process, etc.
The plot highlights that most studies in the corpus refer to "detecting misinformation" (68%) as a general objective and to "classification" solutions (52%) as a sub-objective. These studies provide solutions to identify unreliable information but do not Fig. 6 Visualization of the abstract dataset as a network. The author keywords and the index keywords are depicted as nodes, and their co-occurrence in each publication is shown as links between two nodes. One of the most prominent nodes in the network corresponds to the keyword "COVID-19". The network can be further divided into four clusters, where each cluster groups together keywords that frequently appear together. The red and blue clusters, with 21 and 15 nodes respectively, consist mainly of keywords related to the research scope. In contrast, the yellow and green clusters, with 15 and 20 nodes respectively, contain mostly keywords related to analytical methods, especially AI techniques. These 25 keywords are part of the specialized language of "Computer Science".
directly deal with the question of "combating misinformation," a general objective that only 6% of the studies have.
Location of the sponsoring organization. According to Fig. 8, few countries are major funders of research on the topic of AI tools to deal with misinformation on social media during hazards and disasters. The United States is the most frequent funder (with 25 papers), followed by China, Spain, and Italy (with between 14 and 16 papers) in second position. The countries that have between 11 and 13 papers are all located in the European Union and can therefore access the programs funded by the European Commission. The number of sponsored papers per country is also presented in Fig. 9 and compared with the number of deaths due to COVID-19 per million population in each country (Worldometer, 2023). As we can see in Fig. 9, three countries (United States, Italy, and Spain) with the highest number of publications (between 14 and 25) are among the countries with the highest number of deaths due to COVID-19 per million population (between 2500 and 3500), leaving aside China which undercounts COVID-19 deaths according to the World Health Organization (Wang and Qi, 2023). Nevertheless, we can also notice that other countries with a very high number of deaths due to COVID-19 per million population (such as Brazil, Mexico, and Slovakia) have a very limited number of publications (between 1 and 4).
Conclusions and perspectives
This study aims to provide new insight into the research gaps that need to be filled on the topic of AI tools to deal with misinformation on social media during hazards and disasters. Such a meta-analysis will contribute to developing a communication model based on social media moderation and recommendation algorithms that are aligned with human rights and journalism ethics.
The results confirm that after the COVID-19 pandemic, there was a marked acceleration in the number of scientific publications per year on the topic of AI tools to deal with misinformation on social media related to hazards and disasters. This trend mainly concerns papers on COVID-19, while other risks are covered by a minor share of publications. We suggest that results developed in the framework of research on the COVID-19 pandemic could be exploited to enhance research advances on other risks. On the other hand, caution should be taken when interpreting the results. The trends we describe below characterize the studies on COVID-19 that are dominant in the sample we examined, and they cannot be generalized to the studies on other risks, as these are underrepresented in the sample.
The results suggest that research in the fields of social science, decision science, psychology, humanities, and communication is underrepresented if we consider that the topic, "AI tools to deal with misinformation on social media during hazards and disasters," is strongly connected to human reasoning. This result may be because social scientists rarely refer to detection, monitoring, prevention, screening, or artificial intelligence. This trend may be indicative of the limited involvement of social scientists in the design of AI detection tools.
There is a gap to be filled by supporting these research areas that are essential to enhancing the protection of human rights and Fig. 7 Sankey plot displaying the distribution of studies across general and sub-objectives. The plot was created using sankeymatic.com (Bogart, 2022): it provides a summary of the diverse range of general objectives and sub-objectives covered in our corpus. Additionally, it highlights that the majority of studies focus on "detecting misinformation" (68%) and "classification" solutions (52%) as their general objectives. journalism ethics. For instance, these research areas contribute to reflections on the regulatory, digital, or educational solutions that support digital inclusion and critical thinking.
The results also highlighted that most of the studies are dealing with the issue of detecting misinformation. This remark opens up new research questions: is the decision to filter the news left to the discretion of individual users? Are the individual user's considered active actors in the attempt to combat misinformation? Do researchers and practitioners have the same vision? A reflection on the optimum balance between algorithm recommendations and user choices seems to be missing.
Finally, the results section shows that there are a few countries, the main one being the United States, that fund research on the topic of "AI tools to deal with misinformation on social media during hazards and disasters." We can suppose that the high impact of COVID-19 contributed to increasing the research efforts on the topic. Nevertheless, this was not the only factor that determined the number of publications per country, as not all the countries that have been strongly affected by COVID-19 also have a high number of publications.
In the future, it would be interesting to compare these results with other data on digitalization trends at the national level-for instance, in the industry or education sectors-to verify if this trend is correlated with the leadership of a few countries in the field of digitalization.
The major research question of our work was about the kind of gatekeepers (i.e., news moderators) we wish social media algorithms and users to be when we are dealing with misinformation on hazards and disasters. In our view, gatekeeping should be based on communication standards that are aligned with international human rights and journalism ethics. These general principles need to be translated into operational guidelines that are tailored to the context of social media and its rapid evolution. Here, future research can play a key role by providing the knowledge required to develop and implement these operational guidelines. However, several research gaps must be filled on this topic, as we highlight in our study. Given these considerations, it seems to us essential that policies and programs encourage research on the topic of AI tools to deal with misinformation on social media: 1) about risks other than COVID-19 2) in the fields of social science, decision science, psychology, and humanities 3) with particular attention to the complementary role played by algorithms and users in gatekeeping, 4) as well as to the less digitally competitive countries. This policy framework would be essential to develop a communication model based on social media moderation and recommendation practices that are aligned to human rights and journalism ethics.
Data availability
The data that support the findings of this study are available from Scopus and Web of Science but restrictions apply to the availability of these data, which were used under license for the current study, and so are not publicly available. Data are however available from the authors upon reasonable request and with permission of Scopus and Web of Science. | 8,719.2 | 2023-06-17T00:00:00.000 | [
"Environmental Science",
"Computer Science",
"Sociology"
] |
Programmable Data Gathering for Detecting Stegomalware
The “arm race” against malware developers requires to collect a wide variety of performance measurements, for instance to face threats leveraging information hiding and steganography. Unfortunately, this process could be time-consuming, lack of scalability and cause performance degradations within computing and network nodes. Moreover, since the detection of steganographic threats is poorly generalizable, being able to collect attack-independent indicators is of prime importance. To this aim, the paper proposes to take advantage of the extended Berkeley Packet Filter to gather data for detecting stegomalware. To prove the effectiveness of the approach, it also reports some preliminary experimental results obtained as the joint outcome of two H2020 Projects, namely ASTRID and SIMARGL.
I. INTRODUCTION
Modern business models are increasingly demanding for more agility in the creation, operation and management of ICT services. To this aim, new computing paradigms are being progressively introduced, which leverage virtualization and cloud, as well as service-oriented architectures to interconnect software, devices, and data over a pervasive and seamless computing continuum encompassing different technological and administrative domains, i.e., IoT, data centers, and telco infrastructures. Despite of the benefits in terms of time-tomarket and dynamicity, novel paradigms outdate the legacy security perimeter model and lead to additional threats [1].
The difficulty to effectively deploy cyber-security appliances in virtualized and cyber-physical systems is making cloud and network services the preferred target for a wide-range of novel attacks. As a matter of fact, the growing adoption of micro-services and service mesh architectures to implement large digital value chains creates interdependencies among software processes in different domains and paves the way for multi-vector attacks that combine together social engineering, malware, steganography, software bugs, and network vulnerabilities. The effective detection of threats in such scenarios requires to collect and correlate even apparently independent events from multiple subsystems, which is not possible for legacy standalone security appliances.
According to recent reports from cyber-security vendors, attacks are becoming ever more complex and stealthy, in order to elude well-known detection techniques based on signatures and behavioral patterns. As a paradigmatic example, steganography can be used to hide the presence of malicious code in digital media and network traffic is used as the carrier to covertly exfiltrate data or to stealthily orchestrate nodes of a botnet. Therefore, many recent attacks are difficult to detect and such a trend is expected to continue to grow [2].
Under this evolutionary scenario, programmatic access to data and events is an important requirement to improve the likelihood of detecting stealthy software and communications. In this paper, we show how research outcomes from complementary projects can be joined to this purpose. We consider the challenging scenario of detecting stegomalware in virtualized services, which encompasses both cloud applications and network functions virtualization. The platform for lightweight and programmatic monitoring and inspection is taken from the ASTRID project 1 , whereas SIMARGL 2 aims at developing scalable mechanisms to counteract novel malware endowed with steganographic techniques and crypto-lockers. While detection of network attacks has been largely discussed in the literature, information-hiding-capable threats pose new challenges, as they exploit bandwidth-scarce channels and their detection is a poorly generalizable process [3]- [5].
In more detail, we investigate the use of the programmable monitoring and inspection framework developed by ASTRID to collect data and measurements from virtualized environments, which are usually difficult to gather in an efficient way with existing tools. In addition, since the detection of steganographic threats is tightly coupled with the used carrier embedding the secret (e.g., the enumeration of sockets, the acquisition of a lock on a file or the manipulation of field within the header of a protocol), instrumenting virtual machines and network nodes without impacting their performances could be a hard and time-consuming task. To this aim, we leverage the extended Berkeley Packet Filter (eBPF), a framework integrated in the Linux kernel for the inspection of system calls, e.g., page faults and traffic stimuli [6]. Obtained measures can be then evaluated with toolkits envisaged in SIMARGL to reveal malware, hidden Command & Control (C&C) attempts or attacks targeting virtualized architectures.
Summarizing, the contributions of this paper are: i) the review of works enforcing security through virtualization; ii) the design of an architectural blueprint to integrate ASTRID and SIMARGL with emphasis on the flows of information that can be exchanged to perform detection of emerging threats; iii) the identification of malware taking advantage of steganography and the various technological behaviors that should be considered to collect the data both to reveal and neutralize the attack; iv) a preliminary experimental campaign on the use of eBPF to support the detection of stegomalware exploiting a local covert channel between two malicious software components.
The remainder of the paper is structured as follows. Section II reviews previous works on virtualization and security, while Section III provides the background. Section IV deals with the architecture combining the features of ASTRID and SIMARGL. Section V discusses technical aspects of data gathering and detection, and Section VI showcases a preliminary performance evaluation on the use of eBPF for detecting stegomalware. Section VII concludes the paper and hints at some possible future developments.
II. RELATED WORKS
The use of virtualization to support security-related duties and the mitigation of attacks targeting virtual services has been already partially investigated in the literature. For instance, the approaches proposed in [7], [8] take advantage of an orchestrator for controlling pervasive and lightweight security hooks embedded in the virtual layers of cloud applications. The work in [9] discusses a mechanism to enhance a network hypervisor with new functions for implementing a flexible monitoring service. The proposed approach can ease the engineering of monitoring and security-related services, especially when in the presence of complex architectures based on micro services or cloud technologies. For the case of cybersecurity applied to networking, Deep Packet Inspection (DPI) is an important technique as it allows to evaluate several aspects of a flow, including alterations in the header adopted by many covert channels [3], [10]. Indeed, network virtualization is often at the basis of large-scale scenarios where engines responsible of performing DPI can be deployed as software components on commodity hardware. To this aim, [11] proposes an approach for their dynamic placement to contain power consumptions and costs while delivering suitable degrees of scalability and performance. A more comprehensive framework is discussed in [12], where authors propose a virtualized security architecture to enforce integrity of virtual machines, isolate higher software layers, and provide adaptive network security appliances (e.g., intrusion detection systems and firewalls) encapsulated within virtual machines. Another important aspect concerns the definition and the implementation of effective orchestration policies. A possible idea exploits meta-functions to dynamically construct security services for satisfying various security requirements [13].
Concerning the detection of steganographic malware, at the best of our knowledge, there are not any works taking advan-tage of virtualization to gather data or neutralize attacks. In fact, literature abounds of works on the sanitization of carriers or the normalization of network traffic (see, e.g., references [5] and [3] and the references therein) but not any prior work investigates how a covert channel or a steganographic threat could be detected or prevented by means of virtualization. A variety of works address the security of virtualized architectures, but they mainly focus on the following classes of hazards [14]: access control of resources, DoS and DDoS attacks, virtualization to build the network infrastructure, and security management of virtualized assets. Even if the security risks caused by mobility, migration and hopping of virtual machines are well understood [15], steganographic threats are solely discussed without considering the overall architecture or the peculiarities of large-scale virtualized network scenario. Rather, they are addressed for very specific cases, e.g., colluding containers or virtual machines, but without gathering data via agents automatically deployed [10], [16].
III. BACKGROUND
Despite a unique and precise definition is missing, the term stegomalware is used in SIMARGL to identify malware endowed with steganographic functionalities or able to exploit a covert channel [4]. In general, an attacker deploys steganography for the following use-cases. Colluding Entities: they are those trying to create a covert channel to exchange data within the single host, mainly to bypass the security policies deployed in the underlying software and hardware layers, the guest OS or the hypervisor [5], [19]. Thus, colluding applications implement a sort of unauthorized inter-process communication service between several software entities, for instance, virtual machines and containerized applications, as well as regular applications and processes. To exchange data, the typical approach sets a local covert channel built by modulating the space available in the file-system, the CPU load or the state of TCP sockets. To detect such communications, two main techniques are available: monitor the syscalls handling the I/O or the access to the used carrier (e.g., a temporary file) or evaluate the sleep/wake patterns of processes as to identify those having overlapping behaviors. In general, this attack can be used to exfiltrate data from an entity to another, e.g., to allow a containerized application handling sensitive data without network privileges to push the stolen information to another container with less constraints [16]. Moreover, virtual machines could also cooperate for reconnaissance purposes, e.g., to allow the malware to determine if it is confined on a honeypot or to map the underlying physical infrastructure [22]. Network Covert Channels: they permit to stealthily transfer data by injecting the information within a network artifact acting as the carrier. To this aim, the sender can modulate the content of packets or alter some traffic features, e.g., he/she encodes information by manipulating the inter-packet time with proper delays. For the case of malware, network covert channels are typically used for: data exfiltration, implementation of a C&C infrastructure, development of cloaked transfer
Threat
Behavior of Interest Sample Measures Refs.
Colluding Applications Activation statistic of processes or threads stack trace of waker, total blocked time [17], [18] Colluding Apps. or Containers Monitoring type and number of calls to VFS type of VFS func, count [16], [19] Colluding Virtual Machines Memory usage to infer anomalies page cache hit/miss, read and write hit [16], [20] CC manipulating file-system Processes that are performing disk I/O type of op, disk, number of op [3], [10], [17] CC manipulating CPU Time spent using the CPU duration, count, load distribution [3], [10], [17] CC manipulating files Reads and writes performed reads, writes, r kb, w kb, type [3], [10], [17] CC enumerating sockets TCP state change information socket address, pid, cmd, state, duration [3], [10], [17] Miners, Cryptolockers and CC Block disk I/O activity and performance I/O latency, duration and count of operations [4], [5] Cryptolockers Type of disk I/O operations and CPU usage type of op, kbytes in R/W [21] services for retrieving additional software components, botnet orchestration, and elusion of firewall rules [3], [5], [10]. Cycle-stealing threats, ransomware and cryptolockers: despite their precise characterization is still an open problem, such threats share the aggressive usage of resources when attacking [23]. For instance, malicious code used for mining cryptocurrencies or orchestrating a botnet will impact on the average utilization of computing or network resources, while the encryption of content stored on the file-system accounts for a significant volume of syscalls handling I/O operations. Therefore, being able to filter and monitor specific events within the kernel is a prime requirement for building effective indicators. This is even more important since many malware increase their stealthiness by adopting "time bomb" mechanisms triggering the execution after a predefined period of time. This can reduce the chance of spotting the attack via a cause-effect observation (e.g., a lag in the graphical user interface due to excessive loads on the CPU) [24]. Similarly to the case of stegomalware, also cycle-stealing threats and ransomware require the access to different usage statistics, which are difficult to define a priori. A possible idea exploits abstract indicators, such as the used power [17]. In this case, being able to perform measurements "close" to the device driver or within the kernel is definitely important. Table I resumes the most popular attacks discussed in the literature, the behavior of interest, as well as some possible measurements to perform detection. For instance, in order to detect colluding applications or containers, a possible approach could monitor the behavior of the Virtual File System (VFS).
IV. MONITORING VIRTUALIZED SERVICES
Model-driven engineering is being increasingly used both in the cloud and NFV domain to design high-level process models (defined as service graphs), which are then dynamically and automatically mapped into software functions and infrastructural resources based on the evolving context by orchestration tools. This approach brings unprecedented agility in life-cycle management of digital services, but has not been yet properly integrated with cyber-security paradigms.
The ASTRID architecture chases an effective solution to deliver multiple cyber-security services for virtualized applications. Its workflow starts with the revision of the service graph, made by cyber-security experts (see the chain at the top of Figure 1), before it is made available to service providers for automatic deployment through software orchestration tools.
Context Broker
Control plane Data plane Fig. 2. Software architecture of the ASTRID platform.
The revision provides indications about security agents to be included in the deployment plans and the connection to the ASTRID platform. The definition of security agents as part of the design allows to deploy them whenever and wherever necessary, hence following the evolution of the service topology at run-time (e.g., scaling, replication, and replacement operations). As depicted in Figure 1, the ASTRID platform is conceived as a mediation layer between the detection and analysis logic and the physical and virtualization environments where virtual applications run. This design allows to disaggregate detection and analysis algorithms from the necessary hooks in the monitored system, so many different cyber-security appliances can be loaded and run at run-time, depending on the specific needs of the service and the evolving threat landscape. The distinctive and innovative approach of ASTRID is the integration with software orchestration tools to automate the deployment and management of security agents. The ASTRID platform allows both to change the configuration and behavior of single agents and to perform management actions of the whole graph, through the service orchestrator. Possible actions include the deployment of additional inspection and monitoring processes and the replacement of compromised functions. The overall architecture of ASTRID includes a Context Broker, a Security Controller, a user Dashboard, complementary Security Services, and plugins to interact with different orchestration software [7]. In this paper, we only focus on the Context Broker, which is responsible to programmatically collect the security context from local security agents deployed within virtual functions and to feed the SIMARGL toolkit in an effective way. Figure 2 shows the part of the ASTRID software architecture that is relevant for this work. It includes local agents deployed in each virtual function as security sidecars and a centralized Context Broker for collection and analysis of security-related data and measurements. We do not consider the external service orchestrator that deploys local agents and the other ASTRID components (Security Controller and user Dashboard), because they are not used in the rest of this work. The implementation of local agents and the Context Broker largely builds on and extends the proven Elastic Stack framework. It includes common agents (defined as beats in the Elastic jargon), for the collection of log files, system metrics, and packet statistics. Additional agents are being developed by ASTRID to run eBPF programs. The proposed framework can also implement a common Local Control Plane (LCP) for all agents in order to provide a unified interface for changing the configuration of Elastic beats, loading Logstash pipelines, and injecting eBPF programs into the local environment.
The Context Broker implements an abstraction of the service topology and current configuration of security agents allowing to know the type and structure of data that are delivered from them. The interaction between the Context Broker and local agents happens on two different planes: control and data. The control plane is used to access the LCP, whereas the data plane is the logical channel to push events, measurements, and data from local agents to the Context Broker, through a Kafka bus. The architecture of the Context Broker is based on the Elastic Stack as well. At the data plane, it stores data from local agents in Elasticsearch, and includes a Logstash instance for performing aggregation and data fusion. Moreover, a Time Series database is also provided for collection of big historical data. The Context Manager is the logical subcomponent that implements control and management actions. It discovers available security agents and their capabilities and it is used to change their configuration and to inject programs at runtime. The Context Manager implements a REST API interface for querying the internal databases and to ask configuration changes. Messages on the Kafka bus from local agents may be received directly from security appliances, by subscribing the corresponding topics.
A. Towards the Integration with SIMARGL
To integrate the aforementioned software layers with SIMARGL, the toolkit has to subscribe for specific measurements/events to the Context Manager, which configures local agents via the BpfProcessing agent 3 . Once configured and activated, it starts sending events on the Kafka bus. Data is caught both by the central Logstash instance and the SIMARGL toolkit: the former stores data in Elasticsearch also to make information available for off-line processing. The SIMARGL toolkit listens on the Kafka bus to retrieve data as soon as they are published. Alternatively, it could periodically query Elasticsearch. We point out that algorithms implemented for detecting stegomalware can define and enumerate at runtime the needed measurements, and they can even load into ASTRID new eBPF programs for tailored data and statistics. This represents a major advance with respect to existing technology as recent cybersecurity appliances and tools are quite flexible in the definition of the context to be collected (i.e., logs from specific applications, packet filters, and event notifications), but are rather rigid in the management. In fact, if a monitoring tool (e.g., a log collector or a packet classifier) is not available a manual intervention from humans is required. In a similar way, reaction is often limited to traffic diversion, packet filtering, and changes in access rules.
V. DATA GATHERING FOR STEGOMALWARE
The broad range of potential covert channels and carriers to be used for steganographic purposes makes the detection of stegomalware a challenging task. As a matter of fact, stegomalware may leverage network packets, file system properties, memories and caches, CPU performance, and so on. Common detection mechanisms based on the analysis of log files and network statistics could be largely ineffective in this respect.
The detection of stegomalware requires temporal and spacial correlation of fine-grained system properties and actions. DPI can be used to spot anomalous usage of some header fields, while system tracing is required to analyze the behavior of the operating system. For example, by only narrowing the discussion to the most popular attacks targeting the single host/node, we can report the following considerations: • Colluding Entities: typical attacks modulate the space available in the file-system, the CPU load or the state of TCP sockets. To detect them, two possible techniques can be used: i) monitor the syscalls handling the I/O or the access to the used carrier (e.g., a temporary file) or ii) evaluate the sleep/wake patterns of processes as to identify those having overlapping behaviors, thus possibly colluding. • Cryptolockers (or ransomware): in general, such threats produce a huge load on the filesystem to encrypt its content. Thus, monitoring operations on the I/O could be an effective indicator.
Several data sources are already present in the kernel to follow the execution of system calls and internal functions: kprobes 4 , uprobes 5 , tracepoints 6 , dtrace-probes [26], and lttngust. Multiple tools are available to collect information from these tracing hooks (ftrace, perf, sysdig, SystemTap, LTTng), but eBPF is probably the most powerful framework for gathering data in the perspective of detecting information-hidingcapable threats. Specifically, there are no additional modules to install in the kernel, custom programs can be defined and attached to tracing hooks to do any kind of aggregation and lightweight processing. As an example of the broad range of applicability of the eBPF framework, Table II lists existing programs from the BPF Compiler Collection 7 (BCC) useful to detect some well-known covert channels.
Put briefly, BCC is a toolkit for creating efficient kernel tracing and manipulation programs, and already includes several tools for tracing the most common features. Besides, it allows writing eBPF programs in C language and compiling them with LLVM. It also includes front-ends in Python and Lua. ASTRID improves this framework by feeding Elastic Stack with the output from eBPF programs.
This framework can be used to implement an effective suite of probes to gather the data needed by the SIMARGL toolkit for detecting stegomalware. Through ASTRID, investigation should start from eBPF programs to detect high-level indicators, while additional programs are then injected as soon as the scope is narrowed. Since there is no dependency on a specific set of programs, new programs can be implemented at run-time to cope with new or unknown covert channels and threats using information hiding.
VI. EXPERIMENTAL RESULTS
To prove the effectiveness of eBPF-based tracing, we selected a colluding applications threat where two endpoints exchange data through the chmod-stego technique. The chmodstego is a covert channel that injects secrets in Unix file privilege numbers. The application is made of two peers, a sender and a receiver. The sender encodes data in the privilege of a set of files within a given directory, while the receiver listens for the specified file permissions. Then, according to changes, it decodes the secret message.
Since the chmod-stego technique is based on the manipulation of the file-system, the most straightforward way to design a detection strategy is by tracing the __x64_sys_chmod kernel function, which provides better indications than more generic I/O activity (e.g., read/write operations through __x64_sys_read and __x64_sys_write).
To validate such an idea, we investigated if and under what conditions a steganographic transmission could be detectable during system activity. We created an experimental setup composed of a Virtual Machine with Debian GNU/Linux 10 (buster) running the Linux kernel 4.20.9 and the aforementioned chmod-stego 8 steganographic method. A kernel compilation was run as the main system activity, which entails many I/O system calls and can be easily replicated for comparison.
In essence, such a scenario allowed to investigate the presence of a malware which exfiltrates data or bypasses some local security policies, e.g., the sandboxes.
To gather data, a simple eBPF filter was injected to trace invocations of the __x64_sys_chmod kernel function and to report its relevant parameters, i.e., file and permissions, the Process ID (PID), and the Thread ID (TID).
We performed two different tests. The first aimed at evaluating the tradeoff between the steganographic bandwidth of the covert channel and its detectability. To this aim, we fixed the length L of the secret message to be transmitted and we varied the time between the transmission of two consecutive characters, denoted as Δt. Specifically, we conducted trials with L = 30 and Δt = 0.5, 5, 10, 20 s. In the second round of tests, we investigated the influence of the size of the data exchanged between the two colluding applications. Hence, we set Δt = 5 s and we performed trials with L = 30, 60, 90, 120 characters, which may be representative of the exfiltration of a PIN, a cryptographic key or the information of a credit card. In both tests, the "clean" configuration has been considered the one characterized by the load of traced kernel functions due to the compilation of the Linux kernel 5.5.5. All the trials lasted 10 minutes. We point out that such parameters allowed to consider a wide range of threats (e.g., slow and long communications characterizing advanced persistent threats or malicious applications wanting to exfiltrate as quick as possible sensitive information) while guaranteeing the adequate statistical relevance. __x64_sys_chmod kernel functions at the begin. In fact, higher transmission rates reduce the time needed to transmit the secret message. This can be viewed in Figure 3(b), where the instantaneous time evolution is shown. Similar results have been observed for the second set of experiments, which are showcased in Figure 4. In this case, the steganographic bandwidth is fixed and the length of the message is the unique factor that makes the transmission more or less detectable.
For what concerns detection, in general, channels with a higher steganographic bandwidth and longer messages are easier to detect. Indeed, they imply either sudden peaks or larger volumes of __x64_sys_chmod kernel functions. Clearly, on-line detection is not straightforward, because of the difficulty to find an effective decision rule able to discriminate between legitimate usage and the presence of hidden transmissions for different use cases. Luckily, for the case of chmod-stego technique, a possible signature is given by a quick change in the volume of __x64_sys_chmod kernel functions at the end of the trials. This is due to the sender that restores the original file permissions, as to avoid the detection by common file system monitoring tools. Moreover, taking into account additional parameters available from tracing (e.g., the file names) can be used to further improve the likelihood of the detection. In this perspective, the SIMARGL toolkit will take into account multiple complementary indicators, possibly independent of the specific threat.
VII. CONCLUSIONS AND FUTURE WORKS
In this paper, we showcased how the framework developed within ASTRID can be used by the SIMARGL toolkit for the detection of novel threats, such as stegomalware and cryptlockers. To prove the effectiveness of this vision, we presented a preliminary performance evaluation on the use of the eBPF to collect variable of interests. The provided data can then be used to feed detection models or create datasets to train machine-learning-capable techniques.
Future works aim at refining the approach. In particular, the main objective is the definition of a more programmatic process to progressively narrow down the scope from generic indicators to fine-grained tracing of execution patterns for specific covert channels or information-hiding-capable threats. In this respect, ongoing research deals with the development of threat-independent signatures such as energy consumption, RAM usage patterns, and the time statistics of running processes. | 6,105.4 | 2020-06-01T00:00:00.000 | [
"Computer Science"
] |
The Effelsberg survey of FU~Orionis and EX~Lupi objects II. -- H$_2$O maser observations
FU Orionis (FUor) and EX Lupi (EXor) type objects are two groups of peculiar and rare pre-main sequence low-mass stars that are undergoing powerful accretion outbursts during their early stellar evolution. Water masers are widespread in star forming regions and are powerful probes of mass accretion and ejection, but little is known about the prevalence of them toward FUors/EXors. We perform the first systematic search for the 22.2 GHz water maser line in FUors/EXors to determine its overall incidence to perform follow-up high angular resolution observations. We used the Effelsberg 100-m radio telescope to observe the 22.2 GHz H2O maser toward a sample of 51 objects. We detect 5 water masers; 3 are associated with eruptive stars, resulting in a 6% detection rate for eruptive sources. These detections include one EXor, V512 Per (also known as SVS 13 or SVS 13A), and two FUors, Z CMa and HH 354 IRS. This is the first reported detection of water maser emission towards HH 354 IRS. We detect water maser emission in our pointing towards the FUor binary RNO 1B/1C, which most likely originates from the nearby deeply embedded source IRAS 00338+6312 (~4'', from RNO 1B/1C). Emission was also detected from H$_2$O(B) (also known as SVS 13C), a Class 0 source ~30'', from the EXor V512 Per. The peak flux density of H$_2$O(B) in our observations, 498.7 Jy, is the highest observed to date. In addition to the two non-eruptive Class 0 sources (IRAS 00338+6312 and H$_2$O(B) /SVS 13C), we detect maser emission towards one Class 0/I (HH 354 IRS) and two Class I (V512 Per and Z CMa) eruptive stars. We demonstrate the presence of 22.2 GHz water maser emission in FUor/EXor systems, opening the way to radio interferometric observations to study these eruptive stars on small scales. Comparing our data with historical observations suggest that multiple water maser flares have occurred in both V512 Per and H$_2$O(B).
Introduction
Low-mass young stellar objects (YSOs) are stars in the early stages of stellar evolution, specifically protostars and pre-main sequence (PMS) stars, which can undergo accretion-driven episodic outbursts. Studies of outbursting objects provide crucial information on the formation and the evolution of Sun-like stars. Amongst PMS stars, there are two small, but rather spectacular classes of outbursting low-mass YSOs: FU Orionis and Member of the International Max Planck Research School (IM-PRS) for Astronomy and Astrophysics at the Universities of Bonn and Cologne.
EX Lupi-type stars (FUors and EXors for short, respectively). Members of both classes show major increases in their optical and near-infrared (NIR) brightnesses. FUors can brighten by up to 5 -6 magnitudes in the optical, triggered by enhanced accretion from the accretion disk onto the protostar (Hartmann & Kenyon 1996;Herbig 1989). This phase can last for several decades, or even centuries (e.g. the recent review by Fischer et al. 2022, and references therein). For example, the prototype of the FUor class, FU Orionis, went into outburst in 1936 (Wachmann 1954), and remains in a highly active state. After a few other objects were observed to experience similar outbursts, Herbig (1977) defined the FUor class, which continues to increase in Article number, page 1 of 12 arXiv:2305.00736v1 [astro-ph.SR] 1 May 2023 size as new FUor-type objects are identified (e.g., Audard et al. 2014;Szegedi-Elek et al. 2020) and currently contains more than a dozen objects. The EXor class was defined by Herbig (1989), based on the properties of the prototype star EX Lupi, and currently also includes more than a dozen objects (e.g., Audard et al. 2014;Park et al. 2022). EXors can brighten by up to 1 -5 magnitudes in the optical and remain in a bright state for a few months or a few years (see e.g., Jurdana-Šepić et al. 2018); furthermore, their outbursts are recurring (e.g., Audard et al. 2014;Cruz-Sáenz de Miera et al. 2022).
Interstellar masers are powerful tools for studying the physics of star formation on small scales, frequently probing regions of enhanced density and temperature (e.g., Elitzur 1992;Reid & Honma 2014). While masers have been substantially used to probe both low-and high-mass star formation regions (e.g., Abraham et al. 1981;Omodaka et al. 1999;Hirota et al. 2011;Furuya et al. 2001Furuya et al. , 2003, so far little information exists on masers in FUors/EXors. Pioneering studies found compact maser emission in the 1720 MHz hyperfine structure line of hydroxyl (OH) toward the archetypal FUor V1057 Cyg (Lo & Bechis 1973). This emission, which comes from the immediate vicinity of the star (Lo & Bechis 1974) and is highly time variable (Winnberg et al. 1981), is unique in the literature. The 22.2 GHz transition of water (H 2 O) is the most widespread interstellar maser (see, e.g., Gray 2012, and references therein). It has been detected towards numerous low-to high-mass star forming regions in the Milky Way (see e.g. Ladeyschikov et al. 2022). Pumping models indicate that 22.2 GHz water masers are excited at elevated temperatures (∼500 K) and densities (10 8−9 cm −3 ), which are typically found in the compressed postshock regions of jets/outflows from YSOs (Elitzur et al. 1989a;Elitzur & Fuqua 1989;Gray 2012;Gray et al. 2022). With verylong-baseline interferometry (VLBI), multi-epoch observations of water masers associated with protostellar outflows can be used to study mass accretion and ejection (see, for example, Burns et al. 2016;Moscadelli et al. 2019). This suggests that water masers could potentially serve as valuable probes of mass accretion and ejection in FUors/EXors.
Despite the fact that water masers are closely associated with mass accretion and ejection in protostars, a systematic search for 22.2 GHz H 2 O masers in FUors/EXors has not yet been performed. Hence, the overall incidence of 22.2 GHz water masers in these classes of eruptive objects is unknown. In this paper, we present the first dedicated 22.2 GHz water maser survey of low-mass young eruptive stars, using the Effelsberg 100-m telescope. Our single-dish survey is a first step in investigating water masers in low-mass outbursting systems, aimed at investigating the existence and prevalence of water masers in these objects and identifying targets for follow-up interferometric observations. This paper is the second in a series (the first being Szabó et al. 2023) presenting radio and (sub)millimeter observations of FUors and EXors and their natal environments, and is organized as follows. In Sect. 2, we summarize our observations. In Sect. 3, we present our results, focusing on sources with water maser detections. In Sect. 4, we discuss our results, and in Sect. 5 we summarize our most important findings.
Observations
The H 2 O J K a ,K c = 6 16 − 5 23 transition (rest frequency 22235.0798 MHz, from the JPL Molecular Spectroscopy database 1 , Pickett et al. 1998) was observed simultaneously 1 https://spec.jpl.nasa.gov/ with the three lowest metastable NH 3 transitions ((J, K) = (1, 1), (2, 2) and (3, 3)), which were presented in Paper I (Szabó et al. 2023). The observations were carried out on 2021 November 18, November 23, and 2022 January 25 using the Effelsberg 100-m telescope in Germany 2 (project id: 95-21, PI: Szabó). The sample consisted of 51 sources: 33 FUors, 13 EXors, and 5 Gaia alerts. Gaia alert sources were chosen from the variable sources identified by the Gaia Photometric Science Alerts system (Hodgkin et al. 2021) based on light curve characteristics and luminosities similar to those of FUors/EXors. Five Gaia alert sources in our sample are yet to be classified; one source, Gaia18dvy, is listed with its Gaia alert name (Table B.1) but counted as a FUor based on its classification by Szegedi-Elek et al. (2020).
Our observations were performed in position-switching mode with an off-position at an offset of 5 east of our targets in azimuth. During our observations, the 1.3 cm double beam and dual polarization secondary focus receiver was employed as the frontend, while the Fast Fourier Transform Spectrometers (FFTSs) were used as the backend. Each FFTS provides a bandwidth of 300 MHz and 65536 channels, which gives a channel width of 4.6 kHz, corresponding to a velocity spacing of 0.06 km s −1 at 22.2 GHz. The actual spectral resolution is coarser by a factor of 1.16 (Klein et al. 2012).
At the beginning of each observing session, pointing and focus were verified towards NGC 7027. On 2021 November 18 we also targeted W75N, known for its H 2 O and NH 3 emission, to make sure that the system was working properly (see Appendix A). Pointing was regularly checked on nearby continuum sources, and was found to be accurate to about 5 . NGC 7027 was also used as our flux calibrator, assuming a flux density of ∼5.6 Jy at 22.2 GHz (Ott et al. 1994). The on-source integration time was 2.5 minutes per spectrum, and during each observing epoch, 4 spectra per source were obtained.
The majority of our sources were observed on 2021 November 18 and 23 (see Tables 2 and B.1). On 2021 November 18, we detected H 2 O maser emission toward V512 Per (SVS 13A), RNO 1B/1C, and HH 354 IRS. To study the time variability of the maser emission, we re-observed detected sources in as many subsequent epochs as possible (see Table 2), within the constraints of our allocated observing sessions. For Z CMa, which was known to have water maser emission (Moscadelli et al. 2006) but could not be observed in November 2021 due to time constraints, we searched for short-term maser variability by observing this source for two 4×2.5 minute blocks separated by 2.5 hours in January 2022. No variability was detected on this timescale, so all 8 spectra of Z CMa were averaged for the subsequent analysis. We note, that due to the weak detection of the water maser in HH 354 IRS, the spectrum was spectrally smoothed by a factor of 2 using the smooth built-in function in CLASS. The smoothed spectrum is presented throughout this paper. Having detected unusually high-amplitude (factor of ∼4 with respect to the previous observation) and rapid variability in the H 2 O maser spectra towards V512 Per (SVS 13A) (see Sect. 3.2.1), we also carried out nine-point observations and 1 ×1 On-The-Fly (OTF) mapping of this source on 2022 February 5 to investigate whether emission from nearby sources in the telescope sidelobes could be contributing to the observed emission. Consequently, we serendipitously detected strong water maser emission toward H 2 O(B) (SVS 13C), which is 30 from V512 Per (SVS 13A) (see Sect. 3.2.1 and 3.3.1). We also performed single-pointing observations towards H 2 O(B) during this epoch. We adopted the method introduced by Winkel et al. (2012) for our spectral calibration which resulted in a calibration uncertainty of about 15%. The half-power beam width (HPBW) was about 40 at 22 GHz and the main beam efficiency was 60.2% at 22 GHz. The conversion factor from flux density, S ν , to main beam brightness temperature, T mb , was T mb /S ν =1.73 K/Jy. Typical RMS noise levels for observations of detected sources are given in Table 2 and 3σ upper limits for non-detections are given in Table B.1.
The data were reduced using the GILDAS/CLASS package developed by the Institut de Radioastronomie Millimétrique (IRAM) 3 (Pety 2005;Gildas Team 2013). For each target, spectra observed on the same day were averaged to improve the signal-to-noise ratio prior to subtracting a linear baseline. Velocities are presented with respect to the local standard of rest (LSR) throughout this paper.
Results
Of our 51 targets, we detected >3σ water maser emission towards two FUors (Z CMa and HH 354 IRS) and one EXor (V512 Per/SVS 13A), corresponding to a detection rate of ∼6% towards eruptive stars. We also serendipitously detected water maser emission towards two non-eruptive embedded protostars, which we discuss in Sects. 3.3.1 and 3.3.2. The basic parameters of sources with maser detections, including types, coordinates, distances, and evolutionary classifications are listed in Table 1. In all, we detected water masers in two non-eruptive Class 0 sources (IRAS 00338+6312 and H2O(B)/SVS 13C) and in one Class 0/I (HH 354 IRS) and two Class I (V512 Per/SVS 13A and Z CMa) eruptive objects, using the standard classification scheme (see, e.g., Greene et al. 1994;Evans et al. 2009).
For sources with water maser detections, we fitted each velocity component with a Gaussian to obtain its LSR velocity ( LSR ), line width (∆ ), and peak flux density (S ν ), given in Table 2. The peak flux densities of detected water masers vary from 0.11 Jy to 498.7 Jy, spanning over 3 orders of magnitude. The observed maser velocities are within 10 km s −1 of the systemic cloud velocities measured from NH 3 emission. While shock velocities of 50 km s −1 are expected in theoretical models (e.g., Elitzur et al. 1989b), the modest velocity offsets between water masers and dense gas observed in our sample are generally consistent with observations of water masers towards high-mass YSOs (e.g., Urquhart et al. 2009;Cyganowski et al. 2013, Fig. 4 and Fig. 16 respectively). Isotropic H 2 O maser luminosities, L H 2 O , were calculated as (e.g., Anglada et al. 1996;Urquhart et al. 2011;Cyganowski et al. 2013): where D is the distance to the target (see Table 1). Estimating the isotropic H 2 O maser luminosities of individual velocity components separately, we find a range of L H 2 O of 7.9×10 −10 L to 6.1×10 −7 L (see Table 2). In the following subsections, we discuss our results for sources with detected water masers. Our non-detections are presented in Appendix B, where Table B.1 lists the targeted sources along with their types, coordinates, 3σ upper limits, whether they were previously searched for 22.2 GHz maser emission and 3 https://www.iram.fr/IRAMFR/GILDAS/ if so the reference, the date of observation in the current survey, their classification and reference, and distances.
For 31 sources in our sample, no previous observations of the 22.2 GHz water maser line have been reported in the literature.
Z CMa
Z CMa consists of an FUor (southwest component) and a Herbig Ae/Be star (northeast component) that are only 0.1 apart (Koresko et al. 1991;Bonnefoy et al. 2017). Figure 1 shows the H 2 O maser spectrum observed toward Z CMa, the only source among those detected observed at only one epoch (Sect. 2). As shown in Figure 1, there is only one bright maser feature, at LSR =7.82 km s −1 , blueshifted by ∼6 km s −1 with respect to the thermal NH 3 emission. Although Z CMa has been observed in many previous water maser studies (Blitz & Lada 1979;Thum et al. 1981;Deguchi et al. 1989;Scappini et al. 1991;Palla & Prusti 1993;Moscadelli et al. 2006;Sunada et al. 2007;Bae et al. 2011;Kim et al. 2018
As shown in Figure 2, we detected weak H 2 O maser emission (peak flux densities <0.2 Jy, Table 2) towards HH 354 IRS in two epochs. These are the first detections of water maser emission towards this source. On 2021 November 18, we detected a weak H 2 O maser at LSR =1.18 km s −1 . On 2022 January 25 we detected two features at LSR = −10.51 and LSR = 5.04 km s −1 but the 1.18 km s −1 feature had disappeared. This variability is Strom et al. 1976). An optical outburst was detected in the late 1980's (Mauron & Thouvenot 1991) and observations by Eisloeffel et al. (1991) confirmed it showed EXor properties. The variable name V512 Per was assigned in the 71 st Name-List of Variable Stars by Kazarovets et al. (1993), who noted SVS 13 and V512 Per were the same source. A radio counterpart of the optical/near-infrared source, named VLA 4, was first detected by Rodríguez et al. (1997) and later resolved into a binary (VLA 4A and 4B; Anglada et al. 2000). Rodríguez et al. (2002) note that SVS 13 (therefore V512 Per) and VLA 4 are the same source, consistent with other studies (see, e.g., Goodrich 1986;Fujiyoshi et al. 2015). The source is also commonly known as SVS 13A (see, e.g., Plunkett et al. 2013, and references therein) and is associated with several Herbig/Haro objects (HH 7-11; e.g., Rodríguez et al. 1997;Bachiller et al. 2000). In this paper we refer to the source as V512 Per, noting that this name might be more familiar to the variable star community (e.g., Kazarovets et al. 1993;Audard et al. 2014) while SVS 13, VLA 4, or SVS 13A may be more familiar to the radio astronomy community (e.g., Rodríguez et al. 2002;Plunkett et al. 2013). Figure 3 shows the spectra obtained towards V512 Per in 2021 November. On 2021 November 18, we detected at least 6 maser features towards V512 Per (see Figure 3), with the brightest one being 20.2 Jy. Here we note that only 5 of them are shown in Table 2 (Haschick et al. 1980). H 2 O(A) is associated with V512 Per. H 2 O(B), also known as HH 7-11(B), VLA 2, SVS 13C, or MMS3, is a Class 0 source located ∼0.5 , to the southwest (Cesaroni et al. 1988;Segura-Cox et al. 2018;Chen et al. 2013;Plunkett et al. 2013), while H 2 O(C) is ∼2.5 southeast of V512 Per (Haschick et al. 1980).
To investigate which of the observed velocity components may be associated with V512 Per, we carried out a nine-point grid of observations centred on V512 Per on 2022 February 5 (with pointings separated by 20 ). The results indicate that the strong water maser features at 5-10 km s −1 are brightest at an offset position (−20 ,−20 ) rather than toward V512 Per (0 ,0 ), suggesting that these maser features do not arise mainly from V512 Per. The ∼12 km s −1 component, in contrast, is strongest towards V512 Per and is likely associated with the eruptive source ( Figure 4, see also Figure 5).
A water maser flare in H 2 O(B)
In addition to the nine-point map described above (Sect. 3.2.1), we also performed OTF mapping towards V512 Per and H 2 O(B), shown in Figure 5. As illustrated by the channel maps in Figure 5, spectral features at LSR ≤11 km s −1 peak around H 2 O(B) while spectral features at LSR >11 km s −1 peak around V512 Per. Figure 4 compares our pointed observations toward V512 Per and H 2 O(B) on 2022 February 5: the spectra show very similar profiles between 4 km s −1 and ∼10 km s −1 but the intensities are different by a factor of ∼20. This similarity suggests that our pointed observations of V512 Per, including those shown in Figure 3, have significant contributions from H 2 O(B). We estimate this contribution for our 2022 February 5 observations assuming a perfect Gaussian beam pattern with a beam size of 40 . A source at an offset of 38.7 (the angular separation between V512 Per and H 2 O(B) derived from our observations, see Table 1) will fall at the 7.5% response level of the beam, or between the 3.7-14% levels assuming a typical pointing error of 5 . Thus H 2 O(B), with a flux density of 498.7 Jy, would contribute 18.4-69.8 Jy to the spectrum observed towards V512 Per, comparable to the observed value of 21.3 Jy (Table 2).
Notably, in our pointed 2022 February 5 observations, the peak flux density of the water maser in H 2 O(B) is 498.7 Jy at LSR = 6.1 km s −1 . This is the highest flux density reported for this source to-date (c.f. Haschick et al. 1980;Lyo et al. 2014), indicative of a maser flare (see also Sect. 4.1).
In our pointing towards RNO 1B/1C, we detected water maser emission in four epochs, as shown in Figure 6. During our first observations on 2021 November 18, we detected two maser features at LSR = −28.78 km s −1 and LSR = −15.79 km s −1 , and five days later the flux densities and LSR velocities of the two maser features were nearly unchanged. The source was observed again on 2022 January 25 and February 5: in these observations, the LSR ∼ −15.8 km s −1 feature had disappeared and the blueshifted maser was weaker and had slightly shifted in velocity, to LSR ∼ −28.48 km s −1 . The 3σ upper limits for the LSR ∼ −15.8 km s −1 feature are 0.12 Jy and 0.15 Jy for the observations on 2022 January 25 and February 5, respectively. We also note that the LSR ∼ −28 km s −1 feature has the largest velocity offset with respect to the cloud among our detections, ∼10 km s −1 (see Table 2). Based on comparing our results to the literature, the water maser features detected in our survey are most likely to originate from IRAS 00338+6312 rather than RNO 1B/1C. The velocities of our detected masers are similar to those of the masers associated with IRAS 00338+6312 in the VLA observations (Fiebig 1995;Fiebig et al. 1996), and also match the velocity range of the molecular outflow (about −30 km s −1 to −5 km s −1 Snell et al. 1990;Yang et al. 1991) driven by IRAS 00338+6312 (Henning et al. 1992;Wouterloot et al. 1993;Anglada et al. 1994;Furuya et al. 2003;Bae et al. 2011). We therefore do not count the water maser emission in our RNO 1B/1C pointing as a detection towards an eruptive star, and the 3σ upper limits are given in Table B.1.
Long-term time variation
Water maser flares have been recognized in star forming regions for decades (e.g., Boboltz et al. 1998;Kramer et al. 2018), with recent observations suggesting that water maser flares can accompany ejection events associated with accretion bursts in massive and intermediate-mass stars (e.g., MacLeod et al. 2018;Brogan et al. 2018;Chen et al. 2021;Bayandina et al. 2022). Hence, one might expect such water maser flares from FUors/EXors. We therefore investigate if our targets have experienced water maser flares. Figure 7 presents long-term time series for the water masers detected in our survey, which show that these masers are quite variable in both flux density and LSR velocity. Based on data from the literature, Z CMa appears to be in a relatively active phase, with the flux density of 2.4 Jy during our observations the highest observed to date (c.f. Blitz & Lada 1979;Thum et al. 1981;Deguchi et al. 1989;Scappini et al. 1991;Palla & Prusti 1993;Moscadelli et al. 2006;Sunada et al. 2007;Bae et al. 2011;Kim et al. 2018). For HH 354 IRS, no water maser emission was detected by previous observations (Wouterloot et al. 1993;Persi et al. 1994;Sunada et al. 2007). We report the first water maser detection toward this source. Since the upper limits of previous observations are comparable to the detected flux densities (see Figure 7), we cannot conclude whether the maser was in its active or quiescent phase during our observations. For V512 Per, Figure 7 compares the velocity component in our observations that likely arises from V512 Per (see Sect. 3.2.1) to archival data that include both single-dish and interferometric measurements (Haschick et al. 1980;Claussen et al. 1996;Rodríguez et al. 2002;Furuya et al. 2003).We note that in the case of the Claussen et al. (1996) data, the results were measured from the published figures. Based on this comparison, we identify three water maser flares, in 1978, 1992, and 1998, which reached peak flux densities of ∼310 Jy, 660 Jy, and 244 Jy on 1978 February 17, 1992 November 28, and 1998 June 22, respectively. The observations spanning these dates were performed with single-dish telescopes with large beams (>1 ), so H 2 O(B) could potentially contribute to the observed flux densities (see Sect. 3.2.1). Claussen et al. (1996) note, however, that the maser features detected in their 1991-92 observations all had velocities consistent with those of H 2 O(A)/V512 Per, suggesting that this flare was associated with the eruptive star.
For H 2 O(B), we find no suggestion in the literature of this source being an eruptive variable at optical or near-infrared wavelengths, but our comparison with previous water maser observations (Figure 7; Haschick et al. 1980;Lyo et al. 2014) shows three maser flares with peak flux densities of >100 Jy, on 1975 November 30, 2012 May 28, and 2022 February 5. As for V512 Per, Figure 7 compares the velocity components in our observations that likely arise from H 2 O(B) (Sect. 3.2.1&3.3.1) with historical data. Again, the large single dish beams encompass both H 2 O(B) and V512 Per, meaning that we cannot rule out a contribution from V512 Per to the historical flares. For instance, the observations of Lyo et al. (2014) had a HPBW of 120 . As noted in Sect. 3.3.1, the water maser flare detected in our observations on 2022 February 5 is the brightest to date, with a peak flux density of 498.7 Jy.
For IRAS 00338+6312, there is similarly no suggestion in the literature of this being an eruptive source in the optical or near-infrared, but Figure 7 suggests its water maser emission was in an active phase in 1998 and 2004 (Cesaroni et al. 1988;Henning et al. 1992;Wouterloot et al. 1993;Persi et al. 1994;Fiebig Fig. 5: Channel maps of H 2 O masers in H 2 O(B) (SVS 13C) and V512 Per (SVS 13A). The contours start at 0.5 Jy, and then increase by a factor of two. The plus signs represent the positions of the two H 2 O masers (orange and green) previously detected by Haschick et al. (1980) and YSOs (purple; e.g., Plunkett et al. 2013). Based on previous observations (Plunkett et al. 2013;Podio et al. 2021), the outflow directions are indicated by red and blue arrows. The beam size is shown in the lower right corner of the last panel. The colour bar represents the flux density in units of Jy. (Szabó et al. 2023(Szabó et al. ). 1995Codella et al. 1995;Furuya et al. 2003;Sunada et al. 2007;Bae et al. 2011), but relatively quiescent during our observations. The highest flux density reached was ∼31 Jy on 1998 January 5 (Furuya et al. 2003).
Periodic variations have been reported in some velocity components of the 22.2 GHz H 2 O (and the 6.7 GHz Class II CH 3 OH) masers associated with the intermediate-mass YSO G107.298+5.639, and cyclic accretion instabilities have been invoked to explain this peculiar behavior (Szymczak et al. 2016). Low-mass stars like FUors and EXors might also experience cyclic accretion events, but we do not find evidence for periodic variations in Figure 7.
Scarcity of water masers in selected eruptive systems
Our water maser detection rate of 6% in FUors and EXors is perhaps surprising in light of the close connection between water maser emission and mass accretion and ejection in protostars (see Sect. 1). In this section, we consider possible explanations for the low detection rate.
First, the low detection rate could be caused by an evolutionary effect. Previous observations indicate that the water maser detection rate decreases from Class 0 to Class II objects (e.g., Furuya et al. 2001 Class I and Class II objects (see Tables 1 and B.1), one would expect a lower detection rate compared to Class 0 objects. Furthermore, our detection rate is comparable to that (6.3%) for Class I objects in Furuya et al. (2001). We do not detect any water masers toward Class II objects, which further supports the evolutionary trend proposed by Furuya et al. (2001). Second, water masers have relatively low luminosities in low-mass star formation regions. Statistical studies have shown that the maser luminosities are correlated with bolometric luminosities (e.g., Figure 16 in Urquhart et al. 2011). This suggests lower maser luminosities in low-mass star formation regions, so lower flux densities would be expected. This could contribute to our low detection rate toward low-mass eruptive stars. This is supported by previous water maser surveys toward the Serpens South and Orion molecular clouds (Kang et al. 2013;Ortiz-León et al. 2021), which give detection rates of 2% for low-mass protostars.
Third, water masers show rapid time variations. The time variability of water masers is evident in our study (see also Figures 2, 4 and 7). Water masers can be in a quiescent phase for ∼5 years (Claussen et al. 1996), meaning that maser emission would not be detected during that time even for sources known to be associated with water masers. This is consistent with the fact that several water masers reported by previous studies are not detected in our observations (see Table B.1). It is possible that non-detection of water masers is due to their inactive state. Indeed, including historical detections, the detection rate of water masers in eruptive stars in our sample is ∼15% (excluding the unclassified Gaia alerts), which is higher than our survey detection rate of 6%, suggesting that previously detected water masers were in an inactive phase during our observations.
Conclusions
In this paper, we presented the results of the first dedicated water maser survey towards FUors and EXors, two classes of low-mass young eruptive stars. We detected H 2 O masers toward five objects, of which three are young eruptive stars: Z CMa (FUor; Class I), HH 354 IRS (FUor; Class 0/I), V512 Per (EXor; Class I), IRAS 00338+6312 (Class 0) and H 2 O(B) (Class 0). Our detection is the first report of water maser emission in HH 354 IRS. Our observations reveal the highest peak flux density yet reported towards H 2 O(B) (498.7 Jy), indicative of a recent H 2 O maser flare. Overall, our observations result in a detection rate of ∼6% for young eruptive stars. Analysis of the longterm time series of the water masers suggests that V512 Per and H 2 O(B) have experienced multiple water maser flares.
Despite the low detection rate, our observations have confirmed the presence of 22.2 GHz water maser emission in FUors and EXors, meaning that follow-up radio interferometric observations can be used to probe the environments of eruptive stars on small scales (see, e.g., Haschick et al. 1980;Rodríguez et al. 2002). If water masers are in general weak in FUors/EXors (Sect. 4.2), deeper observations would also be expected to find more of them. Expanding on optical and near-infrared knowledge of FUors/EXors with more radio observations, especially future VLBI measurements, will be crucial to better understand the underlying physics (e.g., mass accretion and ejection) of such peculiar objects, and eventually the formation of Sun-like stars. | 7,102.2 | 2023-05-01T00:00:00.000 | [
"Physics"
] |
Prospects for predatory mirid bugs as biocontrol agents of aphids in sweet peppers
In recent years, biological control strategies to control many major horticultural pests have been successfully implemented in the Eastern Mediterranean basin. However, the management of some pests, such as aphids in sweet pepper crops, can still be improved. The goal of this study was to examine the potential of the omnivorous predatory mirids Nesidiocoris tenuis, Macrolophus pygmaeus, and Dicyphus maroccanus as biocontrol agents of aphids in sweet pepper crops. First, the capacity to detect Myzus persicae-infested and un-infested plants was studied in a Y-tube olfactometer. Females of the three species of predatory mirids were strongly attracted to the odor of infested M. persicae plants. Second, the prey suitability of young and mature nymphs of M. persicae for these three mirid species was studied. The three species actively preyed on M. persicae, although D. maroccanus resulted the most voracious species preying both young and mature nymphs. Finally, the capacity of the three omnivorous predators to reduce M. persicae in heavily infested plants was determined in semi-field conditions. The three species of mirids could reproduce on aphids and establish on sweet pepper plants. Mirids significantly reduced the number of M. persicae per leaf, reaching levels of aphid reduction close to 100 % when compared to the untreated control. These results suggest that mirids might play a major role in aphid management in sweet peppers. The potential implementation methods of predatory mirids for the biological control in sweet peppers are discussed.
Introduction
During the last decade, biological control strategies for various major pests have been successfully implemented in greenhouse crops in southern Europe (Calvo et al. 2009(Calvo et al. , 2011(Calvo et al. , 2012avan Lenteren 2012). The two most recent and remarkable successes of biological control have occurred in tomato and sweet pepper production in southeastern Spain. These successes have occurred mainly due to the selection and implementation of generalist predators native to the Mediterranean Basin. For tomatoes, the inoculation of the predatory mirid bug Nesidiocoris tenuis (Reuter) (Hemiptera: Miridae) in the nursery proved very effective in controlling key tomato pests, such as the sweet potato whitefly Bemisia tabaci (Gennadius) (Hemiptera: Aleyrodidae) and the South American tomato pinworm Tuta absoluta (Meyrick) (Lepidoptera: Gelechiidae) (Calvo et al. 2012a, b;Urbaneja et al. 2012). For sweet peppers, the release of the predatory mite Amblyseius swirskii (Athias-Henriot) (Acari: Phytoseiidae) and the minute pirate bug Orius laevigatus (Fieber) (Hemiptera: Anthocoridae) provided effective control of the two key pests in this crop, the B. tabaci and the western flower thrips Frankliniella occidentalis (Pergande) (Thysanoptera: Thripidae) (Blom 2008;Calvo et al. 2009). Currently, all sweet pepper (approximately 9,300 ha) and tomato (approximately 8,000 ha) production operations in southeast Spain use these indigenous and polyphagous biocontrol agents as the main pest control strategy.
Generalist predators are widely recognized to significantly contribute to the biological control of populations of several agricultural pests throughout the world (Symondson et al. 2002). Zoophytophagous or plant feeding predators constitute a special case of generalist predators. Generalist zoophytophagous predators can use various food resources, such as alternative prey, and can feed on plant material, which further facilitates their establishment prior to pest infestation and their maintenance in the crop during periods of prey scarcity, resulting in crop systems that are more resilient to pest invasions (Ramakers and Rabasse 1995;Messelink et al. 2008;Castañé et al. 2011;Urbaneja et al. 2012). These special features result in one of the most important challenges currently in biological control, which is the search and selection of generalist zoophytophagous predators (Bueno et al. 2013).
The biological control of aphids in sweet pepper plants needs improvement, because this strategy has required multiple releases of natural enemies (Blom 2008). These releases are not always sufficiently effective (Belliure et al. 2008, Sanchez et al. 2011) and may considerably increase the final cost of the biocontrol program in this crop ). The most common species of aphids in sweet peppers are Myzus persicae (Sulzer), Aphis gossypii Glover, Macrosiphum euphorbiae Thomas, and Aulacorthum solani Kaltenbach (Hemiptera: Aphididae) (Belliure et al. 2008;Blom 2008;Sanchez et al. 2011). The first two species, which are smaller in size than the last two, are regularly controlled via the release of the parasitoid Aphidius colemani Viereck (Hymenoptera: Braconidae). In addition, the introduction of banker plants that were previously infested with the cereal aphid Rhopalosiphum padi (L) (Hemiptera: Aphidiidae) is a good method to increase the reservoir populations of A. colemani before detecting the presence of the pest preventing outbreaks of this pest (Huang et al. 2011). However, both strategies using this parasitoid (continuous augmentative releases and introduction of banker plants) can dramatically be disrupted by the action of hyperparasitoids, which are relatively abundant in southeastern Spain (Belliure et al. 2008;Sanchez et al. 2011). Therefore, the specialized aphid predatory midge Aphidoletes aphidimyza (Rondani) (Diptera: Cecidomyiidae) and other generalist predators, such as chrysopid, syrphid, or coccinellid predators, are also frequently released against these two aphid species, but these populations fail to establish when the aphids disappear (Pineda and Marcos-Garcia 2008;Messelink et al. 2011Messelink et al. , 2013. To date, the biological control of M. euphorbiae and A. solani has not been satisfactory in most cases using the parasitoids Aphidius ervi Haliday and Aphelinus abdominalis (Dalman) (Hymenoptera: Aphelinidae).
These problems with controlling aphids in sweet pepper plants emphasize the need to continue looking for alternative natural enemies that would satisfy the aforementioned needs. Messelink et al. (2011) stated that the biological control of aphids might be significantly enhanced by using zoophytophagous predators that can establish in a sweet pepper crop prior to aphid infestations. A group of endemic natural enemies that commonly inhabit sweet pepper crops and might meet these criteria that are the predatory mirid bugs (Hemiptera: Miridae). Indeed, previous studies suggest that the positive role of the predatory mirid Macrolophus pygmaeus Rambur (Hemiptera: Miridae) could have in controlling aphids in sweet pepper plants Perdikis and Lykouressis 2004). Two other predatory mirids, N. tenuis and Dicyphus maroccanus Wagner, together with M. pygmaeus, have been detected preying on aphids and other agricultural pests, such as spider mites, thrips, and lepidopterans in protected sweet pepper crops of southeastern Spain (Jacas et al. 2008;Gonzalez-Cabrera et al. 2011;Molla et al. 2011Molla et al. , 2014. Therefore, this study is aiming to further confirm the previous results obtained with M. pygmaeus on M. persicae (Perdikis and Lykouressis 2004;Messelink et al. 2011) and compare this with the other two mirid species, N. tenuis and D. maroccanus. We report here on the capacity of N. tenuis, M. pygmaeus, and D. maroccanus to detect M. persicae-infested plants and the prey suitability of M. persicae for these three species of mirids under laboratory conditions. Finally, the capacity of the three zoophytophagous predators to reduce M. persicae populations in heavily infested plants was determined under semi-field conditions. Implications of these results and whether these predatory mirid bugs could be considered biocontrol agents of aphids in sweet pepper crops are discussed.
Materials and methods
The environmental conditions in laboratory experiments were 25 ± 2°C, 60 ± 10 % RH with a photoperiod of 16:8 h (L:D).
Experimental insects
The colonies of M. persicae (green phenotype) on sweet pepper plants were initiated from a laboratory stock colony maintained on potted broad bean plants (Vicia faba L. Fabales: Fabaceae) and housed in a climatic chamber at 25 ± 2°C, 60-80 % RH, and a 16:8 h (L:D) photoperiod at IVIA (La-Spina et al. 2008). M. persicae-infested bean sprouts were then reared on pesticide-free sweet pepper seedlings (var. ''Lipari'', Clause Spain S.A.U. Almería, Spain). Rearing took place in screened cages (120 9 70 9 125 cm) in which groups of six sweet pepper plants (approximately 25 cm high) were weekly introduced.
Dicyphus maroccanus individuals were initially collected in tomato fields located in the Valencia province (Spain) and were then reared on pesticide-free tomato seedlings ''Optima'' (Seminis Vegetable Seeds, Inc., Almería, Spain) using frozen eggs of the factitious prey Ephestia kuehniella (Zeller) as food. N. tenuis and M. pygmaeus adults were obtained from a commercial supplier (NESIBUG Ò and MYRICAL Ò ; Koppert Biological Systems, S.L., Á guilas, Murcia, Spain). Each bottle contained approximately 500 specimens consisting of a mixture of mature nymphs and young adults (less than 3-day-old) fed with E. kuehniella eggs (FJ Calvo, Koppert BS; Personal Communication). Adults from these bottles were then kept for 24 h on sweet pepper plants and fed with E. kuehniella eggs before used.
Attraction to volatiles
To investigate the olfactory response of the omnivorous predators N. tenuis, M. pygmaeus, and D. maroccanus, two separate series of Y-tube experiments were conducted. Four 60-cm-long fluorescent tubes (OSRAM, L18 W/765, OSRAM GmbH, Germany) were positioned 40 cm above the arms. Light intensity over the Y-tube was measured with a ceptometer (LP-80 AccuPAR, Decagon Devices, Inc. Pullman, WA) and resulted 2,516 lux. The environmental conditions in the Y-tube experiments were 23 ± 2°C, 60 ± 10 % RH. In the first series of experiments, we compared the responses of mirids to the presence of sweet pepper plants. In a second series of Y-tube experiments, we tested whether M. persicae-infested sweet pepper plants were equally attractive to the mirids as uninfested ones.
The Y-tube olfactometer (Analytical Research Systems, Gainesville, FL) consisted of a Y-shaped glass tube of 2.4 cm in diameter with a base that was 13.5-cm long and two arms that were 5.75-cm long. The base of the Y-tube was connected to an air pump that produced a unidirectional airflow of 150 mL/min from the arms to the base of the tube. The arms were connected via a plastic tube to two identical glass jars (5 L of volume), each of which contained a tested odor source.
The first experiment was conducted to test the attractiveness of an un-infested potted sweet pepper plant (25-cm high) (var. ''Lipari'') odor compared with an empty jar to females of three species (aged from 3 to 5 days). The plants were healthy and growing from natural soil mixed with local peat moss in plastic pots (8 9 8 9 8 cm). Adult females of the three omnivorous predators were maintained individually in a plastic vial (10 mm diam. and 50 mm long) without food for 24 h prior to the olfactometer bioassays. Each vial was sealed with moistened cotton.
Mirid females were individually introduced into the base arm of the Y-tube. Each female was observed, until she had walked at least 3 cm up one of the side arms or until 15 min had elapsed (McGregor and Gillespie 2004). Females who had not made a choice within 15 min were considered to be ''non-responders'' and discarded in the subsequent data analysis. At least 24 responses were recorded for each pair of odor sources. After five individual females had been tested, the olfactometer arms were flipped around (180°) to minimize spatial effect on choices. After 10 females had been bioassayed, the olfactometer set-up was rinsed with soap water and acetone and then air-dried.
In the second experiment, the attraction of the three species of mirids to the volatiles emitted by M. persicaeinfested sweet pepper plants and un-infested plants was tested. Heavily infested sweet pepper plants (75-100 nymphs per leaf) from the stock colonies and height and foliar mass similar to that of control plants (un-infested) were used in Y-tube bioassays. At least 26 responses were recorded for each pair of odor sources. The same protocol explained in the first set of experiments was followed for this second experiment.
Prey suitability
For each species of mirid, approximately 100 adults less than 4-d-old were placed inside a 60 9 60 9 60 cm plastic cage (BugDorm-2; MegaView Science Co., Ltd.; Taichung, Taiwan) and starved of prey for 24 h before use. In each cage, four un-infested sweet pepper plants without flowers (25 cm high) were also introduced during the prey starvation period to allow mirids to adapt to sweet pepper plants. Cannibalism was not observed for any of the three species of mirids.
Myzus persicae-infested leaves were detached from infested plants of the stock colony. The first and fourth instar nymphs of M. persicae were selected under a stereoscopic binocular microscope with a small brush and placed on sweet pepper leaves inside Petri dishes (9 cm in diameter). The fourth nymphal instars of M. persicae are well defined, differing both in morphology and size (Horsfall 1924). The three predator species and both sexes were separately exposed to either 20 first instar nymphs or 20 fourth instar nymphs. Water was supplied on soaked cotton plugs. Twenty replicates were performed for each density and sex. After 24 h, the predators were removed from the arenas and the number of consumed nymphs was evaluated.
Since the sex ratio of these three species of mirids is quite different, to better compare and interpret the values of predation among mirid species, the consumption data were calculated for a theoretical population, which was estimated from the sex ratio values for each predator species. The sex ratios considered were 0.44, 0.41, and 0.85 (female/total) for N. tenuis, M. pygmaeus, and D. maroccanus, respectively (same authors, data in preparation). To do this, for each predator species, each value of consumption for a given sex was multiplied by the sex ratio of the opposite sex. This estimation provided the number of prey consumed by a single individual irrespective of gender of a population with the sex ratio mentioned above.
To determine the efficacy of N. tenuis, M. pygmaeus, and D. maroccanus in reducing M. persicae densities in sweet pepper plants, 16 plastic cages 60 9 60 9 60 cm (BugDorm-2; MegaView Science Co., Ltd.; Taichung, Taiwan) were used. In each cage (replicate), six heavily infested sweet pepper plants (25-cm high) and one couple of each species of mirid per plant were introduced the same day (August 20th). The control treatment did not receive any mirid release. Healthy sweet pepper plants were placed inside the stock colony of M. persicae during 2 weeks. A homogenous and heavy aphid infestation (around 70 nymphs per leaf) was obtained. Plants were randomly distributed in the 4 treatments, and ANOVA test confirmed that there were no statistical differences among treatments (F 3,14 = 0.8241, P = 0.5075). In total, four replicates were included per treatment, with six plants per replicate.
During the following 6 weeks after the release, four randomly selected infested leaves per cage were removed weekly and introduced to a 150-mL plastic cup containing 70 % alcohol. Since flowers can serve as alternative food for predatory mirids , all flower buds that were appearing in the course of the experiment were removed manually to avoid any possible interference. The collected material from each replicate and treatment was transported to the laboratory, where aphids and mirids were filtered through a sieve of 32 9 32 threads/cm 2 and then counted under a stereoscopic binocular microscope. These data were used to calculate the efficacy of the mirids tested using the Henderson-Tilton formula (Henderson and Tilton 1955).
Statistical analysis
v 2 goodness of fit tests were used to test the hypothesis that the distribution of side-arm choices between pairs of odors affected the olfactory responses in the experiments. In the prey suitability experiment, the data on the number of consumed M. persicae nymphs were subjected to a oneway analysis of variance to evaluate the effect predator species. The number of M. persicae and mirids per leaf in sweet pepper plants and the percentage of efficacy of the three species of mirids in reducing M. persicae populations were analyzed using a generalized linear mixed model with repeated measurements. Treatment was considered as a fixed factor, and time was considered random. When significant differences were found, pairwise comparisons of the fixed factor levels were performed with the least significant difference post hoc test (P \ 0.05).
Attraction to volatiles
Females of the three species of predatory mirids were more attracted to the odor of un-infested plants compared to the plant-free control (N. tenuis: v 2 = 6.667; P = 0.010; M.
When nymphal consumption was extrapolated to theoretical population level, estimated from the sex ratio of each species (Fig. 3) Independently of the species tested, releases of one couple of mirids per plant proved to be sufficient for their establishment in the experimental cages (Fig. 4B). The numbers of mirids increased regularly from their release until 4 weeks later. However, mirid population decreased in parallel to the decrease of the number of aphids present on sweet pepper plant after this time. The number of mirids per leaf was almost the same for all three treatments (F 2, 63 = 1.856; P = 0.165).
The percentage of reduction of the number of aphids per plant reached numbers close to 100 % for the three species of mirids tested, with no significant differences among them (F 2,52 = 0.478; P = 0.623) (Fig. 4).
Discussion
The three predatory mirid bugs tested in this study are native natural enemies that spontaneously appear in various crops in the Mediterranean basin (Alomar et al. 2002;Gonzalez-Cabrera et al. 2011). Two of these zoophytophagous predators, N. tenuis and M. pygmaeus, are massreared and have been released in augmentative biocontrol programs aimed at controlling whiteflies and T. absoluta, mainly in protected crops (Arnó et al. 2010;Calvo et al. 2012a, b;Urbaneja et al. 2012). However, D. maroccanus was recently first detected actively preying on T. absoluta in the Iberian Peninsula in tomato crops . Even though D. maroccanus has been frequently observed preying on aphids and mites in tomato and egg plant crops under low pesticide-input management (Same authors, unpublished results), studies of its prey range or its possibilities to be integrated in biological control programs are lacking. In any case, little is known about the efficacy and practical use of mirids in sweet pepper agro-ecosystems. A limited number of studies have increased expectations that mirids can play a significant role in the regulation of some horticultural pest species (Perdikis and Lykouressis 2004;Messelink et al. 2011;Bueno et al. 2013) following the success of N. tenuis in the Mediterranean area (Calvo et al. 2012b).
The three species of predatory mirids showed a stronger response to odors from infested plants than to odors from un-infested plants. To our knowledge, this is the first olfactory study of N. tenuis and D. maroccanus, but not for M. pygmaeus. Our results for M. pygmaeus are consistent with previous results (Moayeri et al. 2006;Ingegno et al. 2011). In our work, mirids were in contact with a pepper plant during 24 h before starvation; hence, learning behavior related to plant odor could be included. It would be interesting to clarify whether these three species of predatory mirid bugs are able to associate prey odors as honeydew with the presence of prey. In this regard, Moayeri et al. (2007) showed that M. pygmaeus do not seem to exploit odors emitted directly from the prey themselves, which suggests that mirids respond to herbivore-induced volatiles rather than to the prey itself.
We found that the three species of mirids successfully preyed on M. persicae nymphs with significant differences among them, which depended on the predator gender or the instar preyed. To our knowledge, this is the first study reporting the capacity of D. maroccanus for prey on M. persicae. However, this study was not the first of its kind for N. tenuis and M. pygmaeus. Previous studies have demonstrated that both M. pygmaeus and N. tenuis feed on M. persicae under laboratory conditions Lykouressis 2002a, b, 2004;Valderrama et al. 2007;Fantinou et al. 2008Fantinou et al. , 2009Vandekerkhove and De Clercq 2010). The three species tested in this study showed a higher predation rate and preference for smaller prey instars, which was also observed by Valderrama et al. (2007) for N. tenuis and by Fantinou et al. (2009) for M. pygmaeus. The number of nymphs consumed in our assays by both predators revealed that mirid females could prey on approximately 12 and 6 first and fourth nymphal instars, respectively, over 24 h. These numbers are in agreement with the data for M. pygmaeus and for N. tenuis obtained in previous works when offered the same amount of prey densities (20 nymphs). However, both predators may consume a higher number of aphid nymphs, because their predation rate depends on the amount of prey offered (Valderrama et al. 2007;Fantinou et al. 2009Fantinou et al. , 2008. M. pygmaeus, which shows a Type II functional response, maximizes its prey consumption in 24 h at approximately 17 and 11 first and fourth nymphal instars, respectively, densities at which the predator is satiated (Fantinou et al. 2008(Fantinou et al. , 2009. Surprisingly, N. tenuis, which shows a Type III functional response, can prey on almost 70 third-fourth instar nymphs when a prey density of 90 nymphs is offered (Valderrama et al. 2007). Based on our personal observations, we hypothesize that N. tenuis would not be able to completely consume 70 nymphs of M. persicae. Therefore, an over-killing behavior was likely given, but Valderrama et al. (2007) did not report it in their paper. Supporting this hypothesis, Fantinou et al. (2008) demonstrated that M. pygmaeus exhibits an over-killing behavior that might include partial prey consumption and/or killing without consumption. Unfortunately, we could not detect this behavior in our work.
Further studies would be needed to determine the predatory behavior and performance of these mirids on the other three species of aphids that may appear in sweet pepper crops, A. gossypii, A. solani, and M. euphorbiae. To date, M. pygmaues is only known to actively prey on A. gossypii and M. euphorbiae (Lykouressis et al. 2007;Alvarado et al. 1997;Perdikis and Lykouressis 2000). Lykouressis et al. (2007) observed that the predation rate of M. pygmaeus was higher on M. persicae than on M. euphorbiae, but this difference could be related to a major amount of biomass obtained from M. euphorbiae. Therefore, determining the suitability of these three other aphid species as prey as well as efficacy studies that compare the control capacity of these mirid species would be interesting. In our experiment, the three species of mirids could at minimum reduce the densities of M. persicae on sweet pepper plants in the semi-field assay. Indeed, plants that received mirid releases appeared healthy at the end of the experiment, and some mirids still remained active on these plants. Messelink et al. (2011) evaluated the efficacy of inoculative releases of Orius majusculus (Reuter) (Hemiptera: Anthocoridae), O. laevigatus, and M pygmaeus on M. persicae in a sweet pepper greenhouse. They found that compared to the two Orius species, M. pygmaeus was by far the best predator for controlling M. persicae. Thus, they suggested that the use of mirids instead or in addition to O. laevigatus may be preferable in sweet pepper crops, although this approach might need additional releases of effective predatory thrips, such as A. swirskii (Messelink et al. 2008).
The performance of the three species of mirids reducing M. persicae resulted similar in our experiments. Other biological traits that are important for selection and in which these species probably differ will help us to select any of these mirid species, e.g., (1) compatibility with other natural enemies, (2) developmental rate on sweet pepper plants without prey, (3) developmental rate on alternative food and mixtures of prey, and (4) climatic preferences. In this regard, unpublished results lead us to think that D. maroccanus has its optimal temperature range below than M. pygmaeus and N. tenuis, which may indicate that this new biocontrol agent could be considered for use in crops with cooler temperatures.
Another further criterion of selection will be the efficacy of these predators when released prior to aphid outbreaks. In our experiment, mirids were released when aphid population densities were relatively high and even so aphid control was satisfactory but logically slow. It would, therefore, be very interesting to establish first the mirids in the crop (before the appearance of aphids) and determine their effectiveness on both, exploiting alternative food resources (e.g., pollen on flowers or other alternative prey) and the aphids that subsequently appear in the crop.
Overall, the data presented here reveal that mirids are highly efficient predators of M. persicae; they can detect M. persicae-infested sweet pepper plants and reduce heavy M. persicae infestations. These findings indicate that mirids may play an important role in the control of the green peach aphids in sweet peppers. The incorporation of any of these three species of mirids in inoculative biological control strategies in sweet pepper crops will be a challenge for future studies. | 5,709.2 | 2014-04-12T00:00:00.000 | [
"Agricultural And Food Sciences",
"Biology"
] |
Corrosion and Discharge Behaviors of Mg-Al-Zn and Mg-Al-Zn-In Alloys as Anode Materials
The Mg-6%Al-3%Zn and Mg-6%Al-3%Zn-(1%, 1.5%, 2%)In alloys were prepared by melting and casting. Their microstructures were investigated via metallographic and energy-dispersive X-ray spectroscopy (EDS) analysis. Moreover, hydrogen evolution and electrochemical tests were carried out in 3.5 wt% NaCl solution aiming at identifying their corrosion mechanisms and discharge behaviors. The results suggested that indium exerts an improvement on both the corrosion rate and the discharge activity of Mg-Al-Zn alloy via the effects of grain refining, β-Mg17Al12 precipitation, dissolving-reprecipitation, and self-peeling. The Mg-6%Al-3%Zn-1.5%In alloy with the highest corrosion rate at free corrosion potential did not perform desirable discharge activity indicating that the barrier effect caused by the β-Mg17Al12 phase would have been enhanced under the conditions of anodic polarization. The Mg-6%Al-3%Zn-1.0%In alloy with a relative low corrosion rate and a high discharge activity is a promising anode material for both cathodic protection and chemical power source applications.
Introduction
Magnesium and its alloys are promising candidates for use in aerospace, vehicle, and electric products due to their high ratio of strength to weight, low density, and good castability [1][2][3].Considerable investigations have been conducted to clarify the corrosion mechanism and to achieve desirable corrosion resistance by designing and developing alloys of high corrosion resistance, inhibitors, and coatings [4][5][6][7][8][9][10][11][12].In addition, the desirable electrochemical properties of magnesium, including highly negative standard potential (´2.34 V vs. Standard Hydrogen Electrode (SHE)), high theoretical specific charge capacity (2.2 A¨h/g), high theoretical energy density (3.8 A¨h/cm 3 ) [13] make it an ideal anode material for cathodic protection and power sources [14][15][16][17].Some other particular features, such as low toxicity and the allowance for urban waste disposal in comparison to lithium, make magnesium an attractive candidate as a high-energy storage electrode in the battery field [18].
However, there are several inherent defects of magnesium as anode material.On the one hand, the magnesium surface is often covered with discharge product in the electrolyte, which hinders the further discharge process via the accumulation effect [19].On the other hand, the high Faradic capacity of magnesium could not be thoroughly utilized for discharge due to the self-discharge activity accompanied by detachment of α-Mg grains during the discharge process [20].The impurities and precipitates with relative positive potentials acting as cathodes also promote the dissolution of adjacent α-Mg grains, which result in a decrease of current efficiency [21].Some approaches for promoting the discharge activity have been developed including plastic deformation, heat treatment, and alloying [22,23].Alloying via adding other elements (such as Al, Zn, Mn, In, Ga, Hg, Ce, Y, etc.) into the Mg substrate is a promising way to promote self-peeling and minimize self-discharge [24][25][26][27][28].
AZ63 alloy is one of the most popular magnesium alloys in structural and power source applications due to its desirable mechanical and electrochemical characteristics [29,30].As anode material, AZ63 alloy is mainly used in cathodic protection and long-term, low-power chemical power sources [22].Indium is a popular alloying element for activation of anode materials.As reported, the 1-3 wt% constituent of indium in aluminum matrix activates the alloy significantly in natural seawater [31].Wang [32] and Jin [2] suggested that the indium in AP65 magnesium alloy promotes electrochemical properties through decreasing the area ratio of cathode to anode.However, few works have been conducted on the influence of indium on the electrochemical behavior of AZ series magnesium alloys.In this work, the corrosion and discharge behavior of AZ63 alloys with different indium concentrations (0%, 1%, 1.5%, and 2%) were investigated to clarify the mechanism of indium on activation of AZ magnesium alloys and to find a proper anodic material with negative discharge potential, short incubation time, low self-discharge, and high current efficiency in different conditions.
Materials
The alloys used in this work were prepared by adding Mg, Al, Zn, Mg-30 wt% Mn, and In ingots into a graphite crucible in a resistance furnace at 760 ˝C.The melt covered with sulfur powder was stirred for about 5 min, held for 10 min to guarantee homogenization, and then cast into a steel model preheated to 200 ˝C with a dimension of Φ 300 mm ˆ50 mm.The actual chemical compositions of the alloys were analyzed by inductively coupled plasma mass spectrometry (ICP-MS, Agilent, California, CA, USA) and the corresponding contents are listed in Table 1.For simplicity, the indium containing alloys are denoted as AZI in this work, which is convenient to compare with the AZ63 alloy in the discussion.For example, AZ63 + 2 wt% In alloy can be abbreviated as AZI2.0.
Microstructure Characterization
For metallographic observation, the cast alloys were cut into cube-like coupons (10 ˆ10 ˆ10 mm), ground from 400 grit to 3000 grit successively, polished by diamond grinding paste, cleaned with ethanol, wiped with 2.5 wt% nital alcohol solution and blow-dried with cool air.
The surface morphology of the alloys and corroded samples were examined using a JSF-6700F scanning electron microscope (SEM) (JSM, Tokyo, Japan) equipped with energy dispersive X-ray spectroscopy (EDS) (Oxford instrument, Oxford, UK).The phases of substrate and corrosion products of the alloys were analyzed by X-ray diffraction (XRD; D/Max 2550, Rigaku, Tokyo, Japan) using Cu Kα radiation.
Hydrogen Collection
This is a common method to evaluate the corrosion rate via the volume of hydrogen evolved due to hydrogen evolution being the dominant reaction of the cathode [16,33,34].The samples (50 ˆ10 ˆ3 mm) located within an upturned filter funnel, which channels the evolving hydrogen into an upturned burette, were ground to 1200 grit and sealed by electrical tape leaving 5 cm 2 exposed in 2000 mL 3.5 wt% NaCl over 24 h.
Electrochemical Measurements
Samples for electrochemical tests were encapsulated in epoxy resin leaving an exposed surface of 10 ˆ10 mm, ground successively to 1200 grit with SiC paper, degreased by alcohol, washed with distilled water, and cooled by cold air.The prepared samples were stored in a desiccator prior to the electrochemical tests to ensure a uniform surface condition.
The electrochemical tests were conducted by an electrochemical workstation (GAMRY Reference 3000, Pennsylvania, PA, USA) in a three-electrode configuration (a platinum foil as counter electrode, a saturated calomel electrode as reference electrode, and a working electrode) containing 450 mL 3.5 wt% NaCl.The potentiodynamic polarization was carried out towards the noble value at a scan rate of 0.333 mV/s after a steady state of corrosion potential had been established.The electrochemical impedance spectroscopy (EIS) tests at free corrosion potential were conducted after 1 h monitoring of open circuit potentials (OCP) in solution.The AC amplitude of the perturbing signal was 5 mV with a frequency range from 100 kHz to 10 mHz.
Three samples in the as-cast state were adopted for hydrogen evolution and electrochemical tests at 25 ˘1 ˝C in this work.All the potential values mentioned were versus a saturated calomel electrode (SCE).Measurements mentioned above were performed at least three times to ensure reproducibility.
Microstructure
Figure 1 shows the XRD patterns of AZ63 and AZI alloys.The patterns indicate that AZ63 and AZI alloys mainly consist of α-Mg and β-Mg 17 Al 12 phases.There is no zinc-containing peak detectable owing to its high solid solubility in Mg substrate.Indium related peaks not being detected is probably due to its low concentration in the substrate, which is consistent with the result reported in Wang's work [2,32].
Hydrogen Collection
This is a common method to evaluate the corrosion rate via the volume of hydrogen evolved due to hydrogen evolution being the dominant reaction of the cathode [16,33,34].The samples (50 10 3 mm) located within an upturned filter funnel, which channels the evolving hydrogen into an upturned burette, were ground to 1200 grit and sealed by electrical tape leaving 5 cm 2 exposed in 2000 mL 3.5 wt% NaCl over 24 h.
Electrochemical Measurements
Samples for electrochemical tests were encapsulated in epoxy resin leaving an exposed surface of 10 10 mm, ground successively to 1200 grit with SiC paper, degreased by alcohol, washed with distilled water, and cooled by cold air.The prepared samples were stored in a desiccator prior to the electrochemical tests to ensure a uniform surface condition.
The electrochemical tests were conducted by an electrochemical workstation (GAMRY Reference 3000, Pennsylvania, PA, USA) in a three-electrode configuration (a platinum foil as counter electrode, a saturated calomel electrode as reference electrode, and a working electrode) containing 450 mL 3.5 wt% NaCl.The potentiodynamic polarization was carried out towards the noble value at a scan rate of 0.333 mV/s after a steady state of corrosion potential had been established.The electrochemical impedance spectroscopy (EIS) tests at free corrosion potential were conducted after 1 h monitoring of open circuit potentials (OCP) in solution.The AC amplitude of the perturbing signal was 5 mV with a frequency range from 100 kHz to 10 mHz.
Three samples in the as-cast state were adopted for hydrogen evolution and electrochemical tests at 25 ± 1 °C in this work.All the potential values mentioned were versus a saturated calomel electrode (SCE).Measurements mentioned above were performed at least three times to ensure reproducibility.
Microstructure
Figure 1 shows the XRD patterns of AZ63 and AZI alloys.The patterns indicate that AZ63 and AZI alloys mainly consist of α-Mg and β-Mg17Al12 phases.There is no zinc-containing peak detectable owing to its high solid solubility in Mg substrate.Indium related peaks not being detected is probably due to its low concentration in the substrate, which is consistent with the result reported in Wang's work [2,32].Figure 2 illustrates the microstructure of AZ63 and AZI alloys, which exhibit similar morphologies consisting of α-Mg surrounded with eutectic α and discontinuous β-Mg 17 Al 12 phases along the grain boundaries.It is apparent that the α-Mg is refined and the amount of precipitate increases with the increase of indium content in the Mg substrate, which is probably due to the relative low solubility of indium in the aluminum enriched phase.This is in good agreement with Becerra's previous work of adding indium into pure magnesium [35].The increase of indium refines grains by promoting the precipitation of the β-Mg 17 Al 12 phase in the α-Mg grains, resulting in a continuous distribution of precipitates in a net structure form.Figure 2 illustrates the microstructure of AZ63 and AZI alloys, which exhibit similar morphologies consisting of α-Mg surrounded with eutectic α and discontinuous β-Mg17Al12 phases along the grain boundaries.It is apparent that the α-Mg is refined and the amount of precipitate increases with the increase of indium content in the Mg substrate, which is probably due to the relative low solubility of indium in the aluminum enriched phase.This is in good agreement with Becerra's previous work of adding indium into pure magnesium [35].The increase of indium refines grains by promoting the precipitation of the β-Mg17Al12 phase in the α-Mg grains, resulting in a continuous distribution of precipitates in a net structure form.An SEM image of AZI1.0 alloy and the EDS results of different locations are presented in Figure 3.The microstructure of AZI1.0 alloy consists of α-Mg, precipitates in α-Mg and on grain boundaries.The black dot (location B) locates in α-Mg and the eutectic α-Mg phase (location D) contain a higher content of indium and lower content of aluminum and zinc on comparing with the results of β-Mg17Al12 (location C) and white dot (location A).The substrate (location E) has a lower concentration of aluminum, zinc, and indium than the nominal composition of Al 6 wt%, Zn 3 wt% and In 1.0 wt% owing to the segregation effect.Therefore, it can be deduced that the indium favors segregation from the Mg substrate and separate precipitation in α-Mg or in the eutectic α phase.The poor solubility of indium in aluminum and zinc promotes the grain refinement via the segregation process, which provides an interpretation of the phenomenon shown in Figure 2. Figure 2 illustrates the microstructure of AZ63 and AZI alloys, which exhibit similar morphologies consisting of α-Mg surrounded with eutectic α and discontinuous β-Mg17Al12 phases along the grain boundaries.It is apparent that the α-Mg is refined and the amount of precipitate increases with the increase of indium content in the Mg substrate, which is probably due to the relative low solubility of indium in the aluminum enriched phase.This is in good agreement with Becerra's previous work of adding indium into pure magnesium [35].The increase of indium refines grains by promoting the precipitation of the β-Mg17Al12 phase in the α-Mg grains, resulting in a continuous distribution of precipitates in a net structure form.
Hydrogen Collection
Hydrogen collection is more convenient than weight-loss measurement in studying the corrosion rate dependence on time due to hydrogen evolution rather than reduction of oxygen being the dominant cathodic reaction [36].The variation of hydrogen volume over a given period can be employed in calculating the average corrosion rate in situ [37].Figure 4 illustrates the hydrogen evolution volume and rate with immersion time over a 24 h period.The corrosion rate of the AZ63 and AZI alloys can be ranked as AZ63 < AZI2.0 < AZI1.0 < AZI1.5.The indium addition promotes the corrosion rate of the alloys; this being probably due to the increasing amount of precipitates both in α-Mg and on the grain boundaries (see Figure 2), which act as the cathode in forming a micro-cell with α-Mg during the corrosion.The presence of β-Mg 17 Al 12 phase in Mg-Al alloys has two influences on their corrosion behavior, a galvanic cathode for promoting the corrosion and a barrier for inhibiting the corrosion [38].The corrosion rate of AZI2.0 alloy is not consistent with the increasing tendency of the corrosion rate mentioned above and could be attributed to the barrier effect aroused by the continuous net structure of the β-Mg 17 Al 12 phase on the grain boundaries as shown in Figure 2d, which decreases the corrosion rate by the barrier effect.This result is also consistent with Liu's work [39], which suggested that the barrier effect would perform effectively once the β phase was distributed continuously around the α-Mg substrate.
Hydrogen Collection
Hydrogen collection is more convenient than weight-loss measurement in studying the corrosion rate dependence on time due to hydrogen evolution rather than reduction of oxygen being the dominant cathodic reaction [36].The variation of hydrogen volume over a given period can be employed in calculating the average corrosion rate in situ [37].Figure 4 illustrates the hydrogen evolution volume and rate with immersion time over a 24 h period.The corrosion rate of the AZ63 and AZI alloys can be ranked as AZ63 < AZI2.0 < AZI1.0 < AZI1.5.The indium addition promotes the corrosion rate of the alloys; this being probably due to the increasing amount of precipitates both in α-Mg and on the grain boundaries (see Figure 2), which act as the cathode in forming a micro-cell with α-Mg during the corrosion.The presence of β-Mg17Al12 phase in Mg-Al alloys has two influences on their corrosion behavior, a galvanic cathode for promoting the corrosion and a barrier for inhibiting the corrosion [38].The corrosion rate of AZI2.0 alloy is not consistent with the increasing tendency of the corrosion rate mentioned above and could be attributed to the barrier effect aroused by the continuous net structure of the β-Mg17Al12 phase on the grain boundaries as shown in Figure 2d, which decreases the corrosion rate by the barrier effect.This result is also consistent with Liu's work [39], which suggested that the barrier effect would perform effectively once the β phase was distributed continuously around the α-Mg substrate.
Open Circuit Potential
Open circuit potential (OCP) variation can provide some information about the initiation and propogation of corrosion [40].Relatively stable OCP values imply an electrochemical steady state on the electrode surface.The OCP curves of AZ63 and AZI alloys are shown in Figure 5.
Open Circuit Potential
Open circuit potential (OCP) variation can provide some information about the initiation and propogation of corrosion [40].Relatively stable OCP values imply an electrochemical steady state on the electrode surface.The OCP curves of AZ63 and AZI alloys are shown in Figure 5.
Hydrogen Collection
Hydrogen collection is more convenient than weight-loss measurement in studying the corrosion rate dependence on time due to hydrogen evolution rather than reduction of oxygen being the dominant cathodic reaction [36].The variation of hydrogen volume over a given period can be employed in calculating the average corrosion rate in situ [37].Figure 4 illustrates the hydrogen evolution volume and rate with immersion time over a 24 h period.The corrosion rate of the AZ63 and AZI alloys can be ranked as AZ63 < AZI2.0 < AZI1.0 < AZI1.5.The indium addition promotes the corrosion rate of the alloys; this being probably due to the increasing amount of precipitates both in α-Mg and on the grain boundaries (see Figure 2), which act as the cathode in forming a micro-cell with α-Mg during the corrosion.The presence of β-Mg17Al12 phase in Mg-Al alloys has two influences on their corrosion behavior, a galvanic cathode for promoting the corrosion and a barrier for inhibiting the corrosion [38].The corrosion rate of AZI2.0 alloy is not consistent with the increasing tendency of the corrosion rate mentioned above and could be attributed to the barrier effect aroused by the continuous net structure of the β-Mg17Al12 phase on the grain boundaries as shown in Figure 2d, which decreases the corrosion rate by the barrier effect.This result is also consistent with Liu's work [39], which suggested that the barrier effect would perform effectively once the β phase was distributed continuously around the α-Mg substrate.
Open Circuit Potential
Open circuit potential (OCP) variation can provide some information about the initiation and propogation of corrosion [40].Relatively stable OCP values imply an electrochemical steady state on the electrode surface.The OCP curves of AZ63 and AZI alloys are shown in Figure 5.As shown in Figure 5, the OCP values can be ranked as AZI1.5 < AZI 2.0 <AZI1.0< AZ63.The indium addition into AZ63 alloy shifts the OCP towards the negative direction apparently, indicating an enhanced driving force of corrosion via alloying.The incubation period, representing the interval from the beginning of immersion to the OCP values obtaining steady values, characterizes the activity of the alloy samples.The OCP values of AZI alloys get steady values over a short period (less than 1000 s) as shown in Figure 5, implying a short activation characteristic aroused by indium and a promoted active dissolution of the alloys.The OCP curve of AZ63 alloy exhibits fluctuation characteristics throughout the test, which could be attributable to the dynamic unsteadiness between the advance of the corrosion and the deposit of corrosion products, implying an unestablished steady state due to its poor activity in comparison with AZI alloys.
The activating mechanism of indium serving as an alloying element for the sacrificial anode has been widely investigated.The dissolving-reprecipitation mechanism is widely accepted in interpreting the activation effect of indium during the corrosion process [41].It can be expressed by Equations ( 1) and ( 2): MpInq Ñ M n``I n 3``p n `3qe ´(1) The indium exists in solid solution and in segregation states in the substrate, whereas the indium in the segregation state does not participate in the activation process.As shown in Equations ( 1) and ( 2), the indium atom in the solid solution state converts to an ion accompanied by the loss of three electrons.The dissolved indium ion precipitates back to the surface of the electrode and then reacts with the magnesium atom.The dissolving-reprecipitation process would promote the dissolution of magnesium due to this providing an extra reaction compared with the alloy without indium.
Potentiodynamic Polarization
The polarization curves presented in Figure 6a indicate similar polarization behavior among the investigated alloys in 3.5 wt% NaCl at 25 ˘1 ˝C.The anodic currents increase exponentially with anodic polarization indicating high activity of the investigated alloys.Thus, the cathodic reaction of hydrogen evolution dominates the electrochemical reaction at the larger slopes of the cathodic polarization branches, compared with those of the anodic branches as shown in Figure 6a.The Tafel extrapolation could be carried out as illustrated in Figure 6b, and is due to the absence of Tafel region in the anodic branch of the polarization curve; the anodic reaction may be complicated due to the effect of negative difference effect (NDE) [42].The extrapolated parameters are listed in Table 2.
indium addition into AZ63 alloy shifts the OCP towards the negative direction apparently, indicating an enhanced driving force of corrosion via alloying.The incubation period, representing the interval from the beginning of immersion to the OCP values obtaining steady values, characterizes the activity of the alloy samples.The OCP values of AZI alloys get steady values over a short period (less than 1000 s) as shown in Figure 5, implying a short activation characteristic aroused by indium and a promoted active dissolution of the alloys.The OCP curve of AZ63 alloy exhibits fluctuation characteristics throughout the test, which could be attributable to the dynamic unsteadiness between the advance of the corrosion and the deposit of corrosion products, implying an unestablished steady state due to its poor activity in comparison with AZI alloys.
The activating mechanism of indium serving as an alloying element for the sacrificial anode has been widely investigated.The dissolving-reprecipitation mechanism is widely accepted in interpreting the activation effect of indium during the corrosion process [41].It can be expressed by Equations ( 1) and ( 2): The indium exists in solid solution and in segregation states in the substrate, whereas the indium in the segregation state does not participate in the activation process.As shown in Equations ( 1) and ( 2), the indium atom in the solid solution state converts to an ion accompanied by the loss of three electrons.The dissolved indium ion precipitates back to the surface of the electrode and then reacts with the magnesium atom.The dissolving-reprecipitation process would promote the dissolution of magnesium due to this providing an extra reaction compared with the alloy without indium.
Potentiodynamic Polarization
The polarization curves presented in Figure 6a indicate similar polarization behavior among the investigated alloys in 3.5 wt% NaCl at 25 ± 1 °C .The anodic currents increase exponentially with anodic polarization indicating high activity of the investigated alloys.Thus, the cathodic reaction of hydrogen evolution dominates the electrochemical reaction at the larger slopes of the cathodic polarization branches, compared with those of the anodic branches as shown in Figure 6a.The Tafel extrapolation could be carried out as illustrated in Figure 6b, and is due to the absence of Tafel region in the anodic branch of the polarization curve; the anodic reaction may be complicated due to the effect of negative difference effect (NDE) [42].The extrapolated parameters are listed in Table 2.It is obvious that indium affects the cathodic process (hydrogen evolution) slightly but favors anodic dissolution (oxidation of magnesium), seen by analyzing the slope values of b c and b a in Table 2 [32].The free corrosion potential shifts 40-80 mV negatively on comparing AZ63 with AZI alloys, implying an increasing corrosion tendency, which is consistent with the OCP results mentioned above.The extrapolated corrosion current densities of the alloys, which can be ranked as AZI 1.5 > AZI2.0 > AZI1.0 > AZ63, are shown in Table 2.The corrosion rate of AZI alloys increases with increasing indium (less than 1.5 wt%) in the substrate, this result is also consistent with the hydrogen evolution test.
Electrochemical Impedance Spectroscopy (EIS)
The EIS spectra of AZ63 and AZI alloys obtained at open circuit potentials after 3600 s immersion are shown in Figure 7.The Nyquist diagrams of the investigated alloys are comprised of one capacitive loop at high frequency and two inductive loops at intermediate and low frequencies.The capacitive loop is related to the property of the electric double layer at the electrode/electrolyte interface [43].Two inductive loops in the fourth quadrant are due to the chemical reaction of Mg + with H 2 O on the breaking area of the corrosion products film and the desorption of corrosion products respectively [44,45].It is obvious that indium affects the cathodic process (hydrogen evolution) slightly but favors anodic dissolution (oxidation of magnesium), seen by analyzing the slope values of bc and ba in Table 2 [32].The free corrosion potential shifts 40-80 mV negatively on comparing AZ63 with AZI alloys, implying an increasing corrosion tendency, which is consistent with the OCP results mentioned above.The extrapolated corrosion current densities of the alloys, which can be ranked as AZI 1.5 > AZI2.0 > AZI1.0 > AZ63, are shown in Table 2.The corrosion rate of AZI alloys increases with increasing indium (less than 1.5 wt%) in the substrate, this result is also consistent with the hydrogen evolution test.
Electrochemical Impedance Spectroscopy (EIS)
The EIS spectra of AZ63 and AZI alloys obtained at open circuit potentials after 3600 s immersion are shown in Figure 7.The Nyquist diagrams of the investigated alloys are comprised of one capacitive loop at high frequency and two inductive loops at intermediate and low frequencies.The capacitive loop is related to the property of the electric double layer at the electrode/electrolyte interface [43].Two inductive loops in the fourth quadrant are due to the chemical reaction of Mg + with H2O on the breaking area of the corrosion products film and the desorption of corrosion products respectively [44,45].Since the EIS plots reveal the physical and chemical dynamic processes via the analysis of spectra, a circuit model can be founded to simulate the practical variations on the electrode surface.The electronic components connected in series or in parallel in circuit should correspond to the specific dynamic processes.The dissolving surface model for describing the dynamic processes within the porous corrosion products and the equivalent circuit for fitting the EIS data are shown in Figure 8.Since the EIS plots reveal the physical and chemical dynamic processes via the analysis of spectra, a circuit model can be founded to simulate the practical variations on the electrode surface.The electronic components connected in series or in parallel in circuit should correspond to the specific dynamic processes.The dissolving surface model for describing the dynamic processes within the porous corrosion products and the equivalent circuit for fitting the EIS data are shown in Figure 8.
In Figure 8a, each of the physical/chemical reactions are endowed with equivalent components in series or in parallel connection.The fitting curves, using the equivalent circuit in Figure 8b, are presented in Figure 7.In Figure 8b, R s is the solution resistance, R t and Q dl are the charge transfer resistance and the constant phase element (CPE) of the electric double layer, Q dl is used to substitute the capacitor of the electric double layer on consideration of the deviation effect [46].As illustrated in Figure 8a, the chemical reaction of Mg + can be equivalent to R LMg+ and L Mg+ in series connection [44,45], and then the desorption of corrosion products can be equivalent to R L and L in series connection [1,47].The parallel connection of the components shown in Figure 8b is due to these physical/chemical processes taking place on the surface simultaneously.In Figure 8a, each of the physical/chemical reactions are endowed with equivalent components in series or in parallel connection.The fitting curves, using the equivalent circuit in Figure 8b, are presented in Figure 7.In Figure 8b, Rs is the solution resistance, Rt and Qdl are the charge transfer resistance and the constant phase element (CPE) of the electric double layer, Qdl is used to substitute the capacitor of the electric double layer on consideration of the deviation effect [46].As illustrated in Figure 8a, the chemical reaction of Mg + can be equivalent to RLMg+ and LMg+ in series connection [44,45], and then the desorption of corrosion products can be equivalent to RL and L in series connection [1,47].The parallel connection of the components shown in Figure 8b is due to these physical/chemical processes taking place on the surface simultaneously.
The polarization resistance (Rp), relating to the activity of the investigated sample [48], is inversely proportional to the corrosion rate [44,49].According to the equivalent circuit, the expression of Rp can be written as: Table 3 lists the calculated electrochemical parameters of AZ63 and AZI alloys obtained by fitting the Nyquist plots.Both the polarization resistance Rp and charge transfer resistance Rt can be ranked as Rp,t (AZ63) > Rp,t (AZI1.0)> Rp,t (AZI2.0)> Rp,t (AZI1.5),indicating an active effect of indium in AZ63 alloy, which is consistent with previous findings.In general, the activity of the investigated alloys increases with increasing indium in the Mg substrate (less than 1.5 wt%).
Galvanostatic Discharge
The galvanostatic discharge behaviors of the AZ63 and AZI alloys were investigated by impressing three anodic current densities at 10, 100 and 200 mA•cm −2 , respectively.The different discharge times, 20 h, 2 h, and 1 h, corresponding to the current densities are designed for comparison of the current efficiencies at the equivalent discharge capacity.The increasing current densities The polarization resistance (R p ), relating to the activity of the investigated sample [48], is inversely proportional to the corrosion rate [44,49].According to the equivalent circuit, the expression of R p can be written as: Table 3 lists the calculated electrochemical parameters of AZ63 and AZI alloys obtained by fitting the Nyquist plots.Both the polarization resistance R p and charge transfer resistance R t can be ranked as R p,t (AZ63) > R p,t (AZI1.0)> R p,t (AZI2.0)> R p,t (AZI1.5),indicating an active effect of indium in AZ63 alloy, which is consistent with previous findings.In general, the activity of the investigated alloys increases with increasing indium in the Mg substrate (less than 1.5 wt%).
Galvanostatic Discharge
The galvanostatic discharge behaviors of the AZ63 and AZI alloys were investigated by impressing three anodic current densities at 10, 100 and 200 mA¨cm ´2, respectively.The different discharge times, 20 h, 2 h, and 1 h, corresponding to the current densities are designed for comparison of the current efficiencies at the equivalent discharge capacity.The increasing current densities ranging from 10-200 mA¨cm ´2 aim at studying the discharge behavior of magnesium anodes for different applications [25].The current efficiency given in literature [50] is shown in Equation (4): where M t is the theoretical weight loss corresponding to the discharge capacity and M a is the actual weight loss.The potential-time curves of AZ63 and AZI alloys are presented in Figure 9a-c.
ranging from 10-200 mA•cm −2 aim at studying the discharge behavior of magnesium anodes for different applications [25].The current efficiency given in literature [50] is shown in Equation 4: where Mt is the theoretical weight loss corresponding to the discharge capacity and Ma is the actual weight loss.The potential-time curves of AZ63 and AZI alloys are presented in Figure 9a-c.The potentials of both AZ63 and AZI alloys shift negatively at the outset of the discharge process, then increase to steady values over different periods.The increase of potential over a certain period of discharge is due to the discharge products deposition on the electrode surface, which hinders the discharge process.The consequent steady potential values indicate an established dynamic equilibrium between the formation and desorption of discharge products [50].The fluctuation of potentials becomes apparent with the increase of the discharge current by comparing Figure 9a-c within the steady discharge period.This is probably due to the initiation of localized corrosion and the desorption of α-Mg grains [32].Figure 10 shows the product removed morphologies of AZI1.0 alloy under different discharge conditions.The potentials of both AZ63 and AZI alloys shift negatively at the outset of the discharge process, then increase to steady values over different periods.The increase of potential over a certain period of discharge is due to the discharge products deposition on the electrode surface, which hinders the discharge process.The consequent steady potential values indicate an established dynamic equilibrium between the formation and desorption of discharge products [50].The fluctuation of potentials becomes apparent with the increase of the discharge current by comparing Figure 9a-c within the steady discharge period.This is probably due to the initiation of localized corrosion and the desorption of α-Mg grains [32].Figure 10 shows the product removed morphologies of AZI1.0 alloy under different discharge conditions.
Metals 2016, 6, 65 9 of 14 ranging from 10-200 mA•cm −2 aim at studying the discharge behavior of magnesium anodes for different applications [25].The current efficiency given in literature [50] is shown in Equation 4: where Mt is the theoretical weight loss corresponding to the discharge capacity and Ma is the actual weight loss.The potential-time curves of AZ63 and AZI alloys are presented in Figure 9a-c.The potentials of both AZ63 and AZI alloys shift negatively at the outset of the discharge process, then increase to steady values over different periods.The increase of potential over a certain period of discharge is due to the discharge products deposition on the electrode surface, which hinders the discharge process.The consequent steady potential values indicate an established dynamic equilibrium between the formation and desorption of discharge products [50].The fluctuation of potentials becomes apparent with the increase of the discharge current by comparing Figure 9a-c within the steady discharge period.This is probably due to the initiation of localized corrosion and the desorption of α-Mg grains [32].Figure 10 shows the product removed morphologies of AZI1.0 alloy under different discharge conditions.As shown in Figure 10a, the morphology of the sample over 20 h discharge at 10 mA¨cm ´2 current density is more uniform than that of the other two samples (see Figure 10b,c).The angular caverns in Figure 10b,c indicate that the corrosion tends to develop vertically accompanied by α-Mg detachment [32].The fallen α-Mg without discharge would cause the fluctuation of potentials and decrease the current efficiency.In contrast, the corroded pores in Figure 10a are quite small and shallow, indicating a uniform anodic dissolution process rather than localized detachment [51].The average discharge potentials and current efficiencies of the investigated alloys are listed in Tables 4 and 5.As reported in [25], the more negative discharge potential an anode has, the stronger the discharge activity and the higher the power density of the material.The AZI alloys, especially for AZI1.0 alloy, process more negative potentials than that of the AZ63 alloy as in Table 4, implying that the indium does not only promote the activation discussed in Sections 3.2-5 at free corrosion potential, but also increases the discharge activity of the AZ63 alloy.This result is consistent with Feng's work [52].The discharge potential curves of AZI1.0 alloy demonstrate the most negative values at different anodic current densities among the investigated alloys, implying desirable characteristics as anode material for different applications.In addition, the potential standard deviations of AZI1.0 alloy are relative smaller than others, suggesting a good discharging stability.In this work, the current efficiency improves almost 10 percent by indium alloying at different discharge currents, which is probably due to the refined grains as shown in Figure 2. It is also supported by Zhao's work [53], suggesting that the fine grain promotes the discharge activity of magnesium alloy.Additionally, the enhanced discharge activity by indium alloying may alleviate the self-discharge of magnesium alloy.
The XRD patterns for discharge products of AZI1.0 alloys under different discharge conditions are presented in Figure 11.The discharge products mainly consist of magnesium hydroxide and sodium chloride.No peaks representing zinc and indium compounds are detected, probably due to the low concentration in the Mg substrate.Some peaks of Mg 17 Al 12 can be detected in the corrosion products, probably due to the detachment of the β-Mg 17 Al 12 phase on the grain boundaries.The peak intensity of the NaCl phase increases with the discharge current density, indicating an enhanced chloride ion absorption effect caused by indium, which is consistent with Bessone's work [54], suggesting that the indium promotes the absorption of chloride ion in accelerating anode dissolution.This capability makes AZI alloys more suitable for serving as anode material at large discharge current density.
Figure 3 .
Figure 3.A scanning electron microscopy (SEM) image of AZI1.0 alloy and the energy-dispersive Xray spectroscopy (EDS) results of different locations.
Figure 2 .
Figure 2. Optical morphologies of AZ63 (a), AZI1.0 (b), AZI1.5 (c) and AZI2.0(d) alloys.An SEM image of AZI1.0 alloy and the EDS results of different locations are presented in Figure3.The microstructure of AZI1.0 alloy consists of α-Mg, precipitates in α-Mg and on grain boundaries.The black dot (location B) locates in α-Mg and the eutectic α-Mg phase (location D) contain a higher content of indium and lower content of aluminum and zinc on comparing with the results of β-Mg17Al12 (location C) and white dot (location A).The substrate (location E) has a lower concentration of aluminum, zinc, and indium than the nominal composition of Al 6 wt%, Zn 3 wt% and In 1.0 wt% owing to the segregation effect.Therefore, it can be deduced that the indium favors segregation from the Mg substrate and separate precipitation in α-Mg or in the eutectic α phase.The poor solubility of indium in aluminum and zinc promotes the grain refinement via the segregation process, which provides an interpretation of the phenomenon shown in Figure2.
Figure 3 .
Figure 3.A scanning electron microscopy (SEM) image of AZI1.0 alloy and the energy-dispersive Xray spectroscopy (EDS) results of different locations.
Figure 3 .
Figure 3.A scanning electron microscopy (SEM) image of AZI1.0 alloy and the energy-dispersive X-ray spectroscopy (EDS) results of different locations.
Figure 4 .
Figure 4. Hydrogen evolution volume (a) and hydrogen evolution rate (b) as a function of immersion time of Mg-Al-Zn and Mg-Al-Zn-In alloys in 3.5 wt% NaCl at 25 ± 1 °C .
Figure 5 .
Figure 5.The variation of open circuit potential (OCP) for AZ63 and AZI alloys in 3600 s in 3.5 wt% NaCl at 25 ± 1 °C .
Figure 4 .
Figure 4. Hydrogen evolution volume (a) and hydrogen evolution rate (b) as a function of immersion time of Mg-Al-Zn and Mg-Al-Zn-In alloys in 3.5 wt% NaCl at 25 ˘1 ˝C.
Figure 4 .
Figure 4. Hydrogen evolution volume (a) and hydrogen evolution rate (b) as a function of immersion time of Mg-Al-Zn and Mg-Al-Zn-In alloys in 3.5 wt% NaCl at 25 ± 1 °C .
Figure 5 .
Figure 5.The variation of open circuit potential (OCP) for AZ63 and AZI alloys in 3600 s in 3.5 wt% NaCl at 25 ± 1 °C .Figure 5.The variation of open circuit potential (OCP) for AZ63 and AZI alloys in 3600 s in 3.5 wt% NaCl at 25 ˘1 ˝C.
Figure 5 .
Figure 5.The variation of open circuit potential (OCP) for AZ63 and AZI alloys in 3600 s in 3.5 wt% NaCl at 25 ± 1 °C .Figure 5.The variation of open circuit potential (OCP) for AZ63 and AZI alloys in 3600 s in 3.5 wt% NaCl at 25 ˘1 ˝C.
Figure 6 .
Figure 6.Potentiodynamic polarization curves of AZ63 and AZI alloys after 1 h immersion in 3.5 wt% NaCl at 25 ± 1 °C (a) and sketch map of polarization curve for Tafel extrapolation (b).
Figure 6 .
Figure 6.Potentiodynamic polarization curves of AZ63 and AZI alloys after 1 h immersion in 3.5 wt% NaCl at 25 ˘1 ˝C (a) and sketch map of polarization curve for Tafel extrapolation (b).
Figure 7 .
Figure 7. Nyquist plots and fitting curves of (a) AZ63 and AZI alloys and (b) details of vague section in (a) after 1 h immersion in 3.5 wt% NaCl at 25 ± 1 °C.
Figure 7 .
Figure 7. Nyquist plots and fitting curves of (a) AZ63 and AZI alloys and (b) details of vague section in (a) after 1 h immersion in 3.5 wt% NaCl at 25 ˘1 ˝C.
Figure 8 .
Figure 8.The dissolving surface model for describing the physical/chemical characteristics on the surface of alloys (a) and equivalent circuit (b).
Figure 8 .
Figure 8.The dissolving surface model for describing the physical/chemical characteristics on the surface of alloys (a) and equivalent circuit (b).
Table 1 .
Actual chemical composition of the AZ63 and AZI alloys analyzed by inductively coupled plasma mass spectrometry (ICP-MS) (wt%).
Table 2 .
Corrosion parameters of AZ63 and AZI alloys derived from the polarization curves.
Table 2 .
Corrosion parameters of AZ63 and AZI alloys derived from the polarization curves.
Table 3 .
Electrochemical parameters of AZ63 and AZ63I alloys obtained by fitting the electrochemical impedance spectra.
Table 3 .
Electrochemical parameters of AZ63 and AZ63I alloys obtained by fitting the electrochemical impedance spectra.
Table 4 .
Average discharge potentials of AZ63 and AZI alloys under different conditions.
Table 5 .
Current efficiencies of AZ63 and AZI alloys under different conditions. | 9,318.4 | 2016-03-17T00:00:00.000 | [
"Materials Science"
] |
Hollow microspheres as targets for staged laser-driven proton acceleration
A coated hollow core microsphere is introduced as a novel target in ultra-intense laser–matter interaction experiments. In particular, it facilitates staged laser-driven proton acceleration by combining conventional target normal sheath acceleration (TNSA), power recycling of hot laterally spreading electrons and staging in a very simple and cheap target geometry. During TNSA of protons from one area of the sphere surface, laterally spreading hot electrons form a charge wave. Due to the spherical geometry, this wave refocuses on the opposite side of the sphere, where an opening has been laser micromachined. This leads to a strong transient charge separation field being set up there, which can post-accelerate those TNSA protons passing through the hole at the right time. Experimentally, the feasibility of using such targets is demonstrated. A redistribution is encountered in the experimental proton energy spectra, as predicted by particle-in-cell simulations and attributed to transient fields set up by oscillating currents on the sphere surface.
Introduction
Laser-driven ion acceleration is an area of research that currently attracts significant scientific interest.The ion beams produced in these experiments have several attractive characteristics, such as very low transverse emittance and small virtual source size [1] together with a short pulse duration (at the source).Proposed applications of this possibly compact ion beam source include ion radiotherapy for cancer treatment [2,3], isotope production for medical imaging techniques [4], proton radiography of inertial fusion plasmas [5] and implementation as injectors for future ion accelerators.
In a typical experiment, a high power laser pulse of short duration, ≤ ps, is focused on the surface of a thin foil to an intensity exceeding 10 19 W/cm 2 .The laser interacts with target electrons and a population of hot electrons with a Maxwellian temperature of typically a few MeV is generated.A large fraction of these electrons traverse the target and build up exceptionally high electrostatic fields, ∼ TV/m, at the rear surface of the foil, in a direction normal to the target surface.Atoms on the target surface are rapidly field-ionized and accelerated.This is referred to as Target Normal Sheath Acceleration (TNSA) [6].Because of the presence of hydrocarbon and water vapour on the surfaces of the foils (at typical vacuum conditions ∼ 10 −5 mbar), protons are the dominating ion species.Due to their high chargeto-mass ratio, protons are more efficiently accelerated than heavier ions.
The acceleration of protons behind the target foil is very rapid, due to the high field strength.However, this field is present during a short time only, limiting the maximum energy reached by the protons.The energy spectra of these proton beams exhibit a longitudinal emittance comparable to that of conventional accelerators, with a quasi-exponential shape and a distinct cut-off energy [6].The divergence of the proton beam is typically ∼ 30 • half angle.Significant theoretical and experimental efforts have been devoted to the exploration of means to boost the maximum proton energy without the use of increasingly larger laser systems [7,8].
Practical limitations in laser size and costs, laser materials and repetition rate are craving for alternative or modified laser acceleration schemes and targets to further increase the peak proton energy.It has been found that the maximum proton energy and laser-to-ion energy conversion is enhanced by the use of ultra thin targets in combination with laser pulses of high temporal contrast [9].Staging, i.e. combining two or more accelerator stages in series, may be a way to post-accelerate a selected portion of the protons accelerated in a preceding TNSA stage and thus raise the maximum proton energy and reduce the energy spread [10].In parallel, extensive studies on controlling beam parameters such as collimation and means to produce quasi-monoenergetic energy distributions have been carried out [11,12].In particular, mass-limited targets can be used to reduce the energy spread of the protons [13,14,15].Curved target foils [16], electrostatic charging of specially shaped targets [17] or separate focusing cylinders [18], enable spatial shaping of the proton beam.
In addition, experiments and numerical modelling have shown that while part of the hot electron distribution is passing through the target foil, a significant part is spreading also laterally along the target.McKenna et al. [19] found that, when these electrons reach the target edges, after a time determined by the geometrical size of the target and the lateral electron transport velocity, they establish quasi static electric fields, similar to the one produced behind the target during TNSA, resulting in ion acceleration from the edges.Normally, this mechanism just represents a loss of absorbed laser energy, which is converted to hot electrons but not contributing to the quasi static sheath built up at the target rear side.In a recent study [15], however, using very small diameter targets, the refluxing of transversely spreading electrons were found to enhance and smooth the sheath field for TNSA from the rear surface.
In this paper we discuss the use of hollow microspheres, as novel targets for laser acceleration of protons.With this target, several of the above features are combined, which may facilitate improved laser-to-proton efficiency, eventually leading to increased proton energy and reduced divergence.Lateral electron transport is here utilized to set up a post-acceleration field for staged acceleration.
The basic idea behind our approach is to use hollow microspheres with diameters of about 10 − 50 µm and sub-micrometer wall thickness.In each sphere a small circular opening is made.We refer to the position of this opening as the "north pole" (see figure 1).A short pulse laser irradiates the sphere at the "south pole", where TNSA takes place.The primary proton direction will be along the z-axis, defined as the axis from the south pole passing through the north pole of the sphere.The spherical surface, with TNSA taking place from the concave side, results in a collimated or even converging proton beam.Therefore, all the protons can be made to pass through the opening at the north pole.In addition -and this is the key point -electrons leaving the laser focus laterally in any direction along the sphere surface, will be guided on different longitudes over the sphere and eventually reach the edge of the opening at the north pole simultaneously after some given time.A very strong quasi static electric field is then formed in the opening, along the z-axis.This quasi-static field will post-accelerate protons passing through the opening at the correct time.A glass hollow microsphere, with an ∼ 50 nm silver coating on its ≤ 1µm thick wall is struck by a laser pulse at the "south pole".TNSA protons are emitted though a circular opening at the "north pole" and post-accelerated In our approach to test this idea, theoretical and experimental studies go hand in hand: To test the experimental feasibility, we perform experiments, at the Lund High Power Laser Facility, with commercially available hollow microspheres of 50 µm diameter 1 .The walls of these spheres are made of glass with a thickness of 0.5 − 1 µm, and coated with a ∼ 50 nm silver layer (see inset in figure 1), which facilitates optical alignment and guiding of electrons along the sphere surface.Openings of different sizes are laser micromachined.We present these experiments in Section 2, including target preparation, fixation and alignment in the experimental setup together with first results.
In parallel, we perform Particle-In-Cell (PIC) simulations of hollow conducting spheres with openings, irradiated by short laser pulses.These simulations, presented in Section 3, qualitatively describe the dynamics involved.
We discuss the outlook and prospect for further experiments in Section 4, and conclude in Section 5.
Target preparation
Isolated spheres are suspended into a nylon mesh grid, where both the front and back side of each sphere is accessible for further processing.Openings in the spheres are made with a confocal microscope based laser micromachining setup.In a two step process the silver coating is ablated in a well defined region on the sphere surface, followed by ablation of the glass substrate.This is done with a lateral resolution of about 2 µm utilizing femtosecond laser system running at 10 Hz repetition rate.Real time target observation and a high numerical aperture in the setup facilitate both, high drilling accuracy and control in the transverse direction while at the same time preventing the TNSA surface inside the sphere from being damaged in the procedure.
Afterwards the target, which is still fixed in the nylon mesh, is mounted into a holder.This also accommodates a gold mesh, placed close to the sphere's north pole, to extract information about proton trajectories, as will be discussed later.
Experimental setup
The Lund multi-terrawatt laser, which is a Ti:Sapphire system based on chirped pulse amplification (CPA), is used for this experiment.In this experiment, it is tuned to 850 mJ pulse energy at 805 nm centre wavelength with a typical pulse duration of 42 fs FWHM.Due to the sub micron thickness of the sphere walls, and the thin silver coating on them, an amplified spontaneous emission (ASE) contrast better than 10 8 some tens of picoseconds prior to the pulse peak is desirable.
To optimize contrast on a very fast time scale, the convergent, horizontally polarized, laser pulse hits a dielectric plasma mirror (PM) at Brewster's angle (3.0 ± 0.2) mm prior to the primary focus.At this location, the plasma mirror is operating at (8.5 ± 1.1)× 10 15 W/cm 2 spatially averaged peak intensity over the beam diameter (I centre /e 2 ).When activated, it deflects the laser beam normal onto the target (see figure 2).The laser pulse (red) is impinging on the PM at Brewster's angle, which reflects the pulse normal onto the target.After some centimetres of free passage the resulting particle beam (blue) reaches a CR-39 detector plate, which has the function of providing a lateral image of the beam profile and at the same time enabling a fraction of the beam to enter slit of a subsequent permanent magnet spectrometer.After traversing the field, a vertically dispersed spectrum is recorded on a second CR-39 plate.
Plasma mirror characteristics have been investigated by many groups (see e.g.[20,21,22]).A PM assembly, similar to the one applied in the present experiment was utilized by Neely et al. [9], using the Lund laser system.In that experiment proton beams from Al foil targets as thin as 20 nm were observed.Our experiment relates to that one as the very thin silver coating on our target surface is of comparable thickness.Ray tracing, taking a 22 nm FWHM broad Gaussian spectrum and a p-polarized converging beam into account, predicts a contrast increase by a factor 100 in our experiment (assuming a maximum reflectivity ∼ 50 % from the PM [22]).
Together with a 3 rd order autocorrelation contrast measurement, 2 ps prior to the main pulse, a contrast better than 10 6 on target can be guaranteed for an intact rear TNSA surface during the first phase of acceleration.This contrast is due to non-perfect compression in the CPA chain and should not be mistaken for the ASE contrast, encountered on a longer picosecond timescale prior to the main pulse, which is of the order 10 11 on target.
The infrared (IR) pulse is focused by an f /3 off-axis parabolic mirror (OAP) down to a 4.4 µm spot diameter (intensity FWHM), containing 39% of the total energy and reaching a peak intensity of ∼ 3×10 19 W/cm 2 .Target positioning is accomplished by a confocal imaging system: An expanded HeNe laser beam is superimposed with the IR and a confocal reflection from the silver coated target surface is imaged, utilizing the OAP as an objective.
The detector system for protons, designed to simultaneously provide a spatial beam profile and a spectrum, is depicted in figure 2. It consists of a primary CR-39 plate at some centimetres distance from the target, which is utilized to characterize the transverse spatial beam profile.This plate is covered by a 6 µm Al foil, which stops protons below 0.5 MeV [23].In its centre, a ∼ 4 mm diameter hole enables protons close to the target normal axis to continue to the momentum dispersive part of the detector and access an 88 µm wide entrance slit of a permanent magnet spectrometer, where they traverse a 51 mm long and 818 mT strong effective field.The vertically dispersed proton spectrum is then recorded by a second CR-39 plate with an accuracy of ±0.2 MeV at 4 MeV proton energy.By this arrangement spectra can be correlated to the lateral position within the particle beam.
Experimental measurements and results
After optimization of the plasma mirror working distance with the help of proton beams originating from flat 400 nm thick Al foil targets, shots were taken on machined and unmachined microspheres as well as on 0.9 µm thick Mylar foil targets.
Several shots were taken on machined microspheres with holes of typically 18 µm diameter.An example of a proton beam imprint originating from such a sphere on the spatial CR-39 plate, covered by 6 µm Al and located at (24.0 ± 0.5) mm distance from the target, can be seen in figure 3.
One can see a slightly oval beam profile.This slight asymmetry can be attributed to either a non uniform Ag coating on the sphere's south pole or to grazing incidence of the laser energy around the z-axis due to the curved sphere surface: At a given latitude, the linear polarization of the laser pulse will be incident as p or s polarization on the spherical surface, depending on the azimuth angle.Polarization effects will also facilitate different strengths of the surface currents, depending on the longitude where electrons propagate around the sphere [24].
One can further see the imprint of the earlier mentioned Au mesh on the spatial proton beam image, which is introduced into the proton beam path close to the target.This rectangularly shaped 4 µm thick mesh features squared holes with a nominal aperture of 11 × 11 µm 2 and a lateral wire widths of 5 µm.It is fixed at (165 ± 2) µm distance from the rim of the opening of the microsphere.The motivation to introduce this mesh is twofold: Firstly, it verifies, that protons contributing to the signal on the spatial CR-39 plate are not due to edge emission at the north pole, which would have resulted in a distinctively different shadow image.Secondly, thanks to this mesh the protons can be shown to come from a virtual source located (54 ± 12) µm from the inner sphere TNSA surface very close to the opening.This estimate is valid for the majority of particles traversing the opening at late times, where we expect the surface oscillations to have vanished.In order to visualise effects of a transient field emanating from the rim of the microsphere opening, one would have to filter this image to the appropriate energy.However, as will be discussed later, with our present experimental parameters, we expect only the very fastest particles to be affected by these fields, so a filtered signal would become very weak.
By neglecting Coulomb repulsion between protons in the beam, and tracing proton trajectories further back, one can make a rough estimate about the proton emission surface.We find that proton emission seems to occur from a solid angle covering ≈ 140π msr of the inner shell, measured from the centre of the sphere.This compares to a focal spot, covering ≈ 8π msr (intensity FWHM) and is consistent with previous observations of the TNSA surface source area being considerably larger than the laser focus [25,26].The virtual proton source in a flat foil TNSA experiment [27] was pinpointed to several hundred microns distance before the target front side, i.e. the laser irradiated side.A spherical target, such as the one used for isochoric heating [28,29], combines ballistic proton propagation with target curvature and an altered electron distribution.A particle focus near the north pole is consistent with these findings.
To further verify that the protons are indeed emitted from the sphere interior, we irradiate closed hollow microspheres that have not gone through the laser micromachining stage.Lacking hydrogenic contaminations, such as water vapour, on the interior surface of the sphere, TNSA of protons is not expected to occur there.Indeed no protons with energies sufficient to be recorded by our diagnostics (E proton > 0.8 MeV) are observed.In addition, lacking the opening and retaining the silver coating, return currents will prevent the formation of a strong edge field at the north pole.This will be further discussed in section 3.1.
Spectra from sphere targets with openings between 18 and 20 µm where taken and a typcial spectrum can be seen in figure 4 a).A clear high energy "cut off" is not visible in the data, and values above 8 MeV are ignored to assure that the data presented here are at least one order of magnitude above the noise level.Reference shots are taken on flat 0.9,µm thick Mylar foil targets.A typical proton spectrum can be seen in figure 4 b).Even though the laser absorption and particle yield is expected to be different as compared to the silver coated glass surface of the microspheres (∼ 30 times higher particle number), those shots provide reference spectra, enabling the identification of special features in the microsphere spectra.
All microsphere spectra from the experimental study look very similar, but with an integrated particle yield varying by a factor 4, which is twice as much as for the Mylar targets.There are indeed features present, that could possibly be attributed to a secondary field interaction and a post-acceleration by a secondary field at the sphere opening, which is still prevailing during arrival of the fastest protons.In all microsphere spectra there is a slight modification of particle yield between 5.5 to 6.5 MeV, where the counts per energy bin remain nearly constant (Figure 4 a), red arrow) as compared to the strictly decreasing dual exponential decay observed from the flat foil targets, shot under the same experimental conditions (Figure 4 b)).(Likewise, previous experiments carried out with various planar targets using the Lund laser system [30] have resulted in spectra very similar to the ones presented here for Mylar foils.)The plateau in the microsphere spectra can be understood in terms of a spectral redistribution of the order 1 MeV of proton energies as a result of a secondary field interaction near the opening.Looking at the experiment the other way around, we can regard the protons as a probe for intra cavity fields as well.A two-temperature curve fit provides comparable temperatures for low energies for both targets ((0.6 ± 0.1) MeV and (0.5 ± 0.3) MeV for spheres and Mylar, respectively) but larger values for the high energy component from the microsphere spectra ((2.0 ± 0.1) MeV) as compared to the foil spectra ((1.6 ± 0.3) MeV).
In the following section we will discuss modelling of the present experiment, giving insight into the dynamic processes of hot electron transport on the sphere surface and motivate the observed spectral features.Beyond that, we will consider means to enhance the postacceleration mechanism.
Simulations
To improve our understanding of the underlying dynamic processes, we have carried out a number of numerical experiments with the parallel PIC code Extreme Laser Matter Interaction Simulator (ELMIS), developed by the SimLight group [31].ELMIS is a relativistic code, which uses a parallel Fast Fourier Transform (FFT) technique to solve Maxwell's equations.
The processes involved in the setup are essentially of two-dimensional (2D) nature.Thus, we perform 2D simulations in order to retain the appropriate space and time scales, while still not compromising the physical outcome.However the 2D nature restricts the interpretation of these results to a qualitative level as scaling laws behave differently in two dimensions as compared to three dimensional space.
In the simulations a linearly polarized TEM00 laser pulse (where electric field lies in the plane of simulation) with τ l = 50 fs duration (Gaussian profile, FWHM) and a total energy of 1 J is focused to a 10 µm spot on the target surface.The laser field reaches a maximum field strength equal to ∼ 3.5 a rel , where a rel = 2πmc 2 /(eλ ) ≈ 3.2 × 10 12 V/m, e and m are the electron charge and mass, respectively, c the speed of light and λ = 1 µm is the laser wavelength.This corresponds to a maximum intensity of 2 × 10 19 W/cm 2 .The target consists of a hollow metal sphere with D S = 32 µm diameter and a 10 µm opening.It has a 0.5 µm thick wall, which in our 2D PIC simulation was considered as a (cylindrical) overdense plasma with the electron density 50 n crit and an Au 6+ ion density of 50 n crit /6, where n crit = πmc 2 /(λ 2 e 2 ) ≈ 1.1 × 10 21 cm −3 is the critical density for λ = 1 µm.
To simulate TNSA, we consider a 100 nm contaminant layer of protons and electrons with a density 10 n crit , covering the internal surface of the target.The simulation is done for a box size of 64 µm × 64 µm (4096 × 4096 cells) with absorbing boundaries for the fields and accumulating boundaries for the particles.The initial plasma temperature is set to 16 keV, and the cell size is 15.625 nm, which is approximately 4 times the Debye length for the considered plasma.In the simulation 100 virtual particles per cell are used for the electrons and Au 6+ ions and 20 particles per cell are used for the protons; the total number of virtual particles is 4 × 10 7 .The time steps are set to (2π/ω p )/16 ≈ 3 × 10 −17 s, where ω p = (4πe 2 50n crit /m) 1/2 is the plasma frequency.The duration limit of the simulation is set to when the leading protons in the accelerating bunch reach the simulation box boundary.
Simulation results
As the laser pulse reaches the target at the south pole it is reflected from the outer overdense plasma surface, initiating electron heating.This time is set to t = 0 in the simulation.Subsequently protons undergo TNSA from the internal surface and move towards the north pole.A part of the heated electrons leave the plasma, thereby producing an electric field retaining part of the electrons to the target surface.Those trapped hot electrons move along the plasma layer, recirculating near the wall and conserving their momentum in the direction along the surface.Eventually they will arrive at the edge of the sphere opening, where they will leave and return to the plasma layer, thus setting up a charge separation field.This process, albeit for a flat target, was discussed and experimentally observed by McKenna et al. [19].Due to the relativistic intensity of the laser pulse, the electrons move with a speed close to c, thus forming a bunch size comparable to the longitudinal extension of the laser pulse (cτ l = 15 µm).This bunch does effectively not carry any charges due to cold return currents within the plasma.However, due to the absence of a return current at the edges, the bunch produces a charge separation field.The simulation shows that, this wave then reflects from the opening at the north pole and heads back to the south pole, where it refocuses, passes through The dynamics are illustrated in figure 5 a), where the radial electric field at 3 µm distance from the sphere surface is plotted over time and latitude angle (180 • corresponds to the south pole while 0 • represents the north pole).Figure 5 b) depicts the electric field component E z along the z-axis, using red and blue colours.The evolution of the proton density along the z-axis is visualized by the grey distributions, in equidistant frames.In both pictures, one can identify large charge separation fields, set up at the rim due to the absence of a return current.
One can identify certain frames where the field becomes large, alternatingly at the north and south pole.The periodicity can be estimated as T d ≈ πD S /v e , where v e is the velocity of the surface dipole wave.From our simulations we obtain T d ≈ 380 fs, corresponding to a frequency of 2.64 THz and an electron wave velocity of v e ≈ 0.9 c.
When the surface dipole wave reaches the opening it produces electric field maxima at the north pole at times t 1 , t 2 , t 3 (see labels in figure 5 b)), which can be utilized for proton post-acceleration, i.e. staging.Those maxima exist only during relatively short periods.Thus, the timing of the proton bunch and the electron dipole wave is important.In order to have an effective post-acceleration, one would like protons to be pre-accelerated by the TNSA mechanism in such a way that they traverse the vicinity of the north pole when a maximum in accelerating field is present, resulting in a redistribution of the proton energy spectrum.It should be noted that, the surface dipole wave oscillation amplitude decays which implies that fewer oscillations prior to the proton passage provide a stronger postacceleration.Additionally to the maxima, a weaker quasi constant accelerating background field occurs at the opening from t 1 onwards, showing no significant decay on this time scale.Post-acceleration by this background field, which occurs due to a charge up of the sphere during laser irradiation, does not require an accurate timing for the proton passage.However, it provides smaller field strengths as compared to the surface dipole wave oscillations.The effect from both contributions can be seen in figure 6 in the upper marked region, labelled "post-acceleration stage", where the evolution of the proton energy distribution is displayed as a function of time.(Note that, due to the limited number of particles, modulations in these spatially integrated spectral distributions manifest themselves as lines that could easily be misinterpreted as trajectories.) The final state of this simulation is summarized in figure 7 together with the emission angle-energy distribution in the inset, taking all protons into account.From this figure it can be seen that, the considered geometry provides additional energy of several MeV for some part of protons which form a bunch with a small divergence of about 8 • half angle.
Discussion and outlook
In the present simulation, a fraction of the protons, with kinetic energies around 9 MeV, reach the opening at time t 3 and pass through a spike in accelerating field of the post-acceleration stage.However, the acceleration exerted on the protons by this field is relatively small.In 2D simulations the charge increase due to refocusing of electron trajectories at the north pole is not fully reproduced.Even though qualitatively correct, the simulations are therefore expected to underestimate the field effects.However, a larger fraction of the particles could be post-accelerated if the relative timing between the surface dipole wave and the proton arrival could be controlled.This could be done by either making use of smaller spheres or oblate spheroidal targets to compensate for their different propagation velocities, or by simply increasing the proton temperature in the TNSA stage as they are still moving non-relativistically. In the latter case an energy of 9 MeV is sufficient for protons to reach the north pole at time t 3 , while 37 MeV would be required for a passage at t 2 and GeV energies for t 1 .TNSA acceleration to the GeV regime is not feasible, but 37 MeV should be within reach of short pulse laser systems at intensities below I laser = 10 21 W/cm 2 [32,33].In addition, one might be able to reduce the velocity of the surface electron wave by surrounding the sphere with an appropriate dielectric.
With spheroids the advantages of a spheres are conserved, but the relative distance for protons and electrons to propagate can be varied.Simulations with oblate spheroidal surfaces, performed as for the spherical targets above, show both stronger acceleration and a narrower collimation of the protons.Experimentally however, such targets are not as easily available as spheres.
In the experiment, we irradiated the target at normal incidence to obtain maximum symmetry and facilitate direct comparison with our simulations.However, it is well known that by irradiating the target with p-polarized light at an angle, the efficiency of coupling laser energy to the plasma increases.This should, in our case, enhance both TNSA at the south pole and drive a significantly stronger transverse electron current along the target surface [24].An extension of both simulation and experimental geometry to allow for non-normal incidence irradiation will be a topic of further study.
Finally, we would like to point out that our target has additional interesting features.One of them is an intermediate particle focus slightly outside the spatial boundaries of the target.This could be used for experiments in fundamental physics that require high proton flux, inherently synchronized with a high power laser beam line.Additionally, as the microsphere acts as a cavity for electron surface dipole waves, it could provide efficient means to produce THz radiation with high intensity lasers.In such an experiment almost all the incident laser energy can be absorbed by irradiating the sphere through the opening at the north pole, launching an electron surface wave from the inside.
Conclusions
We have introduced a new scheme for staged laser-driven proton acceleration, using hollow microspheres as targets.On one side of a microsphere, protons are accelerated by TNSA from the concave inside surface of the sphere.Laser heated electrons that are spreading transversely in the target, as a charge wave, are refocused on the opposite side of the sphere, where they produce a strong but transient charge separation field in an opening located there.Protons passing through the opening at the correct time can thus be post accelerated.We have done two-dimensional PIC simulations that confirm that this process indeed occurs, and that the electrons spread over the sphere as a charge wave.This wave is found to oscillate back and forth over the sphere while decaying in amplitude, forming charge separation fields in the opening at regular intervals.These simulations also show that protons arriving at the correct time, i.e. those protons that have right kinetic energy, are post accelerated.Experimentally we have demonstrated the technical feasibility of preparing and irradiating this type of targets.In addition, the preliminary results show some signatures of post acceleration, although the timing between the electron charge wave and the TNSA protons were far from optimal in this first experiment.Further work with improved relative timing will be needed to fully explore the potential of this new scheme and target geometry.
Figure 1 .
Figure1.A glass hollow microsphere, with an ∼ 50 nm silver coating on its ≤ 1µm thick wall is struck by a laser pulse at the "south pole".TNSA protons are emitted though a circular opening at the "north pole" and post-accelerated
Figure 2 .
Figure 2.The laser pulse (red) is impinging on the PM at Brewster's angle, which reflects the pulse normal onto the target.After some centimetres of free passage the resulting particle beam (blue) reaches a CR-39 detector plate, which has the function of providing a lateral image of the beam profile and at the same time enabling a fraction of the beam to enter slit of a subsequent permanent magnet spectrometer.After traversing the field, a vertically dispersed spectrum is recorded on a second CR-39 plate.
Figure 3 .
Figure 3. Proton beam profile on a CR-39 plate showing a magnified Au mesh image, recorded (24 ± 0.5) mm distance from the target
Figure 4 .
Figure 4. Normalized proton spectra from a Microsphere target (a)) and a Mylar plane foil (b)); The red arrow in a) indicates a region of constant yield in contrast to the strictly decreasing number density from plane foil TNSA experiments, such as the one depicted in b); black lines are a dual temperature guide for the eye
Figure 5 .
Figure 5. a) Radial electric field strength, encountered at 3 µm distance from the surface as function of time b) Electric field component E z along the z-axis; grey plots depict the proton density evolution along the z-axis in equidistant frames
Figure 6 .Figure 7 .
Figure 6.spatially integrated proton energy distribution along the z-axis with color coded number density and its evolution in time | 7,438.2 | 2011-01-21T00:00:00.000 | [
"Physics",
"Engineering"
] |